You are an expert at summarizing long articles. Proceed to summarize the following text:

DOD is one of the largest federal agencies with its budget representing over half of the entire federal government’s discretionary spending. For fiscal year 2010, Congress appropriated over $694 billion for DOD. This included $530 billion in regular appropriations for base needs and about $164 billion in regular and supplemental appropriations for contingency operations in Iraq, Afghanistan, and other locations. As of June 2010, DOD had received about $1 trillion since 2001 to support contingency operations. The department is currently facing near-term and long-term internal fiscal pressures as it attempts to balance competing demands to support ongoing operations, rebuild readiness following extended military operations, and manage increasing personnel and health care costs and significant cost growth in its weapons systems programs. For more than a decade, DOD has dominated GAO’s list of federal programs and operations at high risk of being vulnerable to fraud, waste, abuse, and mismanagement. In fact, all the DOD programs on GAO’s High-Risk List relate to business operations, including systems and processes related to management of contracts, finances, the supply chain, and support infrastructure, as well as weapon systems acquisition. Long-standing and pervasive weaknesses in DOD’s financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the financial status and cost of DOD activities to Congress and DOD decision makers; (2) adversely affected its operational efficiency in business areas, such as major weapons system acquisition and support and logistics; and (3) left the department vulnerable to fraud, waste, and abuse. Detailed examples of these effects are presented in appendix I. DOD is required by various statutes to improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the financial information needs of agency management and oversight bodies, and to produce audited financial statements. Collectively, these statutes required DOD to do the following: Establish a leadership and governance framework and process, including a financial management improvement plan or strategy (over time the department’s strategy evolved into the FIAR Plan, which ultimately became a subordinate plan to the department’s Strategic Management Plan) for addressing its financial management weaknesses and report to Congress and others semi-annually on its progress. Concentrate the department’s efforts and resources on improving the department’s financial management information. Systematically tie actions to improve processes and controls with business system modernization efforts described in the business enterprise architecture and enterprise transition plan required by 10 U.S.C. § 2222. Limit the resources the department spends each year to develop, compile, report, and audit unreliable financial statements. Submit an annual report to defense committees, the Office of Management and Budget (OMB), the Department of the Treasury (Treasury), GAO, and the DOD Inspector General (DOD IG) concluding on whether DOD policies, procedures, and systems support financial statement reliability, and the expected reliability of each DOD financial statement.
Certify to the DOD IG whether a component or DOD financial statement for a specific fiscal year is reliable. Following DOD’s assertion that a financial statement is reliable, DOD may expend resources to develop, compile, report, and audit the statement and the statements of subsequent fiscal years. Because of the complexity and magnitude of the challenges facing the department in improving its business operations, GAO has long advocated the need for a senior management official to provide strong and sustained leadership. Recognizing that executive-level attention and a clear strategy were needed to put DOD on a sustainable path toward successfully transforming its business operations, including financial management, the National Defense Authorization Act (NDAA) for fiscal year 2008 designated the Deputy Secretary of Defense as the department’s Chief Management Officer (CMO), created a Deputy CMO position, and designated the undersecretaries of each military department as CMOs for their respective departments. The act also required the Secretary of Defense, acting through the CMO, to develop a strategic management plan that among other things would provide a detailed description of performance goals and measures for improving and evaluating the overall efficiency and effectiveness of the department’s business operations and actions underway to improve operations. To further draw the department’s attention to the need to improve its strategy for addressing financial management weaknesses and achieve audit readiness the NDAA for Fiscal Year 2010 made the FIAR Plan a statutory mandate, requiring the FIAR Plan to include, among other things specific actions to be taken and costs associated with (a) correcting the financial management deficiencies that impair DOD’s ability to prepare timely, reliable, and complete financial management information; and (b) ensuring that DOD’s financial statements are validated as ready for audit by no later than September 30, 2017, and actions taken to correct and link financial management deficiencies with process and control improvements and business system modernization efforts described in the business enterprise architecture and enterprise transition plan required by 10 U.S.C. § 2222. Consistent with the priorities announced by the DOD Comptroller in August 2009, the act also focused the department’s improvement efforts on first ensuring the reliability of the department’s budgetary information and property accountability records for mission-critical assets. In addition, the act directed DOD to report to congressional defense committees no later than May 15 and November 15 each year on the status of its FIAR Plan implementation. Furthermore, the act required that the first FIAR Plan issued following enactment of this legislation (1) include a mechanism to conduct audits of the military intelligence programs and agencies and submit the audited financial statements to Congress in a classified manner and (2) identify actions taken or to be taken by the department to address the issues identified in our May 2009 report on DOD’s efforts to achieve financial statement auditability. Over the years, the department has initiated several broad-based reform efforts, including the 1998 Biennial Strategic Plan for the Improvement of Financial Management within the Department of Defense and the 2003 Financial Improvement Initiative, intended to fundamentally transform its financial management operations and achieve clean financial statement audit opinions. 
In 2005, DOD’s Comptroller established the DOD FIAR Directorate to develop, manage, and implement a strategic approach for addressing the department’s financial management weaknesses and achieving auditability and to integrate those efforts with other improvement activities, such as the department’s business system modernization efforts. The first FIAR Plan was issued in December 2005. DOD’s FIAR Plan defines DOD’s strategy and methodology for improving financial management and controls, and summarizes and reports the results of the department’s improvement activities and progress toward achieving financial statement auditability. Further, the FIAR Plan has focused on achieving three goals: (1) implement sustained improvements in business processes and controls to address internal control weaknesses, (2) develop and implement financial management systems that support effective financial management, and (3) achieve and sustain financial statement audit readiness. To date, the department’s improvement efforts have not resulted in the fundamental transformation of DOD’s financial management operations necessary to resolve the department’s long-standing financial management weaknesses; however, some progress has been made and the department’s strategy has continued to evolve. While none of the military services have obtained unqualified (clean) audit opinions on their financial statements, some DOD organizations, such as the Army Corps of Engineers, the Defense Finance and Accounting Service, the Defense Contract Audit Agency, and the DOD IG, have achieved this goal. Moreover, some DOD components that have not yet received clean audit opinions, such as the Defense Information Systems Agency (DISA), are beginning to reap the benefits of strengthened controls and processes gained through ongoing efforts to improve their financial management operations and reporting capabilities. For example, according to DISA’s Comptroller, the agency was able to resolve over $270 million in Treasury mismatches through reconciliations of over $12 billion in disbursement and collection activities. In addition, DISA’s efforts to improve processes and controls over its accounts receivable and payable accounts have resulted in improvements in its ability to (1) substantiate the validity of DISA’s customer billings and collect funds due to DISA, and (2) identify areas where funds could be deobligated and put to better use. Moreover, DISA management has gained increased assurance over its reported cash availability balance, thereby improving mission-critical decision making. Since its inception, the FIAR Plan has followed an incremental approach to structure its process for examining operations, diagnosing problems, planning corrective actions, and preparing for audit. Moreover, the FIAR Plan has continued to evolve and mature as a strategic plan. Initially, DOD components independently established their own financial management improvement priorities and methodologies and were responsible for implementing the corrective actions they determined were needed to address weaknesses and achieve financial statement auditability. However, as we reported in May 2009, it was difficult to link corrective actions or accomplishments reported by the FIAR Plan to FIAR goals and measure progress. In addition, we reported that as the department’s strategic plan and management tool for guiding and reporting on incremental progress toward achieving these goals, the FIAR Plan could be improved in several areas.
Specifically, we found the following: Clear guidance was needed in developing and implementing improvement efforts. A baseline of the department’s and/or key component’s current financial management weaknesses and capabilities was needed to effectively measure and report on incremental progress. Linkage between FIAR Plan goals and corrective actions and reported accomplishments was needed. Clear results-oriented metrics for measuring and reporting incremental progress were needed. Accountability should be clearly defined and resources budgeted and consumed should be identified. We made several recommendations in our May 2009 report to increase the FIAR Plan’s effectiveness as a strategic and management tool for guiding, monitoring, and reporting on financial management improvement efforts and increasing the likelihood of meeting the department’s goal of financial statement auditability, which were incorporated into the NDAA for fiscal year 2010. In its May 2010 FIAR Status Report and Guidance, the department identified steps taken to address our recommendations to strengthen its FIAR Plan strategy and chances of sustained financial management improvements and audit readiness. For example, DOD has established shared priorities and methodology, including guidance to develop component financial improvement plans, and an improved governance framework. In August 2009, DOD’s Comptroller directed that the department focus on improving processes and controls supporting information that is most often used to manage the department, while continuing to work toward achieving financial improvements aimed at achieving unqualified audit opinions on the department’s financial statements. As a result, in 2010 DOD revised its FIAR strategy, governance framework, and methodology to support these objectives and focus financial management improvement efforts primarily on achieving two interim departmentwide priorities— first, strengthening processes, controls, and systems that produce budgetary information and support the department’s Statements of Budgetary Resources; and second, improving the accuracy and reliability of management information pertaining to the department’s mission-critical assets, including military equipment, real property, and general equipment, and validating improvement through existence and completeness testing. In addition, the DOD Comptroller directed DOD components to use a standard financial improvement plan template to support and emphasize achievement of the two FIAR priorities. The department intends to progress toward achieving financial statement auditability in five waves (or phases) of concerted improvement activities within groups of end-to-end business processes. According to DOD’s May 2010, FIAR Plan Status Report, the lack of resources dedicated to financial improvement activities at DOD components has been a serious impediment to progress, except in the Navy and the Defense Logistics Agency (DLA). As a result, the components are at different levels of completing the waves. For example, the Air Force has already received a positive validation by the DOD IG on the Air Force Appropriations Received account (wave 1) and the Navy is currently undergoing a similar review of its account. Army and DLA, are expected to complete wave 1 and be ready for validation by the end of fiscal year 2010. However, DOD is only beginning wave 1 work at other defense agencies to ensure that transactions affecting their appropriations received accounts are properly recorded and reported. 
The first three waves focus on achieving the DOD Comptroller’s interim budgetary and asset accountability priorities, while the remaining two waves are intended to complete actions needed to achieve full financial statement auditability. However, the department has not yet fully defined its strategy for completing waves 4 and 5. The focus and scope of each wave include the following: Wave 1—Appropriations Received Audit focuses efforts on assessing and strengthening, as necessary, internal controls and business systems involved in the appropriations receipt and distribution process, including funding appropriated by Congress for the current fiscal year and related apportionment/reapportionment activity by OMB, as well as allotment and sub-allotment activity within the department. Wave 2—Statement of Budgetary Resources (SBR) Audit focuses efforts on assessing and strengthening, as necessary, the internal controls, processes, and business systems supporting the budgetary-related data (e.g., status of funds received, obligated, and expended) used for management decision making and reporting, including the SBR. In addition to fund balance with Treasury reporting and reconciliation, significant end-to-end business processes in this wave include procure-to-pay, hire-to-retire, order-to-cash, and budget-to-report. Wave 3—Mission-Critical Assets Existence and Completeness Audit focuses efforts on assessing and strengthening, as necessary, internal controls and business systems involved in ensuring that all assets (including military equipment, general equipment, real property, inventory, and operating materials and supplies) recorded in the department’s accountable property systems of record exist; all of the reporting entities’ assets are recorded in those systems of record; reporting entities have the right (ownership) to report these assets; and the assets are consistently categorized, summarized, and reported. Wave 4—Full Audit Except for Legacy Asset Valuation focuses efforts on assessing and strengthening, as necessary, internal controls, processes, and business systems involved in the proprietary side of budgetary transactions covered by the Statement of Budgetary Resources effort of wave 2, including accounts receivable, revenue, accounts payable, expenses, environmental liabilities, and other liabilities. This wave also includes efforts to support valuation and reporting of new asset acquisitions. Wave 5—Full Financial Statement Audit focuses efforts on assessing and strengthening, as necessary, processes, internal controls, and business systems involved in supporting the valuations reported for legacy assets once efforts to ensure control over the valuation of new assets acquired and the existence and completeness of all mission assets are deemed effective on a go-forward basis. Given the lack of documentation to support the values of the department’s legacy assets, federal accounting standards allow for the use of alternative methods to provide reasonable estimates for the cost of these assets. According to DOD, critical to the success of each wave and the department’s efforts to ultimately achieve full financial statement auditability will be departmentwide implementation of the FIAR methodology as outlined in DOD’s FIAR Guidance document. Issued in May 2010, the FIAR Guidance document, which DOD intends to update annually, defines in a single document the department’s FIAR goals, strategy, and methodology (formerly referred to as business rules) for becoming audit ready.
The FIAR methodology prescribes the process that components should follow in executing efforts to assess processes, controls, and systems; identify and correct weaknesses; assess, validate, and sustain corrective actions; and achieve full auditability. Key changes introduced in 2010 to the FIAR methodology include an emphasis on internal controls and supporting documentation. Utilization of standard financial improvement plans and methodology should also aid both DOD and its components in assessing current financial management capabilities in order to establish baselines against which to measure, sustain, and report progress. More specifically, the standard financial improvement plan and FIAR Guidance outline key control objectives and capabilities that components must successfully achieve to complete each wave (or phase) of the FIAR strategy for achieving audit readiness. For example, to successfully complete wave 2 (SBR audit), one of the capabilities that each component must be able to demonstrate is that it is capable of performing Fund Balance with Treasury reconciliations at the transaction level. Based on what we have seen of the revised FIAR Plan strategy and methodology to date, we believe the current strategy reflects a reasonable approach. We are hopeful that a consistent focus provided through the shared priorities of the FIAR strategy will increase the department’s ability to show incremental progress toward achieving auditability in the near term, if the strategy is implemented properly. In the long term, while improved budgetary and asset accountability information is an important step in demonstrating incremental progress, it will not be sufficient to achieve full financial statement auditability. Additional work will be required to ensure that transactions are recorded and reported in accordance with generally accepted accounting principles. At this time, it is not possible to predict when DOD’s efforts to achieve audit readiness will be successful. The department continues to face significant challenges in providing and sustaining the leadership and oversight needed to ensure that improvement efforts, including ERP implementation efforts, result in the sustained improvements in process, control, and system capabilities necessary to transform financial management operations. We will continue to monitor DOD’s progress in addressing its financial management weaknesses and transforming its business operations. As part of this effort, we plan to assess implementation of DOD’s FIAR strategy and guidance in our review of the military departments’ financial improvement plans. GAO supports DOD’s current approach of prioritizing efforts, focusing first on the information that management views as most important in supporting its operations, to demonstrate incremental progress toward addressing weaknesses and achieving audit readiness. There are advantages to this approach, including building commitment and support throughout the department and the potential to obtain preliminary assessments on the effectiveness of current processes and controls and identify potential issues that may adversely affect subsequent waves. For example, testing expenditures in wave 2 will also touch on property accountability issues, as DOD makes significant expenditures for property.
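To make the idea of transaction-level testing concrete, the following is a minimal sketch, in Python, of the kind of matching a Fund Balance with Treasury reconciliation involves: component-recorded disbursements are compared with Treasury records document by document, differences are flagged for research, and property-coded expenditures are set aside for follow-up in the asset-focused waves. The document numbers, amounts, and object-class codes are hypothetical, and the sketch does not reflect DOD's actual systems or the FIAR Guidance procedures.

```python
# Illustrative sketch only: hypothetical document numbers, amounts, and object
# class codes. Not DOD's actual data, systems, or FIAR procedures.

# Component-recorded disbursements for a reporting period (hypothetical).
component_txns = {
    "DOC-0001": {"amount": 125000.00, "object_class": "31.0"},  # equipment purchase
    "DOC-0002": {"amount": 48500.00, "object_class": "25.2"},   # contract services
    "DOC-0003": {"amount": 9900.00, "object_class": "31.0"},    # equipment purchase
}

# Treasury-reported disbursements for the same period (hypothetical).
treasury_txns = {
    "DOC-0001": 125000.00,
    "DOC-0002": 48250.00,   # amount differs from the component's record
    "DOC-0004": 3200.00,    # reported by Treasury but not by the component
}

# Assumed object classes that indicate property-related spending.
PROPERTY_OBJECT_CLASSES = {"31.0", "32.0"}


def reconcile(component, treasury, tolerance=0.005):
    """Match records document by document and report the differences."""
    amount_mismatches = []
    only_in_component = []
    for doc, rec in component.items():
        if doc not in treasury:
            only_in_component.append(doc)
        elif abs(rec["amount"] - treasury[doc]) > tolerance:
            amount_mismatches.append((doc, rec["amount"], treasury[doc]))
    only_in_treasury = [doc for doc in treasury if doc not in component]
    return amount_mismatches, only_in_component, only_in_treasury


def property_followups(component):
    """Flag expenditures coded to property object classes for asset-wave follow-up."""
    return [doc for doc, rec in component.items()
            if rec["object_class"] in PROPERTY_OBJECT_CLASSES]


if __name__ == "__main__":
    mismatches, comp_only, treas_only = reconcile(component_txns, treasury_txns)
    print("Amount mismatches:", mismatches)
    print("Recorded by the component only:", comp_only)
    print("Reported by Treasury only:", treas_only)
    print("Property-related items to trace:", property_followups(component_txns))
```

In practice such matching would run against the accounting systems of record and millions of transactions, but the basic logic of pairing, comparing, and flagging differences is the same.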
Identifying and resolving potential issues related to expenditures for property in wave 2 will assist the department as it enters subsequent waves dealing with its ability to reliably and completely identify, aggregate, and account for the cost of the assets it acquires through various acquisition and construction programs. We also support efforts to first address weaknesses in the department’s ability to timely, reliably, and completely record the cost of assets as they are acquired over efforts to value legacy assets. Prior efforts to achieve auditability of DOD’s mission assets failed, in large part, because these efforts were focused primarily on deriving values for financial statement reporting and not on assessing and addressing the underlying weaknesses that impaired the department’s ability to reliably identify, aggregate, and account for current transactions affecting these assets. GAO is willing to work with the department to revisit the question of how DOD reports assets in its financial statements to address unique aspects of military assets not currently reflected in traditional financial reporting models. Developing sound plans and a methodology and getting leaders and organizations in place is only a start. Consistent with our previous reports regarding the department’s CMO positions, including the CMO, Deputy CMO and military department CMOs, and our May 2009 recommendations to improve DOD’s FIAR Plan as a strategic and management tool for addressing financial management weaknesses and achieving and sustaining audit readiness, DOD needs to define specific roles and responsibilities—including when and how the CMO and military department CMOs and other leaders are expected to become involved in problem resolution or efforts to ensure cross-functional area commitment and support to financial management improvement efforts; effectively execute its plans; gauge actual progress against goals; strengthen accountability; and make adjustments as needed. In response to our report, DOD expanded its FIAR governance framework to include the CMOs. While expansion of the FIAR governance framework to include the CMOs is also encouraging, the specific roles and responsibilities of these important leaders have not yet been fully defined. As acknowledged by DOD officials, sustained and active involvement of the CMOs and other senior leaders is critical in enabling a process by which DOD can more timely identify and address cross-functional issues and ensure that other business functions, such as acquisition and logistics, fully acknowledge and are held accountable for their roles and responsibilities in achieving the department’s financial management improvement goals and audit readiness. Sustained and active leadership and effective oversight and monitoring at both the department and component levels are critical to ensuring accountability for progress and targeting resources in a manner that results in sustained improvements in the reliability of data for use in supporting and reporting on operations. As part of GAO’s prior work pertaining to DOD’s key ERP implementation efforts and the FIAR Plan, we have seen a lack of focus on developing and using interim performance metrics to measure progress and the impact of actions taken. For example, our review of DOD’s ERP implementation efforts, which we plan to report on in October 2010, found that DOD has not yet defined success for ERP implementation in the context of business operations and in a way that is measurable. 
In May 2009, we reported that the FIAR Plan does not use clear results-oriented metrics to measure and report corrective actions and accomplishments in a manner that clearly demonstrates how they contribute individually or collectively to addressing a defined weakness, providing a specific capability, or achieving a FIAR goal. To its credit, DOD has taken action to begin defining results-oriented FIAR metrics it intends to use to provide visibility of component-level progress in assessment and testing and remediation activities, including progress in identifying and addressing supporting documentation issues. We have not yet had an opportunity to assess implementation of these metrics or their usefulness in monitoring and redirecting actions. In the past, DOD has had many initiatives and plans that failed due to a lack of sustained leadership focus and effective oversight and monitoring. Without sustained leadership focus and effective oversight and monitoring, DOD’s current efforts to achieve audit readiness by a defined date are at risk of following the path of the department’s prior efforts and falling short of obtaining sustained, substantial improvements in DOD’s financial management operations and capabilities or achieving validation through independent audits. DOD officials have said that successful implementation of ERPs is key to resolving the long-standing weaknesses in the department’s business operations in areas such as business transformation, financial management, and supply chain management, and improving the department’s capability to provide DOD management and Congress with accurate and reliable information on the results of DOD’s operations. For example, in 2010, we reported that the Army Budget Office lacked an adequate funds control process to provide it with ongoing assurance that obligations and expenditures do not exceed funds available in the Military Personnel, Army (MPA) appropriation. These weaknesses resulted in a shortfall of $200 million in 2008. Army Budget Office personnel explained that they rely on estimated obligations, rather than actual data from program managers, to record the initial obligation or adjust the estimated obligation due to inadequate financial management systems. DOD has identified 10 ERPs, 1 of which has been fully implemented, as essential to its efforts to transform its business operations. Appendix II contains a description of each of the remaining 9 ERPs currently being implemented within the department. According to DOD, as of December 2009, it had invested approximately $5.8 billion to develop and implement these ERPs and will invest additional billions before the remaining 9 ERPs are fully implemented. The department has noted that the successful implementation of these 10 ERPs will replace over 500 legacy systems that reportedly cost hundreds of millions of dollars to operate annually. However, our prior reviews of several ERPs have found that the department has not effectively employed acquisition management controls or delivered the promised capabilities on time and within budget. More specifically, significant leadership and oversight challenges, as illustrated by the Logistics Modernization Program (LMP) example discussed in appendix I, have hindered the department’s efforts to implement these systems on schedule, within cost, and with the intended capabilities.
Based upon the information provided by the program management offices (PMOs), six of the ERPs have experienced schedule slippages, as shown in table 1, when the estimated date that each program was originally scheduled to achieve full deployment is compared to the full deployment date as of December 2009. For the remaining three ERPs, the full deployment date has either remained unchanged or has not been established. The GFEBS PMO noted that the acquisition program baseline approved in November 2008 established a full deployment date in fiscal year 2011 and that date remains unchanged. Additionally, according to the GCSS-Army PMO, a full deployment date has not been established for this effort. The PMO noted that a full deployment date will not be established for the program until a full deployment decision has been approved by the department. A specific timeframe has not been established for when the decision will be made. Further, in the case of DAI, the original full deployment date was scheduled for fiscal year 2012, but the PMO is in the process of reevaluating the date and a new date has not yet been established. Prior work by GAO and the U.S. Army Test and Evaluation Command found that delays in implementing the ERPs have occurred, in part, due to inadequate requirements management and system testing, and data quality issues. These delays have contributed not only to increased implementation costs in at least five of the nine ERPs, as shown in table 2, but have also resulted in DOD having to fund the operation and maintenance of the legacy systems longer than anticipated, thereby reducing funds that could be used for other DOD priorities. Effective and sustained leadership and oversight of the department’s ERP implementations is needed to ensure that these important initiatives are implemented on schedule, within budget, and result in the integrated capabilities needed to transform the department’s financial management and related business operations. In closing, I am encouraged by continuing congressional oversight of DOD’s financial management improvement efforts and the commitment DOD’s leaders have expressed to improving the department’s financial management and achieving financial statement audit readiness. For instance, we have seen positive short-term progress on the part of DOD in moving forward. In its May 2010 FIAR status report, DOD reported actions it had taken in response to the 2010 NDAA and our prior recommendations to enhance the effectiveness of the FIAR Plan as a strategic plan and management tool for guiding, monitoring, and reporting on the department’s efforts to resolve its financial management weaknesses and achieve audit readiness. The department has expanded the FIAR governance body to include the Chief Management Officer, issued guidance to aid DOD components in their efforts to address their financial management weaknesses and achieve audit readiness, and standardized component financial improvement plans to facilitate oversight and monitoring, as well as sharing lessons learned. In addition, DOD has revised its FIAR strategy to focus its financial management improvement efforts on departmentwide priorities, first on budgetary information and preparing the department’s Statements of Budgetary Resources for audit and second on accountability over the department’s mission-critical assets as a way of improving information used by DOD leaders to manage operations and to more effectively demonstrate incremental progress toward achieving audit readiness.
Whether promising signs, such as shared priorities and approaches, develop into sustained progress will ultimately depend on DOD leadership and oversight to help achieve successful implementation. The expanded FIAR governance framework, including the CMOs, is a start; but their specific roles and responsibilities toward the department’s financial management improvement efforts still need to be defined. Importantly, sustained and effective leadership, oversight, and accountability at the department and component levels will be needed in order to help ensure that DOD’s current efforts to achieve auditability by a defined date don’t follow the path of the department’s prior efforts and fall short of obtaining sustained substantial improvement. The revised FIAR strategy is still in the early stages of implementation, and DOD has a long way and many long-standing challenges to overcome, particularly in regard to active and sustained leadership and oversight, before its military components and the department are fully auditable, and financial management is no longer considered high risk. However, the department is heading in the right direction. Some of the most difficult challenges ahead lie in effectively implementing the department’s strategy, including successful implementation of ERP systems and integration of financial management improvement efforts with other DOD initiatives. We will be issuing a report on DOD’s business system modernization efforts in October 2010 that discusses in greater detail the cost, schedule, and other issues that have hindered the success of important efforts. GAO will continue to monitor progress of the department’s financial management improvement efforts and provide feedback on the status of DOD’s financial management improvement efforts. We currently have work in progress to assess implementation of the department’s FIAR strategy through ongoing or recently initiated engagements related to (1) the U.S. Marine Corps’ (USMC) efforts to achieve an audit opinion on its Statement of Budgetary Resources, which regardless of its success should provide lessons learned that can be shared with other components, (2) the military departments’ implementation of the FIAR strategy and guidance, and (3) the department’s efforts to develop and implement ERPs. In addition, we will continue our oversight and monitoring of DOD’s financial statement audits, including the Army Corps of Engineers and DOD consolidated financial statements. Mr. Chairman and Ranking Member McCain, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For further information regarding this testimony, please contact Asif A. Khan, (202) 512-9095 or khana@gao.gov. Key contributors to this testimony include J. Christopher Martin, Senior-Level Technologist; Evelyn Logue, Assistant Director; Darby Smith, Assistant Director; Paul Foderaro, Assistant Director; Gayle Fischer, Assistant Director; F. Abe Dymond, Assistant General Counsel; Beatrice Alff; Maxine Hattery; Jason Kirwan; Crystal Lazcano; and Omar Torres. Despite years of improvement efforts since 2002, DOD has annually reported to Congress that the department is unable to provide reasonable assurance that the information reported in its financial statements is reliable due to long-standing weaknesses in its financial management and related business processes, controls, and systems. 
Importantly, these weaknesses not only affect the reliability of the department’s financial reports but also adversely affect the department’s ability to assess resource requirements; control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent fraud, waste, abuse, and mismanagement; and address pressing management issues, as the following examples illustrate. The Army Budget Office lacks an adequate funds control process to provide it with ongoing assurance that obligations and expenditures do not exceed funds available in the Military Personnel, Army (MPA) appropriation. In June 2010, we reviewed Army obligation and expenditure reports pertaining to Army’s fiscal year 2008 MPA appropriation and confirmed that the Army had violated the Antideficiency Act, as evidenced by the Army’s need to transfer $200 million from the Army working capital fund to cover the shortfall. This shortfall stemmed, in part, from a lack of reliable financial information on enlistment and reenlistment contracts, which provide specified bonuses to service members. Army Budget personnel explained that they rely on estimated obligations, rather than actual data from program managers, to record the initial obligation or adjust the estimated obligation due to inadequate financial management systems. Without adequate processes, controls, and systems to establish and maintain effective funds control, the Army’s ability to prevent, identify, and report potential Antideficiency Act violations is impaired. While DOD has invested over a trillion dollars to acquire weapon systems, also referred to as military equipment, the department continues to lack the processes and system capabilities to reliably identify, aggregate, and report the full cost of its investment in these assets. We reported this as an issue to the Air Force over 20 years ago. In July 2010, we reported that although DOD and the military departments have efforts underway to begin addressing these financial management weaknesses, DOD officials acknowledged that additional actions were needed that will require the support of other business areas beyond the financial community before they will be fully addressed. Without timely, reliable, and useful financial information on the full cost associated with acquiring assets, both DOD management and Congress lack key information needed for use in effective decision making, such as determining how to allocate resources to programs or evaluating program performance to help strengthen oversight and accountability. The department’s ability to identify, aggregate, and use financial management information to develop plans for managing and controlling operating and support costs for major weapons systems is limited. DOD spends billions of dollars each year to sustain its weapon systems. These operating and support (O&S) costs can account for a significant portion of a weapon system’s total life-cycle costs and include costs for, among other things, repair parts, maintenance, and contract services. However, in July 2010, we reported that the department lacked key information needed to effectively manage and reduce O&S costs for most of the weapon systems we reviewed—including life-cycle O&S cost estimates and consistent and complete historical data on actual O&S costs.
Specifically, we found that the military departments lacked (1) life-cycle O&S cost estimates developed at the production milestone for five of the seven aviation systems we reviewed and (2) complete data on actual O&S costs. Without historical life-cycle O&S cost estimates and complete data on actual O&S costs, DOD officials lack important data for analyzing the rate of O&S cost growth for major weapon systems, identifying cost drivers, and developing plans for managing and controlling these costs. The department and military services continue to have difficulty effectively deploying business systems on time, within budget, and with the functionality intended to significantly transform business operations. For example, in April 2010, we reported that the management processes the Army established prior to the second deployment of its Logistics Modernization Program (LMP) were not effective in managing and overseeing the second deployment of this system. Specifically, we found that due to data quality issues, the Army was unable to ensure that the data used by LMP were of sufficient quality to enable the Corpus Christi and Letterkenny Army depots to perform their day-to-day missions after LMP became operational at these locations. For example, LMP could not automatically identify the materials needed to support repairs and ensure that parts would be available in time to carry out the repairs. Labor rates were also missing for some stages of repair, thereby precluding LMP from computing labor costs for the repair projects. As a result of these data issues, manual work-around processes had to be developed and used in order for the depots to accomplish their repair missions. Furthermore, the performance measures the Army used to assess implementation failed to detect that manual work-arounds rather than LMP were used to support repair missions immediately following LMP’s implementation at the depots. Without adequate performance measures to evaluate how well these systems are accomplishing their desired goals, DOD decision makers, including program managers, do not have all the information they need to evaluate their systems investments to determine the extent to which individual programs are helping DOD achieve business transformation, including financial management, and whether additional remediation is needed. In addition to the DOD IG reports on internal controls and compliance with laws and regulations included in DOD and military department annual financial reports, the DOD IG has issued other reports highlighting a variety of internal control weaknesses in the department’s financial management that affect DOD operations, as the following examples illustrate. In January 2010, the DOD IG evaluated the internal controls over the USMC transactions processed through the Deployable Disbursing System (DDS) and determined that USMC did not maintain adequate internal controls to ensure the reliability of the data processed. Specifically, the DOD IG found that USMC disbursing personnel had not complied with the statute when authorizing vouchers for payment or segregated certifying duties from disbursing duties when making payments. Further, the DOD IG found that USMC personnel had circumvented internal controls restricting access to DDS information. As a result, the DOD IG concluded that USMC was at risk of incurring unauthorized, duplicate, and improper payments.
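Automated screens of payment data are one of the compensating checks that reviews like this typically recommend. The following is a minimal sketch, with hypothetical vouchers, payees, and amounts, of the kind of duplicate-payment and threshold screens such controls rely on; it is illustrative only and does not depict DDS or any DOD system.

```python
# Illustrative sketch only: hypothetical payment records, not data from the
# Deployable Disbursing System or any DOD system.
from collections import defaultdict

payments = [
    {"voucher": "V-1001", "payee": "VENDOR-A", "invoice": "INV-77", "amount": 5400.00},
    {"voucher": "V-1002", "payee": "VENDOR-B", "invoice": "INV-12", "amount": 980.50},
    {"voucher": "V-1003", "payee": "VENDOR-A", "invoice": "INV-77", "amount": 5400.00},  # potential duplicate
    {"voucher": "V-1004", "payee": "VENDOR-C", "invoice": "INV-03", "amount": 125000.00},
]

def flag_potential_duplicates(records):
    """Group payments by payee, invoice, and amount; flag groups with more than one voucher."""
    groups = defaultdict(list)
    for rec in records:
        key = (rec["payee"], rec["invoice"], rec["amount"])
        groups[key].append(rec["voucher"])
    return {key: vouchers for key, vouchers in groups.items() if len(vouchers) > 1}

def flag_over_threshold(records, threshold=100000.00):
    """Flag payments above an assumed review threshold for additional certification."""
    return [rec["voucher"] for rec in records if rec["amount"] > threshold]

if __name__ == "__main__":
    print("Potential duplicate payments:", flag_potential_duplicates(payments))
    print("Payments needing additional review:", flag_over_threshold(payments))
```

Screens like these supplement, rather than replace, the segregation-of-duties and access controls that the DOD IG found lacking.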
In June 2009, the DOD IG reported that the Army did not have adequate internal controls over accountability for approximately $169.6 million of government-furnished property at two Army locations reviewed. Specifically, the DOD IG found that Army personnel had not ensured the proper recording of transfers of property accountability to contractors, physical inventories and reconciliation, or the identification of government property at these locations. As a result, the DOD IG concluded that the Army’s property accountability databases at these two locations were misstated and these two Army locations were at risk of unauthorized use, destruction or loss of government property. The department stated that implementation of the following nine ERPs are critical to transforming the department’s business operations and addressing some of its long-standing weaknesses. A brief description of each ERP is presented below. The General Fund Enterprise Business System (GFEBS) is intended to support the Army’s standardized financial management and accounting practices for the Army’s general fund, with the exception of that related to the Army Corps of Engineers which will continue to use its existing financial system, the Corps of Engineers Financial Management System. GFEBS will allow the Army to share financial, asset and accounting data across the active Army, the Army National Guard, and the Army Reserve. The Army estimates that when fully implemented, GFEBS will be used to control and account for about $140 billion in spending. The Global Combat Support System-Army (GCSS-Army) is expected to integrate multiple logistics functions by replacing numerous legacy systems and interfaces. The system will provide tactical units with a common authoritative source for financial and related non-financial data, such as information related to maintenance and transportation of equipment. The system is also intended to provide asset visibility for accountable items. GCSS-Army will manage over $49 billion in annual spending by the active Army, National Guard, and the Army Reserve. The Logistics Modernization Program (LMP) is intended to provide order fulfillment, demand and supply planning, procurement, asset management, material maintenance, and financial management capabilities for the Army’s working capital fund. The Army has estimated that LMP will be populated with 6 million Army-managed inventory items valued at about $40 billion when it is fully implemented. The Navy Enterprise Resource Planning System (Navy ERP) is intended to standardize the acquisition, financial, program management, maintenance, plant and wholesale supply, and workforce management capabilities at six Navy commands. Once it is fully deployed, the Navy estimates that the system will control and account for approximately $71 billion (50 percent), of the Navy’s estimated appropriated funds—after excluding the appropriated funds for the Marine Corps and military personnel and pay. The Global Combat Support System–Marine Corps (GCSS-MC) is intended to provide the deployed warfighter enhanced capabilities in the areas of warehousing, distribution, logistical planning, depot maintenance, and improved asset visibility. According to the PMO, once the system is fully implemented, it will control and account for approximately $1.2 billion of inventory. 
The Defense Enterprise Accounting and Management System (DEAMS) is intended to provide the Air Force the entire spectrum of financial management capabilities, including collections, commitments and obligations, cost accounting, general ledger, funds control, receipts and acceptance, accounts payable and disbursement, billing, and financial reporting for the general fund. According to Air Force officials, when DEAMS is fully operational, it is expected to maintain control and accountability for about $160 billion. The Expeditionary Combat Support System (ECSS) is intended to provide the Air Force a single, integrated logistics system—including transportation, supply, maintenance and repair, engineering and acquisition—for both the Air Force’s general and working capital funds. Additionally, ECSS is intended to provide the financial management and accounting functions for the Air Force’s working capital fund operations. When fully implemented, ECSS is expected to control and account for about $36 billion of inventory. The Service Specific Integrated Personnel and Pay Systems are intended to provide the military departments an integrated personnel and pay system. Defense Agencies Initiative (DAI) is intended to modernize the defense agencies’ financial management processes by streamlining financial management capabilities and transforming the budget, finance, and accounting operations. When DAI is fully implemented, it is expected to have the capability to control and account for all appropriated, working capital and revolving funds at the defense agencies implementing the system. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As one of the largest and most complex organizations in the world, the Department of Defense (DOD) faces many challenges in resolving its pervasive and long-standing financial management and related business operations and systems problems. DOD is required by various statutes to (1) improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the financial information needs of agency management and oversight bodies, and (2) produce audited financial statements. DOD has initiated numerous efforts over the years to improve the department's financial management operations and ultimately achieve unqualified (clean) opinions on the reliability of reported financial information. The Subcommittee has asked GAO to provide its perspective on DOD's current efforts to address its financial management weaknesses and achieve auditability, including the status of its Enterprise Resource Planning (ERP) system implementations. GAO's testimony is based on its prior work related to DOD's financial improvement and audit readiness strategy and related activities, including its ERP implementation efforts. DOD has initiated numerous efforts over the years to address its financial management weaknesses and achieve audit readiness. In 2005, DOD issued its Financial Improvement and Audit Readiness (FIAR) Plan to define the department's strategy and methodology for improving financial management operations and controls, and reporting its progress. In 2009, the DOD Comptroller directed that the department's FIAR efforts be focused on improving processes and controls supporting information most often used to manage operations, while continuing to work toward achieving financial statement auditability. To support these objectives, DOD established two priority focus areas: budget information and information pertaining to mission-critical assets. In 2010, DOD revised its FIAR strategy, governance framework, and methodology to support the DOD Comptroller's direction and priorities and to comply with fiscal year 2010 defense authorizing legislation, which incorporated GAO recommendations intended to improve the FIAR Plan as a strategic plan. Based on what GAO has seen to date, DOD's revised FIAR Plan strategy and methodology reflect a reasonable approach. Moreover, GAO supports prioritizing focus areas for improvement and is hopeful that a consistent focus provided through shared FIAR priorities will increase incremental progress toward improved financial management operations. However, developing sound plans and methodology, and getting leaders and organizations in place is only a start. DOD needs to define specific roles and responsibilities for the Chief Management Officers (CMOs)--including when and how the CMOs are expected to become involved in problem resolution and in ensuring cross-functional area commitment to financial improvement activities. A key element of the FIAR strategy is successful implementation of the ERPs. According to DOD, as of December 2009, it had invested approximately $5.8 billion to develop and implement these ERPs and will invest additional billions before these efforts are complete. However, as GAO has previously reported, inadequate requirements management and systems testing, ineffective oversight over business system investments, and other challenges have hindered the department's efforts to implement these systems on schedule and within cost.
Whether DOD's FIAR strategy will ultimately lead to improved financial management capabilities and audit readiness depends on DOD leadership and oversight to help achieve successful implementation. Sustained effort and commitment at the department and component levels will be needed to address weaknesses and produce financial management information that is timely, reliable, and useful for managers throughout DOD. GAO will continue to monitor DOD's progress and provide feedback on the status of DOD's financial management improvement efforts.
You are an expert at summarizing long articles. Proceed to summarize the following text:

TRICARE beneficiaries used the program’s pharmacy benefit to fill almost 134 million outpatient prescriptions in fiscal year 2012. Through its acquisition process, DOD contracts with a pharmacy benefit manager—currently Express Scripts—to provide access to a retail pharmacy network and operate a mail-order pharmacy for beneficiaries, and to provide administrative services. Under TRICARE, beneficiaries have three primary health plan options in which they may participate: (1) a managed care option called TRICARE Prime, (2) a preferred-provider option called TRICARE Extra, and (3) a fee-for-service option called TRICARE Standard. TRICARE beneficiaries may obtain medical care through a direct-care system of military treatment facilities or a purchased-care system consisting of network and non-network private sector primary and specialty care providers, and hospitals. In addition, TRICARE’s pharmacy benefit—offered under all TRICARE health plan options—provides beneficiaries with three options for obtaining prescription drugs: from military treatment facility pharmacies, from network and non-network retail pharmacies, and through the TRICARE mail-order pharmacy. TRICARE’s pharmacy benefit has a three-tier copayment structure based on whether a drug is included in DOD’s formulary and the type of pharmacy where the prescription is filled. (See table 1.) DOD’s formulary includes a list of drugs that all military treatment facilities must provide, and a list of drugs that military treatment facilities may elect to provide on the basis of the types of services offered at that facility (e.g., cancer drugs at facilities that provide cancer treatment). DOD designates other drugs as “non-formulary” on the basis of its evaluation of their cost and clinical effectiveness. Non-formulary drugs are available to beneficiaries at a higher cost, unless the provider can establish medical necessity. See Pub. L. No. 110-181, § 703, 122 Stat. 3, 188 (codified at 10 U.S.C. § 1074g(f)). This act provides that with respect to any prescriptions filled on or after January 28, 2008, the TRICARE retail pharmacy program is to be treated as an element of DOD for purposes of procurement of drugs by federal agencies under 38 U.S.C. § 8126 to ensure that drugs paid for by DOD that are dispensed to TRICARE beneficiaries at retail pharmacies are subject to federal pricing arrangements. As a result, manufacturers are required to refund to DOD the difference between the federal pricing arrangements and the retail price paid for prescriptions filled dating back to the NDAA’s enactment on January 28, 2008. As of July 31, 2013, according to DOD, its total estimated savings from fiscal year 2009 through fiscal year 2013 were about $6.53 billion as a result of these refunds. DOD’s TRICARE Management Activity is responsible for overseeing the TRICARE program, including the pharmacy benefit. Within this office, the Pharmaceutical Operations Directorate (hereafter referred to as the program office) is responsible for managing the pharmacy benefit (including the contract to provide pharmacy services), and the Acquisition Management and Support Directorate (hereafter referred to as the contracting office) is responsible for managing all acquisitions for the TRICARE Management Activity. The two offices together manage the acquisition process for the pharmacy services contract. (See fig. 2.)
The program office and the contracting office provide the clinical expertise and acquisition knowledge, respectively, for the acquisition planning, evaluation of proposals, and award of the pharmacy services contract. The acquisition process for DOD’s pharmacy services contract includes three main phases: (1) acquisition planning, (2) RFP, and (3) award. Acquisition planning. In the acquisition planning phase, the program office, led by the program manager, is primarily responsible for defining TRICARE’s contract requirements—the work to be performed by the contractor—and developing a plan to meet those requirements. The program office also receives guidance and assistance from the contracting office in the development and preparation of key acquisition documents and in the market research process. The market research process can involve the development and use of several information-gathering tools, including requests for information (RFI), which are publicly released documents that allow the government to obtain feedback from industry on various acquisition elements such as the terms and conditions of the contract. RFIs are also a means by which the government can identify potential offerors and determine whether the industry can meet its needs. In addition, we have previously reported that sound acquisition planning includes an assessment of lessons learned to identify improvements. Towards the end of this phase, officials in the program and contracting offices work together to revise and refine key acquisition planning documents. RFP. In the RFP phase, the contracting officer—the official in the contracting office who has the authority to enter into, administer, modify, and terminate contracts—issues the RFP and receives the proposals submitted by prospective offerors. RFPs include a description of the contract requirements, the anticipated terms and conditions that will be contained in the contract, the required information that the prospective offerors must include in their proposal, and the factors that will be used to evaluate proposals. Award. In the award phase, the program and contracting offices are responsible for evaluating proposals and awarding a contract to the offeror representing the best value to the government based on a combination of technical and cost factors. To monitor the contractor’s performance under the contract after award, the contracting officer officially designates a program office official as the contracting officer’s representative (COR), who acts as the liaison between the contracting officer and the contractor and is responsible for the day-to-day monitoring of contractor activities to ensure that the services are delivered in accordance with the contract’s performance standards. The draft monitoring plan for the upcoming pharmacy services contract includes 30 standards—related to timeliness of claims processing, retail network access, and beneficiary satisfaction, among other things—against which the contractor’s performance will be measured. DOD has department-wide acquisition training and experience requirements for all officials who award and administer DOD contracts, including the pharmacy services contract, as required by law. Training is primarily provided through the Defense Acquisition University, and is designed to provide a foundation of acquisition knowledge, but is not targeted to specific contracts or contract types.
In addition, all CORs must meet training and experience requirements specified in DOD’s Standard for Certification of Contracting Officer’s Representatives (COR) for Service Acquisitions issued in March 2010. See appendix I for more information on the certification standards for and experience of officials who award and administer the pharmacy services contract. In September 2010, DOD issued guidance to help improve defense acquisition through its Better Buying Power Initiative. DOD’s Better Buying Power Initiative encompasses a set of acquisition principles designed to achieve greater efficiencies through affordability, cost control, elimination of unproductive processes and bureaucracy, and promotion of competition; it provides guidance to acquisition officials on how to implement these principles. The principles are also designed to provide incentives to DOD contractors for productivity and innovation in industry and government. DOD used market research to align the requirements for the upcoming pharmacy services contract with industry best practices and promote competition. DOD also identified changes to the requirements for the upcoming and current contracts in response to changes in legislation, efforts to improve service delivery, and contractor performance. DOD solicited information from industry during its acquisition planning for the upcoming pharmacy services contract through the required market research process, including issuing RFIs and a draft RFP for industry comment, to identify changes to requirements for its pharmacy services contract. Specifically, DOD used market research to align the requirements for the upcoming contract with industry best practices and promote competition. Align contract requirements with industry best practices. DOD issued five RFIs from 2010 through 2012 related to the upcoming contract. RFIs are one of several market research methods available to federal agencies. Although DOD is not required to use them, RFIs are considered a best practice for service acquisitions in the federal government. The RFIs provided DOD with the opportunity to assess the capability of potential offerors to provide services that DOD may incorporate in the upcoming pharmacy services contract. In many of the RFIs, DOD asked questions about specific market trends so that it could determine if changes were needed to the upcoming contract requirements to help align them with industry best practices. For example, DOD issued one RFI in November 2010 that asked about establishing a mechanism that would allow for centralized distribution of specialty pharmaceuticals and preserve DOD’s federal pricing arrangements. Specialty pharmaceuticals—high-cost injectable, infused, oral, or inhaled drugs that are generally more complex to distribute, administer, and monitor than traditional drugs—are becoming a growing cost driver for pharmacy services. According to DOD officials, the RFI responses received from industry generally reinforced their view that the RFP should define any specialty pharmacy owned or sub-contracted by the contractor as a DOD specialty mail-order outlet, which would subject it to the same federal pricing arrangements as the mail-order pharmacy. Promote competition. DOD has also used the RFI process to obtain information on promoting competition.
DOD recognized that a limited number of potential offerors may have the capability to handle the pharmacy services contract given the recent consolidation in the pharmacy benefit management market and the large size of the TRICARE beneficiary population. DOD contracting officials told us that, in part because of the department’s Better Buying Power Initiative to improve acquisition practices, they have a strong focus on maintaining a competitive contracting environment for the pharmacy services contract, thereby increasing the use of market research early in the acquisition planning process. For example, DOD’s December 2011 RFI asked for industry perspectives on the length of the contract period. DOD was interested in learning whether a longer contract period would promote competition. DOD officials told us that the responses they received confirmed that potential offerors would prefer a longer contract period because it would allow a non-incumbent more time to recover any capital investment made as part of implementing the contract. The RFP for the upcoming contract includes a contract period of 1 base year and 7 option years. DOD also used the RFI process to confirm that there were a sufficient number of potential offerors to ensure full and open competition for the pharmacy services contract. DOD officials told us that they found there were at least six potential offerors, which gave them confidence that there would be adequate competition. Since the start of the current pharmacy services contract in 2009, DOD has identified changes to the contract requirements in response to legislative changes to the pharmacy benefit, efforts to improve service delivery to beneficiaries, and improvements identified through monitoring of the current contractor’s performance. In each instance, DOD officials needed to determine whether to make the change for the upcoming contract, or whether to make the change via a modification to the current contract. According to DOD officials, there were over 300 modifications to the current pharmacy services contract; 23 of these were changes to the work to be done by the contractor. DOD officials told us that it is not possible to build a level of flexibility into the contract to accommodate or anticipate all potential changes (and thus avoid modifications to the contract), because doing so would make it difficult for offerors to determine pricing in their proposals. Legislative changes to the pharmacy benefit. Legislative changes have been one key driver of DOD’s revisions to its pharmacy services contract requirements. For example, one legislative change required DOD to implement the TRICARE Young Adult program, which resulted in DOD adding a requirement for the contractor to extend pharmacy services to eligible military dependents through the age of 26. This change was made as a modification under the current contract. Another legislative change that necessitated changes to the contract requirements was the increase in beneficiary copayments for drugs obtained through mail-order or retail pharmacies, enacted as part of the NDAA for fiscal year 2013, which DOD changed through a modification to the current contract. A third legislative change to the pharmacy benefit was the mail-order pilot for maintenance drugs for TRICARE for Life beneficiaries. DOD officials incorporated this change in the requirements for the upcoming pharmacy services contract, as outlined in the RFP. Efforts to improve service delivery. 
DOD has also updated contract requirements to improve service delivery to beneficiaries under the pharmacy services contract. DOD initiated a modification to the current contract to require the contractor to provide online coordination of benefits for beneficiaries with health care coverage from multiple insurers. Specifically, the contractor is required to ensure that pharmacy data systems include information on government and other health insurance coverage to facilitate coverage and payment determinations. According to DOD officials, this change is consistent with the updated national telecommunication standard from the National Council for Prescription Drug Programs, which provides a uniform format for electronic claims processing. According to DOD officials, this change to the contract requirements eliminates the need for beneficiaries to file paper claims when TRICARE is the secondary payer, simplifying the process for beneficiaries and reducing costs for DOD. Another modification to the current contract to improve service delivery was to require the contractor to provide vaccines through its network of retail pharmacies. According to DOD officials, this modification was made to allow beneficiaries to access vaccines through every possible venue, driven by the 2010 H1N1 influenza pandemic. Contractor performance. DOD officials told us that improvements identified through the monitoring of contractor performance have also led to changes in contract requirements. Through the CORs’ monitoring of the contractor’s performance against the standards specified in the contract, the CORs may determine that a particular standard is not helping to achieve the performance desired or is unnecessarily restrictive. For example, DOD officials told us that in the current contract, they had a three- tiered standard for paper claims processing (e.g., 95 percent of paper claims processed within 10 days, 99 percent within 20 days, and 100 percent within 30 days). Through monitoring the contractor’s performance, the CORs determined that there was a negligible difference between the middle and high tiers, and holding the contractor to this performance standard was not beneficial. The requirements for the upcoming contract as described in the RFP only include two tiers—95 percent of claims processed within 14 calendar days, and 100 percent within 28 calendar days. When making changes to contract requirements, DOD officials told us they try to ensure that the requirements are not overly prescriptive, but rather outcome-oriented and performance-based. For example, DOD officials told us that they allowed the pharmacy and managed care support contractors to innovate and apply industry best practices regarding coverage and coordination of home infusion services. According to DOD officials, the contract requirements regarding home infusion are focused on the desired outcome—providing coordination of care for beneficiaries needing these services with the physician as the key decision maker—and DOD officials facilitated meetings between the pharmacy contractor and managed care support contractors to determine the details of how to provide the services. This approach is consistent with DOD’s Better Buying Power principles that emphasize the importance of well-defined contract requirements and acquisition officials’ understanding of cost-performance trade-offs. 
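As a concrete illustration of the tiered claims-processing standards discussed above, the sketch below checks a set of hypothetical processing times against the RFP's two-tier standard (95 percent of paper claims processed within 14 calendar days, 100 percent within 28). The function and the sample data are illustrative assumptions, not DOD's monitoring tools.

```python
# Illustrative check of a two-tier claims-timeliness standard
# (95% of paper claims within 14 calendar days, 100% within 28 days).
# The sample data and function are hypothetical, not DOD's monitoring plan.

def meets_standard(processing_days, tiers=((14, 0.95), (28, 1.00))):
    """Return True if the share of claims processed within each day limit
    meets or exceeds the required fraction for every tier."""
    n = len(processing_days)
    if n == 0:
        return True
    for limit, required in tiers:
        share = sum(1 for d in processing_days if d <= limit) / n
        if share < required:
            return False
    return True

# 20 hypothetical claims: 19 processed within 14 days, 1 within 28 days
sample = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 13, 13, 14, 14, 14, 14, 20]
print(meets_standard(sample))  # True: 95% within 14 days, 100% within 28
```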
This approach also addresses a concern we have previously identified regarding overly prescriptive contract requirements in TRICARE contracts; specifically, in our previous work on the managed care support contracts, we reported that DOD’s prescriptive requirements limited innovation and competition among contractors. Since retail pharmacy services were carved out about 10 years ago, DOD has not conducted an assessment of the appropriateness of its current pharmacy services contract structure that includes an evaluation of the costs and benefits of alternative structures. Alternative structures can include a carve-in of all pharmacy services into the managed care support contracts, or a structure that carves in a component of pharmacy services, such as the mail-order pharmacy, while maintaining a carve-out structure for other components. DOD officials told us they believe that DOD’s current pharmacy services contract structure continues to be appropriate, as it affords more control over pharmacy data and allows for more detailed data analyses and increased transparency about costs. DOD’s continued use of a carve-out contract structure for pharmacy services is consistent with findings from research and perspectives we heard from industry group officials—that larger employers are more likely to carve out pharmacy services to better leverage the economies of scale and cost savings a stand-alone pharmacy benefit manager can achieve. These arrangements may also provide more detailed information on drug utilization that can be helpful in managing drug formularies and their associated costs. In its December 2007 report, the Task Force on the Future of Military Health Care recommended examining an alternative structure for the pharmacy services contract. In addition to other aspects of DOD’s health care system, the task force reviewed DOD’s pharmacy benefit program, recommending that DOD pilot a carve-in pharmacy contract structure within one of the TRICARE regions with a goal of achieving better financial and health outcomes as a result of having more integrated pharmacy and medical services. The managed care support contractors we spoke with expressed similar concerns. However, DOD did not agree with the task force’s recommendation. In its response, DOD assessed the benefits of the current structure and affirmed the department’s commitment to this structure. Potential cost savings. In its response to the task force report, DOD did not concur with the recommendation to pilot a carve-in pharmacy contract structure, in part because of the cost savings achieved through the carve-out. Specifically, DOD stated that the carve-out arrangement is compatible with accessing federal pricing arrangements and other discounts available for direct purchases. DOD stated in its response that, under a carve-in arrangement, even on a pilot basis, it would lose access to discounts available for direct purchases, including some portion of the $400 million in annual discounts available for drugs dispensed at retail pharmacies under the NDAA for fiscal year 2008. DOD officials told us that this loss would result from the managed care support contractor being the purchaser of the drugs, rather than DOD. DOD also stated that it would possibly lose access to the volume discounts obtained for drugs purchased for the mail-order pharmacy and military treatment facility pharmacies under a carve-in structure. DOD officials told us that these disadvantages of a carve-in structure remain the same today.
Additionally, during this review, DOD noted that dividing the TRICARE beneficiary population among contractors under a carve-in would dilute the leverage a single pharmacy benefit manager would have in the market. For example, DOD would lose economies of scale for claims processing services provided by the pharmacy contractor, resulting in increased costs. However, research studies have found, and officials from TRICARE’s managed care support contractors told us, that a contract structure with integrated medical and pharmacy services could result in cost savings for DOD. For example, one recent study found that employers with carve-in health plans had 3.8 percent lower total medical care costs compared to employers with pharmacy services carved out. The researchers attributed the cost difference, in part, to increased coordination of care for the carve-in plans, leading to fewer adverse events for patients, resulting in fewer inpatient admissions; they reported that plans with a carve-out arrangement had 7 percent higher inpatient admissions. Similarly, representatives from one managed care support contractor we spoke with stated they thought they could achieve similar cost savings to what DOD currently has through its federal pricing arrangements by using integrated medical and pharmacy services as a means of reducing costs in a carve-in arrangement. Being able to analyze integrated, in-house medical and pharmacy data may help health plans to lower costs by identifying high-cost beneficiaries, including those with chronic conditions such as asthma and diabetes, and targeting timely and cost-effective interventions for this population. Potential health benefits from data integration. In recommending that DOD pilot a carve-in pharmacy contract structure, one of the task force’s goals was to improve health outcomes as a result of integrated medical and pharmacy services. DOD noted in its task force response that it could achieve this goal under the current carve-out contract structure by including requirements in the pharmacy services contract and managed care support contracts requiring data sharing between the contractors. While the current contract requires the pharmacy and managed care support contractors to exchange data for care coordination, current TRICARE managed care support contractors told us there continue to be challenges with data sharing to facilitate disease management. Contractors expressed similar concerns about sharing medical and pharmacy data as part of our previous work related to DOD’s managed care support contracts. Additionally, during this review, officials from one of the managed care support contractors told us that they continue to find it challenging to generate data that provide a holistic view of beneficiaries when medical and pharmacy data remain separate. Representatives from another managed care support contractor told us that their disease management staff faced challenges in analyzing pharmacy data for groups of patients they were managing. They also told us that if these staff had more complete and real-time access to pharmacy data, as they would under a carve-in structure, they could be more proactive in assisting DOD’s efforts to identify patients who should participate in disease management programs. Additionally, researchers have found that disease management interventions may be challenging to conduct in a carve-out arrangement due to the lack of fully integrated medical and pharmacy data. 
According to DOD officials, any changes to the current contract structure would result in less efficient and inconsistent pharmacy service delivery across the three TRICARE regions, as officials observed when the retail pharmacy benefit was part of the managed care support contracts. One of DOD’s reasons for the initial carve-out was a concern that pharmacy services were not being consistently implemented across the TRICARE regions. For example, DOD officials told us that two health plans in different TRICARE regions were able to have different preferred drugs within the same therapeutic class, and while both drugs may be included on DOD’s formulary, beneficiaries in different parts of the country were not being consistently provided with the same drug. In addition, according to DOD, beneficiaries were dissatisfied with a benefit that was not portable across TRICARE regions—specifically, retail pharmacy networks differed by region, so beneficiaries who moved from one TRICARE region to another would have to change retail pharmacy networks. With one national pharmacy services contract, DOD officials said they can ensure that the formulary is implemented consistently and that beneficiaries have access to the same retail pharmacy network across the TRICARE regions. Since the current pharmacy services contract structure was implemented almost 10 years ago, DOD has not incorporated an assessment of the contract structure that includes an evaluation of alternative structures into its acquisition planning activities. DOD officials told us that they consider their task force response to be an assessment of the current contract structure. While the response included a justification for the current structure, it did not include an evaluation of the potential costs and benefits of alternative structures, such as carving in all or part of the pharmacy benefit. In addition, the acquisition plan for the upcoming contract described two alternative carve-out configurations (separate contracts for the mail-order and retail pharmacies and a government- owned facility to house drugs for the mail-order pharmacy contract). However, the plan similarly did not include an evaluation of the potential costs and benefits of these options, nor did the plan include an evaluation of any carve-in alternatives. DOD officials told us there are no current plans to conduct such an evaluation as part of the department’s acquisition planning efforts. DOD officials also told us that they continue to believe the current structure is appropriate because the current carve-out structure provides high beneficiary satisfaction and is achieving DOD’s original objectives, namely consistent provision of benefits, access to federal pricing arrangements, and transparency of pharmacy utilization and cost data. Further, officials told us that the current carve-out structure is more efficient to administer with one pharmacy services contractor than the previous carve-in structure that involved multiple managed care support contractors. While DOD officials believe the current structure is appropriate, there have been significant changes in the pharmacy benefit management market in the past decade. These changes include mergers, as well as companies offering new services that may change the services and options available to DOD. 
For example, representatives from one managed care support contractor we spoke with told us that they can offer different services to DOD today than they were able to offer when pharmacy services were part of the managed care support contracts. While the contractor had previously sub-contracted with a separate pharmacy benefit manager to provide pharmacy services under its managed care support contract, this contractor’s parent company now provides in-house pharmacy benefit management services for its commercial clients. Additionally, according to the parent company of another managed care support contractor, its recent decision to bring pharmacy benefit management services in-house will enhance its ability to manage total health care costs and improve health outcomes for clients who carve in pharmacy services. As we have previously reported, sound acquisition planning includes an assessment of lessons learned to identify improvements. The time necessary for such activities can vary greatly, depending on the complexity of the contract. We have also reported that a comparative evaluation of the costs and benefits of alternatives can provide an evidence-based rationale for why an agency has chosen a particular alternative (such as a decision to maintain or alter the current pharmacy services contract structure). We have reported that such an evaluation would consider possible alternatives and should not be developed solely to support a predetermined solution. With each new pharmacy services contract, DOD officials have the opportunity to conduct acquisition planning activities that help determine whether the contract—and its current structure—continues to meet the department’s needs, including providing the best value and services to the government and beneficiaries. These activities can include changing requirements as necessary, learning about current market trends, and incorporating new information and lessons learned. Acquisition planning can also incorporate an assessment of the pharmacy services contract structure that includes an evaluation of the potential costs and benefits of alternative contract structures. Incorporating such an evaluation into the acquisition planning for each new pharmacy services contract can provide DOD with an evidence-based rationale for why maintaining or changing the current structure is warranted. Without such an evaluation, DOD cannot effectively demonstrate to Congress and stakeholders that it has chosen the most appropriate contract structure, in terms of costs to the government and services for beneficiaries. To provide decision makers with more complete information on the continued appropriateness of the current pharmacy services contract structure, and to ensure the best value and services to the government and beneficiaries, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to take the following two actions: conduct an evaluation of the potential costs and benefits of alternative contract structures for the TRICARE pharmacy services contract; and incorporate such an evaluation into acquisition planning. We provided a draft of this report to DOD for comment. DOD generally concurred with our findings and conclusions and concurred with our recommendations. DOD also commented that based on past experience with alternative contract structures, it is confident that the current contract structure is the most cost efficient and beneficial. 
In response to our recommendation that DOD conduct an evaluation of the potential costs and benefits of alternative contract structures for the TRICARE pharmacy services contract, DOD commented that there is a lack of data to support inferences that a carve-in arrangement would result in cost savings to the government, and noted that the full development of two separate RFPs would be necessary to provide a valid cost comparison. While detailed cost estimates can be a useful tool for DOD, they are not the only means of evaluating alternative structures for the pharmacy services contract. For example, as we noted in our report, DOD has previously used RFIs to obtain information from industry to inform its decisions about the pharmacy services contract, and this process also may be helpful in identifying costs and benefits of alternative contract structures. In response to our recommendation that DOD incorporate such an evaluation into acquisition planning, DOD commented that it included an evaluation of its past contract experience into acquisition planning for the upcoming pharmacy services contract. However, as noted in our report, the acquisition plan for the upcoming contract did not include an evaluation of the potential costs and benefits of alternative contract structures, and DOD did not directly address how it would include such an evaluation in its acquisition planning activities. We continue to emphasize the importance of having an evidence-based rationale for why maintaining or changing the current structure is warranted. With each new pharmacy services contract, DOD officials have the opportunity to determine whether the contract continues to meet the department’s needs, including providing the best value to the government and services to beneficiaries. In addition, DOD stated in its comments that our report did not address its direct-care system and noted that carving pharmacy services back into the managed care support contracts would fragment the pharmacy benefit and undermine its goal of integrating all pharmacy points of service. Our review was focused on DOD’s purchased-care system for providing pharmacy services, although we did provide context about the direct-care system as appropriate. Furthermore, we did not recommend any specific structure for DOD’s pharmacy services contract, but rather that DOD evaluate the costs and benefits of alternative structures such that it can have an evidence-based rationale for its decisions. DOD’s comments are reprinted in appendix II. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense (Health Affairs); and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Department of Defense (DOD) officials who award and administer the TRICARE pharmacy services contract are required to meet relevant certification standards applicable to all DOD acquisition officials and, according to DOD officials, some of these officials also have pharmacy- specific experience. 
The training and related education and experience requirements are tailored to different levels of authority and, due to the size and complexity of the pharmacy services contract, the contracting officer and program manager for the pharmacy services contract are required to be certified at the highest levels, which require the most training and experience. In addition, all contracting officer’s representatives (COR) must meet specific training and experience requirements based on the complexity and risk of the contracts they will be working with, and the two CORs for the pharmacy services contract are also required to meet the highest COR certification level. For example, the CORs for the pharmacy services contract must complete at least 16 hours of COR-specific continuing education every 3 years, which is twice the amount required for low-risk, fixed-price contracts. DOD’s department-wide acquisition training is primarily provided through the Defense Acquisition University. Training is designed to provide a foundation of acquisition knowledge but is not targeted to specific contracts or contract types. Beyond DOD’s required training, the contracting officer, program manager, and CORs also have specialized experience in pharmacy and related issues. See table 2 for the specific certification standards for and pharmacy-specific experience of the officials responsible for awarding or administering the pharmacy services contract. In addition to the contact named above, Janina Austin, Assistant Director; Lisa Motley; Laurie Pachter; Julie T. Stewart; Malissa G. Winograd; and William T. Woods made key contributions to this report.
DOD offers health care coverage--medical and pharmacy services--to eligible beneficiaries through its TRICARE program. DOD contracts with managed care support contractors to provide medical services, and separately with a pharmacy benefit manager to provide pharmacy services that include the TRICARE mail-order pharmacy and access to a retail pharmacy network. This is referred to as a carve-out contract structure. DOD's current pharmacy contract ends in the fall of 2014. DOD has been preparing for its upcoming contract through acquisition planning, which included identifying any needed changes to contract requirements. Senate Report 112-173, which accompanied a version of the NDAA for fiscal year 2013, mandated that GAO review DOD's health care contracts. For this report, GAO examined: (1) how DOD identified changes needed, if any, to requirements for its upcoming pharmacy services contract; and (2) what, if any, assessment DOD has done of the appropriateness of its current contract structure. GAO reviewed DOD acquisition planning documents and federal regulations, and interviewed officials from DOD and its pharmacy services contractor. The Department of Defense (DOD) used various methods to identify needed changes to requirements for its upcoming pharmacy services contract. During acquisition planning for the upcoming TRICARE pharmacy services contract, DOD solicited feedback from industry through its market research process to align the contract requirements with industry best practices and promote competition. For example, DOD issued requests for information (RFI) in which DOD asked questions about specific market trends, such as ensuring that certain categories of drugs are distributed through the most cost-effective mechanism. DOD also issued an RFI to obtain information on promoting competition, asking industry for opinions on the length of the contract period. DOD officials told us that responses indicated that potential offerors would prefer a longer contract period because it would allow a new contractor more time to recover any capital investment made in implementing the contract. The request for proposals for the upcoming contract, issued in June 2013, included a contract period of 1 base year and 7 option years. DOD also identified changes to contract requirements in response to legislative changes to the TRICARE pharmacy benefit. For example, the National Defense Authorization Act (NDAA) for fiscal year 2013 required DOD to implement a mail-order pilot for maintenance drugs for beneficiaries who are also enrolled in Medicare Part B. DOD officials incorporated this change in the requirements for the upcoming pharmacy services contract. DOD has not conducted an assessment of the appropriateness of its current pharmacy services contract structure that includes an evaluation of the costs and benefits of alternative structures. Alternative structures can include incorporating all pharmacy services into the managed care support contracts--a carve-in structure--or a structure that incorporates certain components of DOD's pharmacy services, such as the mail-order pharmacy, into the managed care support contracts while maintaining a separate contract for other components. DOD officials told GAO they believe that DOD's current carve-out contract structure continues to be appropriate, as it affords more control over pharmacy data that allows for detailed data analyses and cost transparency, meets program goals, and has high beneficiary satisfaction. 
However, there have been significant changes in the pharmacy benefit management market in the past decade, including mergers and companies offering new services that may change the services and options available to DOD. GAO has previously reported that sound acquisition planning includes an assessment of lessons learned to identify improvements. Additionally, GAO has reported that a comparative evaluation of the costs and benefits of alternatives can provide an evidence-based rationale for why an agency has chosen a particular alternative. Without this type of evaluation, DOD cannot effectively demonstrate that it has chosen the most appropriate contract structure in terms of costs to the government and services for beneficiaries. GAO recommends that DOD conduct an evaluation of the potential costs and benefits of alternative structures for the TRICARE pharmacy services contract, and incorporate such an evaluation into acquisition planning. DOD concurred with GAO's recommendations.
Both DB and DC plans operate in a voluntary system with tax incentives for employers to offer a plan and for employees to participate. In the past, DC plans, such as 401(k) plans, were supplemental to DB plans. However, over the past several decades, there has been a shift in pension plan coverage; the number of DC plans has increased while the number of DB plans has declined. Today, DC plans are the dominant type of private-sector employee pension. Compared to DB plans, DC plans offer workers more control over their retirement asset management and greater portability over their retirement savings, but also shift much of the responsibility and certain risks onto workers. Workers generally must elect to participate in a plan and accumulate savings in their individual accounts by making regular contributions over their careers. Participants typically choose how to invest plan assets from a range of options provided under their plan and accordingly face investment risk. There are several different categories of DC plans, but most are types of cash or deferred arrangements in which employees can direct pre-tax dollars, along with any employer contributions, into an account, with any asset growth tax-deferred until withdrawal. One option available under some 401(k) plans is automatic enrollment, under which workers are enrolled in a 401(k) plan automatically, unless they explicitly choose to opt out. However, automatic enrollment has not been a traditional feature of 401(k) plans and, prior to 1998, plan sponsors feared that adopting automatic enrollment could lead to plan disqualification. In 1998, the Internal Revenue Service (IRS) addressed this issue by stating that a plan sponsor could automatically enroll newly hired employees and, in 2000, clarified that automatic enrollment is permissible for current employees who have not enrolled. Nonetheless, a number of considerations inhibited widespread adoption of automatic enrollment, including remaining concerns such as liability in the event that the employee’s investments under the plan did not perform satisfactorily, and concerns about state laws that prohibit withholding employee pay without written employee consent. More recently, provisions of the Pension Protection Act of 2006 (PPA) and subsequent regulations further facilitated the adoption of automatic enrollment by providing incentives for doing so and by protecting plans from fiduciary and legal liability if certain conditions are met. In September 2009, the Department of the Treasury announced IRS actions designed to further promote automatic enrollment and the use of automatic escalation policies. The Employee Retirement Income Security Act of 1974 (ERISA), as amended, defines and sets certain standards for employee benefit plans, including 401(k) plans, sponsored by private-sector employers. ERISA establishes the responsibilities of employee benefit plan decision makers and the requirements for disclosing information about plans. ERISA requires that plan fiduciaries, which generally include the plan sponsor, carry out their responsibilities prudently and do so solely in the interest of the plan’s participants and beneficiaries.
The Department of Labor’s (Labor) Employee Benefits Security Administration (EBSA) is the primary agency responsible for enforcing Title I of ERISA and thereby protecting private-sector pension plan participants and beneficiaries from the misuse or theft of pension assets. EBSA conducts civil and criminal investigations of plan fiduciaries and service providers to determine whether the provisions of ERISA or other relevant federal laws have been violated. In addition to Labor’s oversight, the Securities and Exchange Commission (SEC) provides oversight for 401(k) investments. For example, the SEC, among other responsibilities, regulates registered securities including company stock and mutual funds under securities law. One issue of concern with DC plans is that participation and saving rates have been low. In 2007, we reported that the majority of U.S. workers, in all age groups, did not participate in DC plans with their current employers. In fact, only about half of all workers participate in any type of employer-sponsored retirement plan at any given time. According to data from the Current Population Survey, about 48 percent of the total U.S. workforce was not covered by an employer-sponsored plan in 2007. About 40 percent worked for an employer that did not sponsor a plan, and about 8 percent did not participate in the plan that their employer sponsored. Certain segments of the working population have consistently had much lower rates of employment with employers sponsoring a plan, and lower participation rates than the working population overall, such as lower-income workers, younger workers, workers employed by smaller companies, and part-time workers who typically lack coverage compared to all full-time workers. According to our analysis of the 2004 Survey of Consumer Finances, only 62 percent of workers were offered a retirement plan by their employer, and 84 percent of those offered a retirement plan participated. Participation rates were even lower for DC plan participants since only 36 percent of working individuals participated in a DC plan with their current employers at the time of our report. Although our analysis focused on DC plans as a group, 401(k) plans make up the vast majority of DC plans. At the household level, participation rates were also low; only 42 percent of households had at least one member actively participating in a DC plan. Further, only 8 percent of workers in the lowest income quartile participated in DC plans offered by their current employer. Participation rates are low partly because not all employers offer a retirement plan, and even when employers offer such plans, workers may not participate. Some small employers are hesitant to sponsor retirement plans because of concerns about cost. In addition, DC participation rates for the U.S. workforce may be low because some employers sponsor a DB plan rather than a DC plan. When companies do sponsor employer plans, some workers may not be eligible to participate in their employers’ plan because they have not met the plan’s minimum participation requirements. In addition, workers may choose not to enroll, or delay enrolling, in a retirement plan for a number of reasons. For example, they may think—in some cases, incorrectly—they are not eligible. They may also believe they cannot afford to contribute to the plan and, for low-income workers, it may be difficult for them to contribute. Also, some may be focused on more immediate savings objectives, such as saving for a house. 
Many non-participants may not have made a specific decision, but rather fail to participate because of a tendency to procrastinate and follow the path that does not require an active decision. We also found that, for workers who participated in DC plans, plan savings were low. The median total DC account balance was $22,800 for individual workers with a current or former DC plan and $27,940 for households with a current or former DC plan. We reported that the account balances of lower-income and older workers were of particular concern. For example, workers in the lowest income quartile had a median total account balance of only $6,400. Older workers, particularly those who were less wealthy, also had limited retirement savings. For example, those aged 50 through 59 and at or below the median level of wealth had median total savings of only $13,800. The median total savings for all workers aged 50 through 59 was $43,200. We noted that the low level of retirement savings could be occurring for a couple of reasons. Workers who participated in a plan had modest overall balances in DC plans, suggesting a potentially small contribution toward retirement security for most plan participants and their households. For individuals nearing retirement age, total DC plan balances were also low, because DC plans were less common before the 1980s and older workers likely would not have had access to these plans their whole careers. Given trends in coverage since the 1980s, older workers close to retirement age were more likely than younger ones to have accrued retirement benefits in a DB plan. In addition, older workers who rely on DC plans for retirement income may also not have time to substantially increase their total savings without extending their working careers, perhaps for several years. Further, the value of the income tax deferral on contributions is smaller for lower-income workers than for similarly situated higher-income workers, making participation less appealing for lower-income workers. In addition to somewhat small savings contributions, 401(k) participants can take actions, such as taking loans, withdrawals, or lump-sum cashouts, that reduce the savings they have accumulated. This “leakage” continues to affect the retirement security of some participants. While participants may find features that allow access to 401(k) savings prior to retirement desirable, leakage can result in significant losses of retirement savings from the loss of compound interest as well as the financial penalties associated with early withdrawals. Current law limits participant access to 401(k) savings in order to preserve the favorable tax treatment for retirement savings and ensure that the savings are, in fact, being used to provide retirement income. The incidence and amount of the principal forms of leakage from 401(k) plans have remained relatively steady through the end of 2008. For example, we found that approximately 15 percent of 401(k) participants between the ages of 15 and 60 initiated at least one form of leakage in 1998, 2003, and 2006, with loans being the most popular type of leakage in all 3 years. We also found that cashouts made when a worker changed jobs, at any age, resulted in the largest amounts of leakage and the greatest proportional loss in retirement savings. Further, we reported that while most firms informed participants about the short-term costs of leakage, few informed them about the long-term costs.
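Those long-term costs are driven by the two factors noted above: the financial penalties on early withdrawals and the loss of compound growth. The hedged sketch below compares a hypothetical $10,000 cashout at a job change with leaving the same balance invested; the tax rate, return, and horizon are illustrative assumptions, not figures from our work.

```python
# Hypothetical comparison of cashing out a 401(k) balance at a job change
# versus leaving it invested until retirement. All rates are assumptions.

balance = 10_000.00        # amount cashed out at job separation
penalty_rate = 0.10        # 10% early-withdrawal tax penalty (before age 59 1/2)
income_tax_rate = 0.25     # assumed marginal income tax rate
annual_return = 0.06       # assumed nominal annual investment return
years_to_retirement = 25

cash_in_hand = balance * (1 - penalty_rate - income_tax_rate)
value_if_left_invested = balance * (1 + annual_return) ** years_to_retirement

print(f"Cash received after penalty and tax: ${cash_in_hand:,.2f}")
print(f"Approximate value at retirement if left invested: ${value_if_left_invested:,.2f}")
```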
As we reported in August of 2009, experts identified three legal requirements that had likely reduced the overall incidence and amounts of leakage, and another provision that may have exacerbated the long-term effects of leakage. Specifically, experts noted that the requirements imposing a 10 percent tax penalty on most withdrawals taken before age 59½, requiring participants to exhaust their plan’s loan provisions before taking a hardship withdrawal, and requiring plan sponsors to preserve the tax-deferred status of accounts with balances of more than $1,000 at job separation all helped reduce 401(k) leakage. However, experts also noted that the requirement for a 6-month suspension of all contributions to an account following a hardship withdrawal exacerbated the effects of leakage. Treasury officials told us that this provision is intended to serve as a test to ensure that the hardship is real and that the participants have no other assets available to address the hardship. However, a few outside experts believed that this provision deters hardship withdrawals and noted that it seems to contradict the goal of creating retirement income. One expert noted that the provision unnecessarily prevented participants who were able to continue making contributions from doing so. For example, an employed participant taking a withdrawal for a discrete, one-time purpose, such as paying for medical expenses, may otherwise be able to continue making contributions. In our August 2009 report, we recommended that Congress consider changing the requirement for the 6-month contribution suspension following a hardship withdrawal. We also called for measures to provide participants with more information on the disadvantages of hardship withdrawals. Although participants may choose to take money out of their 401(k) plans, fees and other factors outside of participants’ control can also diminish their ability to build their retirement savings. Participants often pay fees, such as investment fees and record-keeping fees, and these fees may significantly reduce retirement savings, even with steady contributions and without leakage. Investment fees, which are charged by companies managing mutual funds and other investment products for all services related to operating the fund, comprise the majority of fees in 401(k) plans and are typically borne by participants. Plan record-keeping fees generally account for the next largest portion of plan fees. These fees cover the cost of various administrative activities carried out to maintain participant accounts. Although plan sponsors often pay for record-keeping fees, participants bear them in a growing number of plans. We previously reported that participants can be unaware that they pay any fees at all for their 401(k) investments. For example, investment and record-keeping fees are often charged indirectly by taking them out of investment returns prior to reporting those returns to participants. Consequently, more than 80 percent of 401(k) participants reported in a nationwide survey not knowing how much they pay in fees. The reduction to retirement savings resulting from fees is very sensitive to the size of the fees paid; even a seemingly small fee can have a large negative effect on savings in the long run. As shown in figure 1, an additional 1 percent annual charge for fees would significantly reduce an account balance at retirement.
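A small worked example of the sensitivity described above: the same stream of contributions compounded at a net return 1 percentage point lower, reflecting an additional 1 percent in annual fees, produces a markedly smaller balance at retirement. The contribution level, gross return, and horizon below are illustrative assumptions rather than the specific figures behind figure 1.

```python
# Illustrative effect of an additional 1 percent annual fee on a 401(k)
# balance at retirement. Contribution, return, and horizon are hypothetical.

def balance_at_retirement(annual_contribution, net_annual_return, years):
    """Future value of level end-of-year contributions compounded annually."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + net_annual_return) + annual_contribution
    return balance

contribution = 5_000.00
years = 30
gross_return = 0.07

low_fee = balance_at_retirement(contribution, gross_return - 0.005, years)   # 0.5% total annual fees
high_fee = balance_at_retirement(contribution, gross_return - 0.015, years)  # 1.5% total annual fees

print(f"Balance with 0.5% annual fees: ${low_fee:,.0f}")
print(f"Balance with 1.5% annual fees: ${high_fee:,.0f}")
print(f"Reduction from the extra 1% in fees: {1 - high_fee / low_fee:.1%}")
```

Even under these modest assumptions, the extra percentage point in fees reduces the ending balance by roughly one-sixth, which is why clear fee disclosure matters to participants.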
Although all 401(k) plans are required to provide disclosures on plan operations, participant accounts, and the plan’s financial status, they are often not required to disclose the fees borne by individual participants. These disclosures are provided in a piecemeal fashion and do not provide a simple way for participants to compare plan investment options and their fees. Some documents that contain fee information are provided to participants automatically, whereas others, such as prospectuses or fund profiles, may require that participants seek them out. According to industry professionals, participants may not know to seek such documents. Most industry professionals agree that information about investment fees—such as the expense ratio, a fund’s operating fees as a percentage of its assets—is fundamental for plan participants to compare their options. Participants also need to be aware of other types of fees—such as record-keeping fees and redemption fees or surrender charges imposed for changing and selling investments—to gain a more complete understanding of all the fees that can affect their account balances. Whether participants receive only basic expense ratio information or more detailed information on various fees, presenting the information in a clear, easily comparable format can help participants understand the content of disclosures. In our previous reports, we recommended that Congress consider requiring plan sponsors to disclose fee information on 401(k) investment options to participants, such as the expense ratios, and Congress has introduced several bills to address fee disclosures. The SEC identified certain undisclosed arrangements in the business practices of pension consultants that the agency referred to as conflicts of interest and released a report in May 2005 that raised questions about whether some pension consultants are fully disclosing potential conflicts of interest that may affect the objectivity of the advice. The report highlighted concerns that compensation arrangements with brokers who sell mutual funds may provide incentives for pension consultants to recommend certain mutual funds to a 401(k) plan sponsor and create conflicts of interest that are not adequately disclosed to plan sponsors. Plan sponsors may not be aware of these arrangements and thus could select mutual funds recommended by the pension consultant over lower-cost alternatives. As a result, participants may have more limited investment options and may pay higher fees for these options than they otherwise would. Conflicts of interest among plan sponsors and plan service providers can also affect participants’ retirement savings. In our prior work on conflicts of interest in DB plans, we found a statistical association between inadequate disclosure of potential conflicts of interest and lower investment returns for ongoing plans, suggesting the possible adverse financial effect of such nondisclosure. Specifically, we detected lower annual rates of return for those ongoing plans associated with consultants that had failed to disclose significant conflicts of interest. These lower rates generally ranged from a statistically significant 1.2 to 1.3 percentage points over the 2000 to 2004 period. Although this work was done for DB plans, some of the same conflicts apply to DC plans as well. Problems may occur when companies providing services to a plan also receive compensation from other service providers.
Without disclosing these arrangements, service providers may be steering plan sponsors toward investment products or services that may not be in the best interest of participants. Conflicts of interest may be especially hidden when there is a business arrangement between one 401(k) plan service provider and a third-party provider for services that they do not disclose to the plan sponsor. The problem with these business arrangements is that the plan sponsor will not know who is receiving the compensation and whether or not the compensation fairly represents the value of the service being rendered. Without that information, plan sponsors may not be able to identify potential conflicts of interest and fulfill their fiduciary duty. If the plan sponsors do not know that a third party is receiving these fees, they cannot monitor them, evaluate the worthiness of the compensation in view of services rendered, and take action as needed. Because the risk of 401(k) investments is largely borne by the individual participant, such hidden conflicts can affect participants directly by lowering investment returns. We previously recommended that Congress consider amending the law to explicitly require that 401(k) service providers disclose to plan sponsors the compensation that providers receive from other service providers. Although Congress has not changed the law, Labor has proposed regulations to expand fee and compensation disclosures to help address conflicts of interests. A recent change in law to facilitate automatic enrollment shows promise for increasing participation rates and savings. Under automatic enrollment, a worker is enrolled into the plan automatically, or by default, unless they explicitly choose to opt out. In addition, for participants who do not make their own choices, plan sponsors also establish default contribution rates—the portion of an employee’s salary that will be deposited in the plan—and a default investment fund—the fund or other vehicle into which deferred savings will be invested. The Pension Protection Act of 2006 and recent regulatory changes have facilitated plan sponsors’ adoption of automatic enrollment. In fact, plan sponsors have increasingly been adopting automatic enrollment policies in recent years. According to Fidelity Investments, the number of plans with automatic enrollment has increased from 1 percent in December 2004 to about 16 percent in March 2009, with higher rates of adoption among larger plan sponsors. Fidelity Investments estimates that 47 percent of all 401(k) participants are in plans with automatic enrollment. Employers may also adopt an automatic escalation policy, another policy intended to increase retirement savings. Under automatic escalation, in the absence of an employee indicating otherwise, an employee’s contribution rates would be automatically increased at periodic intervals, such as annually. For example, if the default contribution rate is 3 percent of pay, a plan sponsor may choose to increase an employee’s rate of saving by 1 percent per year, up to some maximum, such as 6 percent. One of our recent reports found that automatic enrollment policies can result in considerably increased participation rates for plans adopting them, with some plans’ participation rates increasing to as high as 95 percent and that these high participation rates appeared to persist over time. 
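The default contribution and escalation mechanics described above lend themselves to a simple illustration. The sketch below computes the deferral rate for a participant who never makes an active election, using the hypothetical schedule cited earlier (a 3 percent default rising 1 percentage point per year to a 6 percent cap); the schedule logic is an assumption about how a sponsor might configure such a policy, not a description of any particular plan.

```python
# Illustrative automatic-escalation schedule: a 3% default contribution rate
# rising 1 percentage point per year to a 6% cap. The schedule is hypothetical.

def contribution_rate(years_enrolled, default=0.03, step=0.01, cap=0.06):
    """Deferral rate for a participant who never makes an active election."""
    return min(default + step * years_enrolled, cap)

for year in range(6):
    print(f"Year {year + 1}: {contribution_rate(year):.0%} of pay")
# Prints 3%, 4%, 5%, 6%, then stays capped at 6% in later years
```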
Moreover, automatic enrollment had a significant effect on subgroups of workers with relatively low participation rates, such as lower-income and younger workers. For example, according to a 2007 Fidelity Investments study, only 30 percent of workers aged 20 to 29 were participating in plans without automatic enrollment. In plans with automatic enrollment, the participation rate for workers in that age range was 77 percent, a difference of 47 percentage points. Automatic enrollment, through its default contribution rates and default investment vehicles, offers an easy way to start saving because participants do not need to decide how much to contribute and how to invest these contributions unless they are interested in doing so. However, current evidence is mixed with regard to the extent to which plan sponsors with automatic enrollment have also adopted automatic escalation policies. In addition, many plan sponsors have adopted relatively low default contribution rates. While the adoption rate for automatic enrollment shows promise, a lag in adoption of automatic escalation policies, in combination with low default contribution rates, could result in low saving rates for participants who do not increase contribution rates over time. Another recent GAO report offers additional evidence about the positive impact automatic enrollment could have on workers’ savings levels at retirement. Specifically, we projected DC pension benefits for a stylized scenario where all employers that did not offer a pension plan were required to sponsor a DC plan with no employer contribution; that is, workers had universal access to a DC plan. When we coupled universal access with automatic enrollment, we found that approximately 91 percent of workers would have DC savings at retirement. Further, we found that about 84 percent of workers in the lowest income quartile would have accumulated DC savings. In our work on automatic enrollment, we found that plan sponsors have overwhelmingly adopted TDFs as the default investment. TDFs allocate their investments among various asset classes and shift that allocation from equity investments to fixed-income and money market investments as a “target” retirement date approaches; this shift in asset allocation is commonly referred to as the fund’s “glide path.” Recent evidence suggests that participants who are automatically enrolled in plans with TDF defaults tend to have a high concentration of their savings in these funds. However, pension industry experts have raised questions about the risks of TDFs. For example, some TDFs designed for those expecting to retire in or around 2010 lost 25 percent or more in value following the 2008 stock market decline, leading some to question how plan sponsors evaluate, monitor, and use TDFs. GAO will be addressing a request from this committee to examine some of these concerns. DC plans, particularly 401(k) plans, have clearly overtaken DB plans as the principal retirement plan for U.S. workers and are likely to become the sole retirement savings plan for most current and future workers. Yet, 401(k) plans face major challenges, not least of which is the fact that many employers do not offer employer-sponsored 401(k) plans or any other type of plan to their workers. 
This lack of coverage, coupled with the fact that participants in 401(k) plans sometimes spend their savings prior to retirement or have their retirement savings eroded by fees, makes it evident that, without some changes, a large number of people will retire with little or no retirement savings. Employers, workers, and the government all have to work together to ensure that 401(k) plans provide a meaningful contribution to retirement security. Employers have a role in first sponsoring 401(k) plans and then looking at ways to encourage participation, such as utilizing automatic enrollment and automatic escalation. Workers have a role in participating and saving in 401(k) plans when they are given the opportunity to do so. In addition, both employers and workers have a role in preserving retirement savings. Government policy makers have an important role in setting the conditions and the appropriate incentives that both encourage desired savings behavior and protect participants. Recent government action to enhance participation in 401(k) plans is a good first step. But action is still needed to improve disclosure on fees, especially those that are hidden, and measures need to be taken to discourage leakage. As this Committee and others move forward to address these issues, improvements may be made to 401(k) plans that can help ensure that savings in such plans are an important part of individuals’ secure retirement. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further questions about this statement, please contact Barbara D. Bovbjerg at (202) 512-7215 or bovbjergb@gao.gov. Individuals making key contributions to this statement included Tamara Cross, David Lehrer, Joseph Applebaum, James Bennett, Jennifer Gregory, Angela Jacobs, Jessica Orr, and Craig Winslow. Retirement Savings: Automatic Enrollment Shows Promise for Some Workers, but Proposals to Broaden Retirement Savings for Other Workers Could Face Challenges. GAO-10-31. Washington, D.C.: October 23, 2009. Retirement Savings: Better Information and Sponsor Guidance Could Improve Oversight and Reduce Fees for Participants. GAO-09-641. Washington, D.C.: September 4, 2009. 401(k) Plans: Policy Changes Could Reduce the Long-term Effects of Leakage on Workers’ Retirement Savings. GAO-09-715. Washington, D.C.: August 28, 2009. Private Pensions: Alternative Approaches Could Address Retirement Risks Faced by Workers but Pose Trade-offs. GAO-09-642. Washington, D.C.: July 24, 2009. Private Pensions: Conflicts of Interest Can Affect Defined Benefit and Defined Contribution Plans. GAO-09-503T. Washington, D.C.: March 24, 2009. Private Pensions: Fulfilling Fiduciary Obligations Can Present Challenges for 401(k) Plan Sponsors. GAO-08-774. Washington, D.C.: July 16, 2008. Private Pensions: GAO Survey of 401(k) Plan Sponsor Practices (GAO-08-870SP, July 2008), an E-supplement to GAO-08-774. GAO-08-870SP. Washington, D.C.: July 16, 2008. Private Pensions: Low Defined Contribution Plan Savings May Pose Challenges to Retirement Security, Especially for Many Low-Income Workers. GAO-08-8. Washington, D.C.: November 29, 2007. Private Pensions: Information That Sponsors and Participants Need to Understand 401(k) Plan Fees. GAO-08-222T. Washington, D.C.: October 30, 2007. Private Pensions: 401(k) Plan Participants and Sponsors Need Better Information on Fees. GAO-08-95T. Washington, D.C.: October 24, 2007. 
Employer-Sponsored Health and Retirement Benefits: Efforts to Control Employer Costs and the Implications for Workers. GAO-07-355. Washington, D.C.: March 30, 2007. Private Pensions: Increased Reliance on 401(k) Plans Calls for Better Information on Fees. GAO-07-530T. Washington, D.C.: March 6, 2007. Employee Benefits Security Administration: Enforcement Improvements Made but Additional Actions Could Further Enhance Pension Plan Oversight. GAO-07-22. Washington, D.C.: January 18, 2007. Private Pensions: Changes Needed to Provide 401(k) Plan Participants and the Department of Labor Better Information on Fees. GAO-07-21. Washington, D.C.: November 16, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the past 25 years, the number of defined benefit (DB) plans has declined while the number of defined contribution (DC) plans has increased. Today, DC plans are the dominant type of employer-sponsored retirement plan, with more than 49 million U.S. workers participating in them. 401(k) plans currently cover over 85 percent of active DC plan participants and are the fastest growing type of employer-sponsored pension plan. Given these shifts in pension coverage, workers are increasingly relying on 401(k) plans for their pension income. Recently, policy makers have focused attention on the ability of 401(k) plans to provide participants with adequate retirement income and the challenges that arise as 401(k) plans become the predominant retirement savings plan for employees. As a result, GAO was asked to report on (1) challenges to building and maintaining savings in 401(k) plans, and (2) recent measures to improve 401(k) participation and savings levels. There are challenges to building savings through 401(k) plans. While low participation rates may be due, in part, to the fact that some workers participate in DB plans, there is also a large portion of workers who do not have access to an employer-sponsored retirement plan, as well as some who do not enroll in such a plan when an employer offers it. We found that for those who did participate, their overall balances were low, particularly for low-income and older workers who either did not have the means to save or had not had the opportunity to save in 401(k)s for much of their working lifetimes. There are also challenges workers face in maintaining savings in 401(k) plans. For example, 401(k) leakage--actions participants take that reduce the savings they have accumulated, such as borrowing from the account, taking hardship withdrawals, or cashing out the account when they change jobs--continues to affect retirement savings and increases the risk that 401(k) plans may yield insufficient retirement income for individual participants. Further, various fees, such as investment and other hidden fees, can erode retirement savings, and individuals may not be aware of their impact. Automatic enrollment of employees in 401(k) plans is one measure to increase participation rates and saving. Under automatic enrollment, which was encouraged by the Pension Protection Act of 2006 and recent regulatory changes, employers enroll workers into plans automatically unless they explicitly choose to opt out. Plan sponsors are increasingly adopting automatic enrollment policies, which can considerably increase participation rates, with some plans' rates reaching as high as 95 percent. Employers can also set default contribution rates and investment funds. Though target-date funds are a common type of default investment fund, there are concerns about their risks, particularly for participants nearing retirement.
You are an expert at summarizing long articles. Proceed to summarize the following text: GAO’s body of work related to prior workforce reductions at DOD and other organizations demonstrates the importance of strategic workforce planning, including a consideration of costs, to help ensure that DOD has a fully capable workforce to carry out its mission. According to GAO’s Standards for Internal Control, management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce that has the required skills that match those necessary to achieve organizational goals. Section 322 of the National Defense Authorization Act for Fiscal Year 1991 directed DOD to establish guidelines for reductions in the number of civilian workers employed by industrial- or commercial-type activities. The act also directed certain DOD agencies or components to submit 5-year master plans for those workers, providing information on workload, demographics, and employee furloughs and involuntary separations, with the materials submitted to Congress in support of the budget request for fiscal year 1991. Subsequently, in 1992, we reported that DOD intended to undertake a multiyear downsizing effort aimed at reducing the civilian workforce by nearly 229,000 positions, or to 20 percent below its fiscal year 1987 levels. However, in 2000, we reported that DOD’s approach to prior force reductions was not oriented toward shaping the makeup of the workforce, resulting in significant imbalances in terms of shape, skills, and retirement eligibility. See GAO, Defense Force Management: Expanded Focus in Monitoring Civilian Force Reductions Is Needed, GAO/T-NSIAD-92-19 (Washington, D.C.: March 18, 1992); and Defense Force Management: Challenges Facing DOD as It Continues to Downsize Its Workforce, GAO/NSIAD-93-123 (Washington, D.C.: February 12, 1993). These reviews considered, among other things, reductions associated with a base closure and realignment round and the impacts of Operations Desert Shield and Desert Storm. We concluded that broader assessments were needed to determine the magnitude of civilian workforce reductions and their potential impact on given areas and regions, as well as the impact of hiring constraints on the ability of all DOD civilian organizations to efficiently and effectively accomplish their missions. We also have reported that the approaches DOD has relied on to accomplish past civilian workforce downsizing have sometimes had unintended consequences, such as workforce skills imbalances. For instance, DOD’s approach to past civilian downsizing relied primarily on voluntary attrition and retirements and varying freezes on hiring authority to achieve force reductions, as well as the use of existing authorities for early retirements to encourage voluntary separations at activities facing major reductions-in-force. The National Defense Authorization Act for Fiscal Year 1993 authorized a number of transition assistance programs for civilian employees, including financial separation incentives—“buyouts”—to induce the voluntary separation of civilian employees. DOD credited these separation incentives, early retirement authority, and various job placement opportunities with helping it avoid nearly 200,000 involuntary demotions and separations. The tools available to DOD to manage its civilian downsizing helped mitigate some adverse effects of force reductions. 
However, DOD’s approach to civilian workforce reductions was less oriented toward shaping the makeup of the workforce than was the approach it used to manage its military downsizing and resulted in significant imbalances in terms of shape, skills, and retirement eligibility. We also reported that, while managing force reductions for its uniformed military, DOD followed a policy of trying to achieve and maintain a degree of balance between its accessions and losses in order to “shape” its uniformed forces in terms of rank, years of service, and specialties. In contrast, we did not see as much attention devoted to planning and managing civilian workforce reductions. Moreover, the Acquisition 2005 Task Force’s final reportinstance, that this was especially true of the civilian acquisition workforce, which from September 1989 to September 1999 was reduced by almost 47 percent. This rate of reduction substantially exceeded that of the rest of the DOD workforce. Eleven consecutive years of downsizing produced serious imbalances in the skills and experience of the highly talented and specialized civilian acquisition workforce, putting DOD on the verge of a retirement-driven talent drain. Our work on the downsizing conducted by other organizations adds further perspective on some challenges associated with certain strategies and the need to conduct effective planning when downsizing a workforce. of downsizing undertaken by 17 private In 1995, we conducted a review companies, 5 states, and 3 foreign governments, generally selected because they were reputed to have downsized successfully. We reported that a number of factors may constrain organizations’ downsizing strategies, such as public sentiment, budget limitations, legislative mandates to maintain certain programs, and personnel laws. Moreover, we found that using attrition as a sole downsizing tool can result in skills imbalances in an organization’s workforce because the employees who leave are not necessarily those the organization determined to be excess. Further, we also found that attrition is often not sufficient to reduce employment levels in the short term. In addition, some workforce reduction strategies have been found to slow the hiring, promotion, and transfer process and create skills imbalances. However, we found that one key theme emerged from such downsizing efforts. Specifically, most organizations found that workforce planning had been essential in identifying positions to be eliminated and pinpointing specific employees for potential separation. In organizations where planning did not occur or was not effectively implemented, difficulties arose in the downsizing. For example, we reported that a lack of effective planning for skills retention can lead to a loss of critical staff, and that an organization that simply reduces the number of employees without changing work processes will likely have staffing growth recur eventually. We have also identified the potential cost implications of downsizing in our prior work. In 1995, we reported that the savings realized from government downsizing efforts are difficult to estimate. Payroll savings attributed to workforce reductions would not be the amount of actual savings to the federal government from the personnel reductions because of other costs associated with such efforts—for example, separation incentives— or, in the case of reductions-in-force, severance pay. 
In addition, the ultimate savings would depend on what happened to the work previously performed by the eliminated personnel. For example, if some of the work was contracted out to private companies, contract costs should be considered in determining whether net savings resulted from workforce reductions. In 2001, we concluded that, considering the enormous changes that DOD’s civilian workforce had undergone and the external pressures and demands faced by the department, taking a strategic approach to human capital would be crucial to organizational results. As I will discuss further, this is no less true today than it was in 2001. I turn now to opportunities we have identified for DOD to enhance its strategic human capital planning. Since the end of the Cold War, the civilian workforce has undergone substantial change, due primarily to downsizing, base realignments and closures, competitive sourcing initiatives, and DOD’s changing mission. For example, between fiscal years 1989 and 2002, DOD’s civilian workforce shrank from 1,075,437 to 670,166—about a 38 percent reduction. According to the department, as of January 2012, DOD’s total civilian workforce had grown to include about 783,000 civilians. As I have noted, the achievement of DOD’s mission is dependent in large part on the skills and expertise of its civilian workforce, and today’s near-term and long-term fiscal outlook underscores the importance of a strategic and efficient approach to human capital management. The ability of federal agencies to achieve their missions and carry out their responsibilities depends in large part on whether they can sustain a workforce that possesses the necessary education, knowledge, skills, and competencies. Our work has shown that successful public and private organizations use strategic management approaches to prepare their workforces to meet present and future mission requirements. Preparing a strategic human capital plan encourages agency managers and stakeholders to systematically consider what is to be done, how it will be done, and how to gauge progress and results. While the department has made progress adopting some of these approaches, we remain concerned that key elements still missing from its strategic workforce planning will hinder DOD’s ability to most effectively and efficiently achieve its mission. As we have reported in the past, federal agencies have used varying approaches to develop and present their strategic workforce plans. To facilitate effective workforce planning, we and the Office of Personnel Management have identified six leading principles such workforce plans should incorporate, including: aligning workforce planning with strategic planning and budget formulation; involving managers, employees, and other stakeholders in planning; identifying critical skills and competencies and analyzing workforce gaps; employing workforce strategies to fill the gaps; building the capabilities needed to support workforce strategies through steps to ensure the effective use of human capital flexibilities; and monitoring and evaluating progress toward achieving workforce planning and strategic goals. The application of these principles will vary depending on the particular circumstances the agency faces. For example, an agency that faces an increasing workload and needs a long lead time to train employees hired to replace those retiring may focus its efforts on estimating and managing retirements. 
Another agency with a future workload that could rise or fall sharply may focus on identifying skills to manage a combined workforce of federal employees and contractors. Over the past few years, Congress has enacted a number of provisions requiring DOD to conduct human capital planning efforts for its overall civilian, senior leader, and acquisition workforces and provided various tools to help manage the department’s use of contractors, who augment DOD’s total civilian workforce. For example, the National Defense Authorization Act for Fiscal Year 2006 directed DOD to create and periodically update a strategic human capital plan that addressed, among other things, the existing critical skills and competencies of the civilian workforce as well as projected needs, gaps in the existing or projected civilian workforce, and projected workforce trends. Subsequent acts established additional requirements for the human capital plan, including requirements to assess issues related to funding of its civilian workforce. We have closely monitored DOD’s efforts to address the aforementioned requirements. In our September 2010 review of DOD’s 2009 update to its human capital strategic plan we found that, although DOD had demonstrated some progress in addressing the legislative requirements related to its Civilian Human Capital Strategic Workforce Plan, several key elements continued to be missing from the process—including such elements as competency gap analyses and monitoring of progress. Our work found that DOD’s plan addressed the requirement to assess critical skills. Specifically, the overall civilian workforce plan identified 22 mission- critical occupations that, according to the department, represent the results of its assessment of critical skills. According to DOD, mission- critical occupations are those occupations that are key to current and future mission requirements, as well as those that present a challenge regarding recruitment and retention rates and for which succession planning is needed. Examples of mission-critical occupations include (1) contracting, (2) accounting, and (3) information technology management. However, as noted, DOD’s plan lacked such key elements as competency gap analysis and monitoring of progress. Our prior work identified competency gap analyses and monitoring progress as two key elements in the strategic workforce planning process. Competency gap analyses enable an agency to develop specific strategies to address workforce needs and monitoring progress demonstrates the contribution of workforce planning to achieving program goals. For example, at the time of our review, because the plan discussed competency gap analyses for only 3 of the 22 mission-critical occupations and did not discuss competency gaps for the other 19 mission-critical occupations, we determined that the requirement was only partially addressed. Moreover, DOD was in the initial stages of assessing competency gaps for its senior leader workforce, but it had not completed the analysis needed to identify gaps. Without including analyses of gaps in critical skills and competencies as part of its strategic workforce planning efforts, DOD and the components may not be able to design and fund the best strategies to fill their talent needs through recruiting and hiring or to make appropriate investments to develop and retain the best possible workforce. 
Further, DOD leadership may not have the information necessary to make informed decisions about future workforce reductions, should further reductions to its workforces become necessary. We currently have ongoing work assessing DOD’s 2010 Strategic Workforce Plan, which the department released in March 2012. The results of this review are expected to be released in September 2012. In light of the challenges DOD has faced in its strategic workforce planning, we support the department’s participation in efforts being made across the federal government to address governmentwide critical skills gaps. Currently, the Office of Personnel Management and DOD are leading a working group composed of members of the Chief Human Capital Officers Council tasked with (1) identifying mission-critical occupations and functional groups, (2) developing strategies to address gaps in these occupations and groups, and (3) implementing and monitoring these strategies. Our reviews of DOD’s acquisition, information technology, and financial management workforces—which include a number of DOD’s identified mission-critical occupations—amplify some of our overarching observations related to strategic workforce planning. In fiscal year 2011 alone, DOD obligated about $375 billion to acquire goods and services to meet its mission and support its operations in the United States and abroad. As noted, our prior work found that the significant reductions to the acquisition workforce in the 1990s produced serious imbalances in the skills and experience of this highly talented and specialized workforce. The lack of an adequate number of trained acquisition and contract oversight personnel has, at times, contributed to unmet expectations and placed DOD at risk of potentially paying more than necessary. Our February 2011 high-risk report noted that DOD needs to ensure that its acquisition workforce is adequately sized, trained, and equipped to meet department needs. We further reported in November 2011 that the department had focused much-needed attention on rebuilding its acquisition workforce and had made some progress in growing the workforce, identifying the skills and competencies it needed, and using that information to help update its training curriculum. While DOD has acknowledged that rebuilding its acquisition workforce is a strategic priority, our most recent review of the Defense Acquisition Workforce Development Fund found that DOD continues to face challenges in strategic workforce planning for its acquisition workforce. Specifically, we found that DOD lacks an overarching strategy to clearly align this fund with its acquisition workforce plan. The department also has not developed outcome-related metrics, such as measures of the extent to which the fund is helping DOD address gaps in its workforce skills and competencies. Moreover, we remain concerned that the acquisition workforce continues to face challenges in terms of the age and retirement eligibility of its members. According to the most recent reported data from the Federal Acquisition Institute, as of December 2011, the average age of the acquisition workforce ranged from 47 years to 51.7 years, with at least 36 percent of the workforce becoming eligible to retire over the next 10 years. We have also identified a number of challenges associated with DOD’s workforce planning for its financial management and information technology workforces. 
With regard to the financial management workforce, we reported in July 2011 that DOD’s financial management has been on GAO’s high-risk list since 1995 and, despite several reform initiatives, remains on the list today. Specifically, we noted that effective financial management in DOD will require a knowledgeable and skilled workforce that includes individuals who are trained and certified in accounting. DOD accounting personnel are responsible for accounting for funds received through congressional appropriations, the sale of goods and services by working capital fund businesses, revenue generated through nonappropriated fund activities, and the sales of military systems and equipment to foreign governments or international organizations. According to DOD’s fiscal year 2012 budget request, the Defense Finance and Accounting Service processed approximately 198 million payment-related transactions and disbursed over $578 billion in fiscal year 2010. However, we also reported in July 2011 that DOD’s strategic workforce plan lacked a competency gap analysis for its financial management workforce, thus limiting the information DOD has on its needs and gaps in that area and the department’s ability to develop an effective financial management recruitment, retention, and investment strategy to address other financial management challenges. With regard to DOD’s information technology workforce, we reported in 2011 that, as threats to federal information technology infrastructure and systems continue to grow in number and sophistication, the ability to secure this infrastructure and these systems will depend on the knowledge, skills, and abilities of the federal and contractor workforce that implements and maintains these systems. We noted that DOD’s information assurance workforce plan—which addresses information technology—incorporates critical skills, competencies, categories, and specialties of the information assurance workforce, but only partially describes strategies to address gaps in human capital approaches and critical skills and competencies. DOD’s workforce is composed of military personnel, civilians, and contractors. DOD has acknowledged, however, that with approximately 30 percent of its workforce eligible to retire by March 31, 2015, and the need to reduce its reliance on contractors to augment the current workforce, it faces a number of significant challenges. Our September 2010 review of DOD’s strategic workforce plan found that the department had issued a directive stating that missions should be accomplished using the least costly mix of personnel (military, civilian, and contractors) consistent with military requirements and other needs. However, the department’s workforce plan did not provide an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. More recently, the House Report accompanying a proposed bill for the National Defense Authorization Act for Fiscal Year 2013 directs GAO to assess what measures DOD is taking to appropriately balance its current and future workforce structure against its requirements. Specifically, we plan for our review to include: (1) the process by which DOD identified its civilian workforce requirements, taking into consideration the withdrawal from Iraq and impending withdrawal from Afghanistan; and (2) the analysis done by DOD to identify core or critical functions, including which of those functions would be most appropriately performed by military, civilian, or contractor personnel. 
Our report is due to the Armed Services Committees of the House and Senate by March 15, 2013. H.R. Rep. No. 112-479, at 196-197 (2012), which accompanies H.R. 4310, 112th Cong. (2012). In conclusion, DOD has a large, diverse federal civilian workforce that is key to maintaining our national security. However, as we have noted, DOD’s workforce also includes military and contractor personnel and changes made to one of these groups may impact the others. As such, we are currently assessing the measures the department is taking to appropriately balance its current and future workforce structure and its requirements. Chairman Forbes, Ranking Member Bordallo, this concludes my prepared remarks. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For future questions about this statement, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, at (202) 512-3604 or farrellb@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Margaret Best, Assistant Director; Spencer Tacktill; Jennifer Weber; Erik Wilkins-McKee; Nicole Willems; and John Van Schaik. In addition, Penny Berrier, Mark Bird, Timothy DiNapoli, Gayle Fischer, Steven Lozano, Belva Martin, Carol Petersen, and Rebecca Shea made contributions to this report. Defense Acquisition Workforce: Improved Processes, Guidance, and Planning Needed to Enhance Use of Workforce Funds. GAO-12-747R. Washington, D.C.: June 20, 2012. Defense Acquisitions: Further Actions Needed to Improve Accountability for DOD’s Inventory of Contracted Services. GAO-12-357. Washington, D.C.: April 6, 2012. Defense Workforce: DOD Needs to Better Oversee In-sourcing Data and Align In-sourcing Efforts with Strategic Workforce Plans. GAO-12-319. Washington, D.C.: February 9, 2012. Streamlining Government: Key Practices from Select Efficiency Initiatives Should Be Shared Governmentwide. GAO-11-908. Washington, D.C.: September 30, 2011. DOD Financial Management: Numerous Challenges Must Be Addressed to Improve Reliability of Financial Information. GAO-11-835T. Washington, D.C.: July 27, 2011. DOD Civilian Personnel: Competency Gap Analyses and Other Actions Needed to Enhance DOD’s Strategic Workforce Plans. GAO-11-827T. Washington, D.C.: July 14, 2011. High Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Human Capital: Further Actions Needed to Enhance DOD’s Civilian Strategic Workforce Plan. GAO-10-814R. Washington, D.C.: September 27, 2010. Workforce Planning: Interior, EPA, and the Forest Service Should Strengthen Linkages to Their Strategic Plans and Improve Evaluation. GAO-10-413. Washington, D.C.: March 31, 2010. Human Capital: Opportunities Exist to Build on Recent Progress to Strengthen DOD’s Civilian Human Capital Strategic Plan. GAO-09-235. Washington, D.C.: February 10, 2009. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 31, 2007. Human Capital: Agencies Are Using Buyouts and Early Outs with Increasing Frequency to Help Reshape Their Workforces. GAO-06-324. Washington, D.C.: March 31, 2006. DOD Civilian Personnel: Comprehensive Strategic Workforce Plans Needed. GAO-04-753. Washington, D.C.: June 30, 2004. Human Capital: Major Human Capital Challenges at the Departments of Defense and State. GAO-01-565T. Washington, D.C.: March 29, 2001. High Risk Series: An Update. GAO-01-263. 
Washington, D.C.: January 1, 2001. Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000. Human Capital: A Self Assessment Checklist for Agency Leaders. GAO/GGD-99-179. Washington, D.C.: September 1999. Acquisition Management: Workforce Reductions and Contractor Oversight. GAO/NSIAD-98-127. Washington, D.C.: July 31, 1998. Workforce Reductions: Downsizing Strategies Used in Select Organizations. GAO/GGD-95-54. Washington, D.C.: March 13, 1995. Defense Civilian Downsizing: Challenges Remain Even With Availability of Financial Separation Incentives. GAO/NSIAD-93-194. Washington, D.C.: May 14, 1993. Defense Force Management: Challenges Facing DOD As It Continues to Downsize Its Civilian Workforce. GAO/NSIAD-93-123. Washington, D.C.: February 12, 1993. Defense Force Management: Expanded Focus in Monitoring Civilian Force Reductions Is Needed. GAO/T-NSIAD-92-19. Washington, D.C.: March 18, 1992. Defense Force Management: DOD Management of Civilian Force Reductions. GAO/T-NSIAD-92-10. Washington, D.C.: February 20, 1992. Defense Force Management: Limited Baseline for Monitoring Civilian Force Reductions. GAO/NSIAD-92-42. Washington, D.C.: February 5, 1992. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD’s workforce of 783,000 civilians performs a wide variety of duties, including some traditionally performed by military personnel, such as mission-essential logistics support and maintenance, as well as providing federal civilian experts to Afghanistan and other theaters of operations. With the long-term fiscal challenges facing the nation, reductions to the civilian workforce may be considered to achieve cost savings. Human capital has remained a critical missing link in reforming and modernizing the federal government’s management practices, even as legislation and other actions since 1990 have been put in place to address major management areas. In the past, GAO has observed that the federal government has often acted as if people were costs to be cut rather than assets to be valued. DOD previously experienced significant downsizing in the 1990s where it did not focus on reshaping the civilian workforce in a strategic manner. Particularly as decision makers consider proposals to reduce the civilian workforce, it will be critical to DOD’s mission for the department to have the right number of federal civilian personnel with the right skills. This testimony discusses DOD’s 1) prior experience with civilian workforce downsizing, and 2) current strategic human capital planning efforts. This testimony is based on GAO reviews issued from March 1992 through June 2012. Prior Department of Defense (DOD) civilian workforce downsizing efforts in the 1990s were not oriented toward shaping the makeup of the workforce, resulting in significant imbalances in terms of shape, skills, and retirement eligibility. Specifically, in a series of reviews GAO found that DOD’s efforts in the 1990s to reduce its federal civilian workforce to levels below that of 1987 were hampered by incomplete data and lack of a clear strategy for avoiding skill imbalances and other adverse effects of downsizing. For instance, in 1992, GAO found that DOD used incomplete and inconsistent data related to workers, workload, and projected force reductions. Further, the approaches DOD has relied on to accomplish downsizing have sometimes had unintended consequences. The use of voluntary attrition, hiring freezes, and financial separation incentives allowed DOD to mitigate some adverse effects of civilian workforce reductions, but were less oriented toward shaping the makeup of the workforce than was the approach the department used to manage its military downsizing. For DOD, this was especially true of the civilian acquisition workforce. The department, which in 2011 obligated about $375 billion to acquire goods and services, was put on the verge of a retirement-driven talent drain in this workforce after 11 consecutive years of downsizing, according to a DOD report. Finally, GAO has found that the use of strategies such as financial separation incentives makes it difficult to document or estimate the actual cost savings of government downsizing efforts, especially in cases where the work previously performed by the eliminated personnel continues to be required. For example, if the work continues to be required, it may need to be contracted out to private companies and contract costs should be considered in determining whether net savings resulted from workforce reductions. DOD has taken positive steps towards identifying its critical skills, but there are opportunities to enhance the department’s current strategic workforce plans. 
GAO and the Office of Personnel Management have identified leading principles to incorporate into effective workforce plans, such as the need to identify and address critical skills and competencies. DOD has been required to have a civilian strategic workforce plan since 2006. Currently, DOD is required to develop a strategic workforce plan that includes, among other things, an assessment of the skills, competencies and gaps, projected workforce trends, and needed funding of its civilian workforce. GAO has found improvements in DOD’s efforts to strategically manage its civilian workforce. For instance, GAO reported in 2010 that DOD’s 2009 strategic workforce plan assessed critical skills and identified 22 mission-critical occupations, such as acquisition and financial management. However, DOD’s plan only discussed competency gap analyses for 3 of its 22 mission-critical occupations, which GAO has reported is key to enabling an agency to develop specific strategies to address workforce needs. For example, GAO found that DOD had not conducted a competency gap analysis for its financial management workforce, and GAO remains concerned that DOD lacks critical information it needs to effectively plan for its workforce requirements. GAO is currently reviewing DOD’s latest strategic workforce plan, which was released in March 2012. The results of this review are expected to be released in September 2012.
You are an expert at summarizing long articles. Proceed to summarize the following text: There is no single definition for financial literacy, but it has previously been described as the ability to make informed judgments and to take effective actions regarding current and future use and management of money. Financial literacy encompasses both financial education and consumers’ behavior as it relates to their ability to make informed judgments. Financial education refers to the processes whereby individuals improve their knowledge and understanding of financial products, services, and concepts. However, being financially literate refers to more than simply being knowledgeable about financial matters—it also entails utilizing that knowledge to make informed decisions, avoid pitfalls, and take other actions to improve one’s present and long-term financial well-being. Evidence indicates that many U.S. consumers could benefit from improved financial literacy efforts. In a 2010 survey of U.S. consumers prepared for the National Foundation for Credit Counseling, a majority of consumers reported they did not have a budget and about one-third were not saving for retirement. In a 2009 survey of U.S. consumers by the FINRA Investor Education Foundation, a majority believed themselves to be good at dealing with day-to-day financial matters, but the survey also revealed that many had difficulty with basic financial concepts. Further, about 25 percent of U.S. households either have no checking or savings account or rely on alternative financial products or services that are likely to have less favorable terms or conditions, such as nonbank money orders, nonbank check-cashing services, or payday loans. As a result of this situation, many Americans may not be planning their finances in the most effective manner for maintaining or improving their financial well-being. In addition, individuals today have more responsibility for their own retirement savings because traditional defined-benefit pension plans have declined substantially over the past two decades. As a result, financial skills are increasingly important for those individuals in or planning for retirement to help ensure that retirees can enjoy a comfortable standard of living. Federal financial literacy programs and resources are spread widely among many different federal agencies, raising concerns about fragmentation and potential duplication of effort. As we noted in our recent report on overlap, duplication, and fragmentation, in 2009, more than 20 different agencies had more than 50 financial literacy initiatives under way that covered a number of topics, used a variety of delivery mechanisms, and targeted a range of audiences. This distribution of federal financial literacy efforts across multiple agencies can have certain advantages. For example, different agencies can focus their efforts on particular subject matter or target specific audiences for which they have expertise. However, this fragmentation also increases the risk of inefficiency and redundancy and highlights the need for strong coordination of these efforts. Further, fragmentation of programs across many federal agencies can make it difficult to develop a coherent overall approach for meeting needs, identifying gaps, and rationally allocating overall resources. Because of the fragmentation of federal financial literacy efforts, coordination among agencies is essential to avoid inefficient, uncoordinated, or redundant use of resources. 
Identifying potential inefficiencies can be challenging because federal financial literacy efforts have numerous different funding streams and there are little good data on the amount of federal funds devoted to financial literacy. Financial literacy efforts are not necessarily organized as separate budget line items or cost centers within federal agencies and there is no estimate of overall federal spending for financial literacy and education, according to the Department of the Treasury. In part to encourage a more coordinated response to financial literacy, in 2003 Congress created the multiagency Financial Literacy and Education Commission and mandated that the Commission develop a national strategy. We conducted a review of the Commission in 2006 and made recommendations related to enhancing public-private partnerships, conducting independent reviews of duplication and effectiveness, and conducting usability testing of the Commission’s MyMoney.gov Web site. We subsequently reported that the Commission had made progress in cultivating sustainable partnerships with states, localities, nonprofits, and private entities, and had acted on our recommendation to measure customer satisfaction with its Web site. The Commission and the Department of the Treasury also initiated two independent reviews, as we had recommended, addressing overlap in federal activities and the availability and impact of federal financial literacy materials. As we have noted in the past, the Commission faces significant challenges in its role as a centralized focal point: it is composed of many agencies, but it has no independent budget and no legal authority to compel member agencies to take any action. Our 2006 review also found that while the Commission’s initial national strategy was a useful first step in focusing attention on financial literacy, it was largely descriptive rather than strategic. In particular, the national strategy was comprehensive to the extent of discussing major issues and challenges in improving financial literacy and describing initiatives in government, nonprofit, and private sectors. However, it did not include a plan for implementation and only partially addressed some of the characteristics we had previously identified as desirable for any effective national strategy. For example, although it provided a clear purpose, scope, and methodology, it did not go far enough to provide a detailed discussion of problems and risks; establish specific goals, performance measures, and milestones; discuss the resources that would be needed to implement the strategy; or discuss, assign, or recommend roles and responsibilities for achieving its mission. However, in December 2010, the Commission released a new national strategy that identifies five action areas—policy, education, practice, research, and coordination––and clearly lays out a series of goals and related objectives intended to help guide financial literacy efforts over the next several years. To supplement this national strategy, the Commission has said it will be releasing an implementation plan for the strategy by the end of this fiscal year. While the new national strategy clearly identifies action areas and related goals and objectives, it still needs to incorporate specific provisions for performance measures, resource needs, and roles and responsibilities, which we believe to be essential for an effective strategy. 
The new strategy will benefit if the forthcoming implementation plan incorporates these elements, as well as addresses the fragmentation of federal financial literacy efforts. More recently, the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) requires the establishment of an Office of Financial Education within the new Bureau of Consumer Financial Protection, further underscoring the need for coordination among federal agencies on this topic. The Dodd-Frank Act charges the new office within the bureau with developing and implementing a strategy to improve financial literacy through activities including opportunities for consumers to access, among other things, financial counseling; information to assist consumers with understanding credit products, histories, and scores; information about saving and borrowing tools; and assistance in developing long-term savings strategies. This new office presents an opportunity to further promote awareness, coordinate efforts, and fill gaps related to financial literacy. At the same time, the duties this office is charged with fulfilling are in some ways similar to those of the separate Office of Financial Education and Financial Access within the Department of the Treasury. As noted above, the Dodd-Frank Act charges the Bureau of Consumer Financial Protection with developing and implementing a strategy on improving the financial literacy of consumers—one that is consistent with, but separate from, the strategy required of the Commission. Thus, these entities will need to coordinate their roles and activities closely to avoid unnecessary overlap and make the most productive use of resources. Coordination and partnership among federal, state, nonprofit, and private sectors are also essential in addressing financial literacy, and there have been positive developments in these areas in recent years. For example, a recent partnership between the National Credit Union Administration, the Department of Education, and the Federal Deposit Insurance Corporation aims to improve the financial education of millions of students. These three agencies are coordinating to facilitate partnerships among schools, financial institutions, federal grantees, and other stakeholders to provide effective financial education. Additionally, the National Financial Education Network, the President’s Advisory Council on Financial Capability, and the Community Financial Access Pilot all represent examples of progress in fostering partnerships among participants in financial education. For example, our review in 2009 found that the establishment of the National Financial Education Network was a useful initial action to facilitate and advance financial education at the state and local levels. Similarly, the President’s Advisory Council on Financial Capability facilitates strategic alliances among federal, private, and nonprofit enterprises. Although numerous financial literacy initiatives are conducted by federal, state, local, nonprofit, and private entities throughout the country, there is little definitive evidence available on what specific programs and approaches are most effective. As part of ongoing work we are performing in response to a mandated study in the Dodd-Frank Act, we are conducting a review of studies that have evaluated the effectiveness of financial literacy efforts. 
More than 100 articles, papers, and studies have been published on the general topic of financial literacy since 2000, but our preliminary findings have identified only about 20 papers that constitute empirically based evaluations on the effectiveness of specific financial education programs. In addition, only about 10 of these studies actually measured the impact of a program on participants’ behavior rather than simply identifying a change in the consumer’s knowledge, understanding, or intent. This distinction is important because a change in behavior is typically the ultimate goal of any financial literacy program, and changes in behavior do not necessarily follow from changes in knowledge or understanding. We are currently in the process of analyzing the results of these studies and look forward to reporting more fully on our findings this summer. But in general, the consensus among a wide variety of stakeholders in the field of financial literacy is that relatively little is known about what financial literacy approaches are most effective in meaningfully changing consumers’ financial behavior. The limited number of rigorous, outcome-based evaluations of financial literacy programs is likely the result of several factors. Because the field of financial literacy is relatively new, many programs have not been in place long enough to allow for a long-term study of their effectiveness; many of the key federal financial literacy initiatives were created only within the past 10 years. In addition, experts in financial literacy and program evaluation have cited many significant challenges to conducting rigorous and definitive evaluations of financial literacy programs. For example, measuring a change in participant behavior is much more difficult than measuring a gain in knowledge, which can often be captured through a simple post-course survey. Similarly, financial literacy programs often seek to effect change over the long term, which means that effective evaluation can require ongoing follow up with participants—a complex and expensive process. In addition, discerning the impact of the financial literacy program as distinct from other influences, such as changes in the overall economy, can often be difficult. Nonetheless, given that federal agencies have limited resources, focusing federal financial literacy resources on initiatives that work is important. Some federal financial literacy programs, such as the Federal Deposit Insurance Corporation’s Money Smart, have included a strong evaluation component, while others have not. The Financial Literacy and Education Commission and many federal agencies have recognized the need for a greater understanding of which programs are most effective in improving financial literacy. The Commission’s original national strategy in 2006 noted, for example, that more research and program evaluation are needed so that organizations are able to validate or improve their efforts and measure the impact of their work. In response, in October 2008, the Department of the Treasury and the Department of Agriculture convened, on behalf of the Commission, the National Research Symposium on Financial Literacy and Education, which discussed academic research priorities related to financial literacy. 
Moreover, we are pleased to see that the Commission’s new 2011 national strategy sets as one of its four goals to “identify, enhance, and share effective practices.” The new strategy sets objectives for reaching this goal that include, among other things, (1) encouraging research on financial literacy strategies that affect consumer behavior, (2) establishing a clearinghouse for evidence-based research and evaluation studies, (3) developing and disseminating tools and strategies to encourage and support program evaluation, and (4) forming a network for sharing research and best practices. These measures are positive steps in helping ensure that, in the long term, scarce resources are focused efficiently and effectively. At the same time, as we have noted in the past, an effective national strategy goes beyond simply setting objectives; it also must describe the specific actions needed to accomplish goals, identify the resources required, and discuss appropriate roles and responsibilities for the players involved. We encourage the Commission and its participating agencies to incorporate these elements into the national strategy’s implementation plan, which is slated to be released later this year. In addition, it is important to note that financial education is not the only approach—or necessarily always the best approach—for improving consumers’ financial behavior. Alternative strategies or mechanisms, sometimes in conjunction with financial education, have also been successful in improving financial behavior. In particular, insights from behavioral economics that recognize the realities of human psychology have been used effectively to design strategies to assist consumers in reaching financial goals without compromising their ability to choose among different products or approaches. For example, one strategy has been to use what are referred to as commitment mechanisms, such as having individuals commit well in advance to allocating a portion of their future salary increases toward a savings plan. Another strategy for encouraging consumers to increase their savings has been to use incentives with tangible benefits, such as matching funds. In addition, changing the default option for enrollment in retirement plans—that is, automatically enrolling new employees while giving them the opportunity to opt out—has led to significant increases in plan participation rates among some organizations. The most effective approach to improving consumers’ financial decision making and behavior may be to use a variety of these types of strategies in conjunction with financial education. As I noted during my confirmation hearing, financial literacy is an area of priority for me as Comptroller General, and during my tenure, I hope to draw additional attention to this important issue. Improving financial literacy involves many stakeholders and must be a partnership between the federal government, state and local governments, the private and nonprofit sectors, and academia. My hope is that GAO can play a role in facilitating knowledge transfer among these different entities, as well as working with other organizations in the accountability community, such as the American Institute of Certified Public Accountants. Almost 7 years ago we hosted a forum on the role of the federal government in improving financial literacy. 
At that forum, public and private sector experts highlighted, among other things, the need for the federal government to serve as a leader in this area, but they also stressed the importance of public-private partnerships. We will host another forum on financial literacy later this year to bring together experts in financial literacy and education from federal and state agencies, nonprofit organizations representing consumers, educational and academic institutions, and private sector employers. This forum will address the gaps that exist in financial literacy efforts, challenges that federal agencies may face in addressing these gaps, and opportunities for improving the federal government’s approach to financial literacy. In addition, as part of our audit and oversight function, we will continue to conduct evaluations of the efficiency and effectiveness of federal financial literacy efforts. Financial literacy plays a role in a wide variety of areas that GAO regularly reviews—including student loans, retirement savings, banking and investment products, and homebuyer assistance programs, to name a few. For example, in work we have done on retirement savings, we have made recommendations intended to facilitate consumers’ understanding of retirement plans, disclosures, and any associated fees. Additionally, our reviews of financial products will continue to focus on consumer understanding of these products, as well as strategies for encouraging consumers to make sound decisions about them. Moreover, we will continue our body of work evaluating various consumer protections, which in conjunction with financial education are a key component in helping consumers avoid abusive or misleading financial products, services, or practices. Financial education has its limitations, of course, but it does represent an important tool that can benefit both individuals and our economy as a whole. On an individual level, better money management and financial decisions can play an important role in improving families’ standard of living and helping them achieve long-term financial goals. While personal financial decisions are made by individuals and their families, the federal government can play a role in helping ensure that its citizens have easy access to financial information and the tools they need to make sound decisions. Moreover, improving consumer financial literacy can be beneficial to our national economy as a whole. Financial markets function best when consumers understand how financial service providers and products work and know how to choose among them. Our income tax system requires citizens to have an adequate understanding of both the tax system itself and financial matters in general. Educated citizens are also important to well-functioning retirement systems—for example, workers should understand the benefit of saving for their retirement to supplement any benefits received from Social Security. Finally, our nation faces a challenging long-term fiscal outlook, and it is important that our citizens understand and are attentive to the fact that the federal government faces hard choices that will affect their own, and our nation’s, economic future. Chairman Akaka, Ranking Member Johnson, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Alicia Puente Cackley at (202) 512-8678 or at cackleya@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Alicia Puente Cackley (Director), Jason Bromberg (Assistant Director), Tania Calhoun, Beth Ann Faraguna, Jennifer Schwartz, and Andrew Stavisky.

Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Consumer Finance: Factors Affecting the Financial Literacy of Individuals with Limited English Proficiency. GAO-10-518. Washington, D.C.: May 21, 2010.
Financial Literacy and Education Commission: Progress Made in Fostering Partnerships, but National Strategy Remains Largely Descriptive Rather Than Strategic. GAO-09-638T. Washington, D.C.: April 29, 2009.
Financial Literacy and Education Commission: Further Progress Needed to Ensure an Effective National Strategy. GAO-07-100. Washington, D.C.: December 4, 2006.
Highlights of a GAO Forum: The Federal Government’s Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Financial literacy plays an important role in helping ensure the financial health and stability of individuals, families, and our broader national economy. Economic changes in recent years have highlighted the need to empower Americans to make informed financial decisions, yet evidence indicates that many U.S. consumers could benefit from a better understanding of financial matters. For example, recent surveys indicate that many consumers have difficulty with basic financial concepts and do not budget. This testimony discusses (1) the state of the federal government's approach to financial literacy, (2) observations on overall strategies for addressing financial literacy, and (3) the role GAO can play in addressing and raising awareness on this issue. This testimony is based largely on prior and ongoing work, for which GAO conducted a literature review; interviewed representatives of organizations that address financial literacy within the federal, state, private, nonprofit, and academic sectors; and reviewed materials of the Financial Literacy and Education Commission. While this statement includes no new recommendations, in the past GAO has made a number of recommendations aimed at improving financial literacy efforts.

Federal financial literacy efforts are spread among more than 20 different agencies and more than 50 different programs and initiatives, raising concerns about fragmentation and potential duplication of effort. The multiagency Financial Literacy and Education Commission, which coordinates federal efforts, has acted on recommendations GAO made in 2006 related to public-private partnerships, studies of duplication and effectiveness, and the Commission's MyMoney.gov Web site. While GAO's 2006 review of the Commission's initial national strategy for financial literacy found that it was a useful first step in focusing attention on financial literacy, it was largely descriptive rather than strategic. The Commission recently released a new strategy for 2011, which laid out clear goals and objectives, but it still needs to incorporate specific provisions for performance measures, resource needs, and roles and responsibilities, all of which GAO believes to be essential for an effective strategy. However, the Commission will be issuing an implementation plan to accompany the strategy later this year, and the strategy will benefit if the plan incorporates these elements. The new Bureau of Consumer Financial Protection will also have a role in financial literacy, further underscoring the need for coordination among federal entities. Coordination and partnership among federal, state, nonprofit, and private sectors are also essential in addressing financial literacy, and there have been some positive developments in fostering such partnerships in recent years. There is little definitive evidence available on what specific programs and approaches are most effective in improving financial literacy, and relatively few rigorous studies have measured the impact of specific financial literacy programs on consumer behavior. Given that federal agencies have limited resources for financial literacy, it is important that these resources be focused on initiatives that are effective. To this end, the Commission's new national strategy on financial education sets as one of its four goals identifying, enhancing, and sharing effective practices. However, financial education is not the only approach for improving consumers' financial behavior.
Several other mechanisms and strategies have also been shown to be effective, including financial incentives or changes in the default option, such as automatic enrollment in employer retirement plans. The most effective approach may involve a mix of financial education and these other strategies. GAO will continue to play a role in supporting and facilitating knowledge transfer on financial literacy. GAO will host a forum on financial literacy later this year to bring together experts from federal and state agencies and nonprofit, educational, and private sector organizations. The forum will address gaps, challenges, and opportunities related to federal financial literacy efforts. In addition, as part of GAO's audit and oversight function, GAO will continue to evaluate the effectiveness of federal financial literacy programs, as well as identify opportunities to improve the efficient and cost-effective use of these resources.
National security challenges covering a broad array of areas, ranging from preparedness for an influenza pandemic to Iraqi governance and reconstruction, have necessitated using all elements of national power—including diplomatic, military, intelligence, development assistance, economic, and law enforcement support. These elements fall under the authority of numerous U.S. government agencies, requiring overarching strategies and plans to enhance agencies’ abilities to collaborate with each other, as well as with foreign, state, and local governments and nongovernmental partners. Without overarching strategies, agencies often operate independently to achieve their own objectives, increasing the risk of duplication or gaps in national security efforts that may result in wasting scarce resources and limiting program effectiveness. Strategies can enhance interagency collaboration by helping agencies develop mutually reinforcing plans and determine activities, resources, processes, and performance measures for implementing those strategies. Strategies can be focused on broad national security objectives, like the National Security Strategy issued by the President, or on a specific program or activity, like the U.S. strategy for Iraq. Strategies have been developed by the Homeland Security Council, such as the National Strategy for Homeland Security; jointly with multiple agencies, such as the National Strategy for Maritime Security, which was developed jointly by the Secretaries of Defense and Homeland Security; or by an agency that is leading an interagency effort, such as the National Intelligence Strategy, which was developed under the leadership of the Office of the Director of National Intelligence. Congress recognized the importance of overarching strategies to guide interagency efforts, as shown by the requirement in the fiscal year 2009 National Defense Authorization Act for the President to submit to the appropriate committees of Congress a report on a comprehensive interagency strategy for public diplomacy and strategic communication of the federal government, including benchmarks and a timetable for achieving such benchmarks, by December 31, 2009. Congress and the administration will need to examine the ability of the executive branch to develop and implement overarching strategies to enhance collaboration for national security efforts. Although some U.S. government agencies have developed or updated overarching strategies since September 11, 2001, the lack of information on roles and responsibilities and lack of coordination mechanisms in these strategies can hinder interagency collaboration. Our prior work, as well as that by national security experts, has found that strategic direction is required as the basis for collaboration toward national security goals. Overarching strategies can help agencies overcome differences in missions, cultures, and ways of doing business by providing strategic direction for activities and articulating a common outcome to collaboratively work toward. As a result, agencies can better align their activities, processes, and resources to collaborate effectively to accomplish a commonly defined outcome. Without having the strategic direction that overarching strategies can provide, agencies may develop their own individual efforts that may not be well-coordinated with those of interagency partners, thereby limiting progress in meeting national security goals.
Defining organizational roles and responsibilities and mechanisms for coordination—one of the desirable characteristics for strategies that we have identified in our prior work—can help agencies clarify who will lead or participate in which activities, organize their joint activities and individual efforts, facilitate decision making, and address how conflicts would be resolved. The lack of overarching strategies that address roles and responsibilities and coordination mechanisms—among other desirable characteristics that we have identified in our prior work—can hinder interagency collaboration for national security programs at home and abroad. We have testified and reported that in some cases U.S. efforts have been hindered by multiple agencies pursuing individual efforts without overarching strategies detailing roles and responsibilities of organizations involved or coordination mechanisms to integrate their efforts. For example, we have found the following: Since 2005, multiple U.S. agencies—including the State Department, U.S. Agency for International Development (USAID), and Department of Defense (DOD)—had led separate efforts to improve the capacity of Iraq’s ministries to govern without overarching direction from a lead entity to integrate their efforts. As we have testified and reported, the lack of an overarching strategy contributed to U.S. efforts not meeting their goal of key Iraqi ministries having the capacity to effectively govern and assume increasing responsibility for operating, maintaining, and further investing in reconstruction projects. In July 2008 we reported that agencies involved in the Trans-Sahara Counterterrorism Partnership had not developed a comprehensive, integrated strategy for the program’s implementation. The State Department, USAID, and DOD had developed separate plans related to their respective program activities that reflect some interagency collaboration, for example, in assessing country needs for development assistance. However, these plans did not incorporate all of the desirable characteristics for strategies that we have previously identified. For example, we found that roles and responsibilities—particularly between the State Department and DOD—were unclear with regard to authority over DOD personnel temporarily assigned to conduct certain program activities in African countries, and DOD officials said that disagreements affected implementation of DOD’s activities in Niger. DOD suspended most of its program activities in Niger in 2007 after the ambassador limited the number of DOD personnel allowed to enter the country. State Department officials said these limits were set in part because of embassy concerns about the country’s fragile political environment as well as limited space and staff available to support DOD personnel deployed to partner countries. At the time of our May 2007 review, we found that the State Department office responsible for coordinating law enforcement agencies’ role in combating terrorism had not developed or implemented an overarching plan to use the combined capabilities of U.S. law enforcement agencies to assist foreign nations to identify, disrupt, and prosecute terrorists. Additionally, the national strategies related to this effort lacked clearly defined roles and responsibilities. In one country we visited for that review, the lack of clear roles and responsibilities led two law enforcement agencies, which were unknowingly working with different foreign law enforcement agencies, to move in on the same subject. 
According to foreign and U.S. law enforcement officials, such actions may have compromised other investigations. We also reported that because the national strategies related to this effort did not clarify specific roles, among other issues, law enforcement agencies were not being fully used abroad to protect U.S. citizens and interests from future terrorist attacks. In our work on the federal government’s pandemic influenza preparedness efforts, we noted that the Departments of Homeland Security and Health and Human Services share most federal leadership roles in implementing the pandemic influenza strategy and supporting plans; however, we reported that it was not clear how this would work in practice because their roles were unclear. The National Strategy for Pandemic Influenza and its supporting implementation plan described the Secretary of Health and Human Services as being responsible for leading the medical response in a pandemic, while the Secretary of Homeland Security would be responsible for overall domestic incident management and federal coordination. However, since a pandemic extends well beyond health and medical boundaries, to include sustaining critical infrastructure, private-sector activities, the movement of goods and services across the nation and the globe, and economic and security considerations, it is not clear when, in a pandemic, the Secretary of Health and Human Services would be in the lead and when the Secretary of Homeland Security would lead. This lack of clarity on roles and responsibilities could lead to confusion or disagreements among implementing agencies that could hinder interagency collaboration, and a federal response could be slowed as agencies resolve their roles and responsibilities following the onset of a significant outbreak. In March 2008, we reported that DOD and the intelligence community had not developed, agreed upon, or issued a national security space strategy. The United States depends on space assets to support national security activities, among other activities. Reports have long recognized the need for a strategy to guide the national security space community’s efforts in space and better integrate the activities of DOD and the intelligence community. Moreover, Congress found in the past that DOD and the intelligence community may not be well-positioned to coordinate certain intelligence activities and programs to ensure unity of effort and avoid duplication of efforts. We reported that a draft strategy had been developed in 2004, but according to the National Security Space Office Director, the National Security Council requested that the strategy not be issued until the revised National Space Policy directive was released in October 2006. However, once the policy was issued, changes in leadership at the National Reconnaissance Office and Air Force, as well as differences in opinion and organizational differences between the defense and intelligence communities, further delayed issuance of the strategy. Until a national security space strategy is issued, the defense and intelligence communities may continue to make independent decisions and use resources that are not necessarily based on national priorities, which could lead to gaps in some areas of space operations and redundancies in others.
We testified in March 2009 that as the current administration clarifies its new strategy for Iraq and develops a new comprehensive strategy for Afghanistan, these strategies should incorporate the desirable characteristics we have previously identified. This includes, among other issues, the roles and responsibilities of U.S. government agencies, and mechanisms and approaches for coordinating the efforts of the wide variety of U.S. agencies and international organizations—such as DOD, the Departments of State, the Treasury, and Justice, USAID, the United Nations, and the World Bank—that have significant roles in Iraq and Afghanistan. Clearly defining and coordinating the roles, responsibilities, commitments, and activities of all organizations involved would allow the U.S. government to prioritize the spending of limited resources and avoid unnecessary duplication. In recent years we have issued reports recommending that U.S. government agencies, including DOD, the State Department, and others, develop or revise strategies to incorporate desirable characteristics for strategies for a range of programs and activities including humanitarian and development efforts in Somalia, the Trans-Sahara Counterterrorism Partnership, foreign assistance strategy, law enforcement agencies’ role in assisting foreign nations in combating terrorism, and meeting U.S. national security goals in Pakistan’s Federally Administered Tribal Areas. In commenting on drafts of those reports, agencies generally concurred with our recommendations. Officials from one organization—the National Counterterrorism Center—noted that at the time of our May 2007 report on law enforcement agencies’ role in assisting foreign nations in combating terrorism, it had already begun to implement our recommendations. What steps are agencies taking to develop joint or mutually supportive strategies to guide interagency activities? What obstacles or impediments exist to developing comprehensive strategies or plans that integrate multiple agencies’ efforts? What specific national security challenges would be best served by overarching strategies? Who should be responsible for determining and overseeing these overarching strategies? Who should be responsible for developing the shared outcomes? How will agencies ensure effective implementation of overarching strategies? To what extent do strategies developed by federal agencies clearly identify priorities, milestones, and performance measures to gauge results? What steps are federal agencies taking to ensure coordination of planning and implementation of strategies with state and local governments when appropriate? U.S. government agencies, such as the Department of State, the U.S. Agency for International Development (USAID), and the Department of Defense (DOD), among others, spend billions of dollars annually on various diplomatic, development, and defense missions in support of national security. At a time when our nation faces increased fiscal constraints, it is increasingly important that agencies use their resources efficiently and effectively. Achieving meaningful results in many national security–related interagency efforts requires coordinated efforts among various actors across federal agencies; foreign, state, and local governments; nongovernment organizations; and the private sector. Given the number of agencies involved in U.S. government national security efforts, it is particularly important that there be mechanisms to coordinate across agencies. 
However, differences in agencies’ structures, processes, and resources can hinder successful collaboration in national security, and adequate coordination mechanisms to facilitate collaboration during national security planning and execution are not always in place. Congress and the administration will need to consider the extent to which agencies’ existing structures, processes, and funding sources facilitate interagency collaboration and whether changes could enhance collaboration. Based on our prior work, organizational differences—including differences in organizational structures, planning processes, and funding sources—can hinder interagency collaboration, resulting in a patchwork of activities that can waste scarce funds and limit the overall effectiveness of federal efforts. Differences in organizational structures can hinder collaboration for national security efforts. Agencies involved in national security activities define and organize their regions differently. For example, DOD’s regional combatant commands and the State Department’s regional bureaus are aligned differently, as shown in figure 1. In addition to regional bureaus, the State Department is organized to interact bilaterally through U.S. embassies located within other countries. As a result of these differing structures, our prior work and that of national security experts has found that agencies must coordinate with a large number of organizations in their regional planning efforts, potentially creating gaps and overlaps in policy implementation and leading to challenges in coordinating efforts among agencies. For example, as the recent report by the Project on National Security Reform noted, U.S. government engagement with the African Union requires two of the State Department’s regional bureaus, one combatant command (however, before October 2008, such efforts would have required coordination with three combatant commands), two USAID bureaus, and the U.S. ambassador to Ethiopia. Similarly, in reporting on the State Department’s efforts to develop a framework for planning and coordinating U.S. reconstruction and stabilization operations, the State Department noted that differences between the organizational structure of civilian agencies and that of the military could make coordination more difficult, as we reported in November 2007. Agencies also have different planning processes that can hinder interagency collaboration efforts. Specifically, in a May 2007 report on interagency planning for stability operations, we noted that some civilian agencies, like the State Department, focus their planning efforts on current operations. In contrast, DOD is required to plan for a wide range of current and potential future operations. Such differences are reflected in their planning processes: we reported that the State Department does not allocate its planning resources in the same way as DOD and, as such, does not have a large pool of planners to engage in DOD’s planning process. We found almost universal agreement among all organizations included in that review—including DOD, the State Department, and USAID—that there needed to be more interagency coordination in planning. However, we have previously reported that civilian agencies generally did not receive military plans for comment as they were developed, which restricted agencies’ ability to harmonize plans. 
Interagency collaboration during plan development is important to achieving a unified government approach in plans; however, State Department officials told us during our May 2007 review that DOD’s hierarchical approach, which required Secretary of Defense approval to present aspects of plans to the National Security Council for interagency coordination, limited interagency participation in the combatant commands’ plan development and had been a significant obstacle to achieving a unified governmentwide approach in those plans. DOD has taken some steps to involve other agencies in its strategic planning process through U.S. Africa Command. As we reported in February 2009, in developing its theater campaign plan, U.S. Africa Command was one of the first combatant commands to employ DOD’s new planning approach, which called for collaboration among federal agencies to ensure activities are integrated and synchronized in pursuit of common goals. U.S. Africa Command officials met with representatives from 16 agencies at the beginning of the planning process to gain interagency input on its plan. While a nascent process, involving other U.S. government agencies at the beginning of the planning process may result in a better informed plan for DOD’s activities in Africa. Moreover, agencies have different funding sources for national security activities. Funding is budgeted for and appropriated by agency, rather than by functional area (such as national security or foreign aid). The Congressional Research Service reported in December 2008 that because of this agency focus in budgeting and appropriations, there is no forum to debate which resources or combination of resources to apply to efforts, like national security, that involve multiple agencies and, therefore, the President’s budget request and congressional appropriations tend to reflect individual agency concerns. As we have previously testified, the agency-by-agency focus of the budget does not provide for the needed integrated perspective of government performance envisioned by the Government Performance and Results Act. Moreover, we reported in March 2008 that different funding arrangements for defense and national intelligence activities may complicate DOD’s efforts to incorporate intelligence, surveillance, and reconnaissance activities. While DOD develops the defense intelligence budget, some DOD organizations also receive funding through the national intelligence budget, which is developed by the Office of the Director of National Intelligence, to provide support for national intelligence efforts. According to a DOD official, disagreement about equitable funding from each budget led to the initial operating capability date being pushed back 1 year for a new space radar system. In an April 2008 Comptroller General forum on enhancing partnerships for countering transnational terrorism, some participants suggested that funding overall objectives—such as counterterrorism— rather than funding each agency would provide flexibility to allocate funding where it was needed and would have the most effect. Similarly, as part of the national security reform debate, some have recommended instituting budgeting and appropriations processes—with corresponding changes to oversight processes—based on functional areas to better ensure that the U.S. national security strategy aligns with resources available to implement it. 
Agencies receive different levels of appropriations that are used to fund all aspects of an agency’s operations, to include national security activities. As shown in figure 2, DOD receives significantly more funding than other key agencies involved in national security activities, such as the Departments of State and Homeland Security. As shown in figure 3, DOD also has a significantly larger workforce than other key agencies involved in national security activities. As of the end of fiscal year 2008, DOD reported having 1.4 million active duty military personnel and about 755,000 government employees, while the State Department and Department of Homeland Security reported having almost 31,000 government employees and almost 219,000 government employees and military personnel, respectively. Because of its relatively large size—in terms of appropriations and personnel—DOD has begun to perform more national security–related activities than in the past. For example, as the Congressional Research Service reported in January 2009, the proportion of DOD foreign assistance funded through the State Department has increased from 7 percent of bilateral official development assistance in calendar year 2001 to an estimated 20 percent in 2006, largely in response to stabilization and reconstruction activities in Iraq and Afghanistan. The Secretaries of Defense and State have testified and stated that successful collaboration among civilian and military agencies requires confronting the disparity in resources, including providing greater capacity in the State Department and USAID to allow for effective civilian response and civilian-military partnership. In testimonies in April 2008 and May 2009, the former and current Secretaries of State, respectively, explained that the State Department was taking steps to become more capable and ready to handle reconstruction and development tasks in coordination with DOD. Specifically, former Secretary of State Rice explained that the State Department had redeployed diplomats from European and Washington posts to countries of greater need; sought to increase the size of the diplomatic corps in the State Department and USAID; and was training diplomats for nontraditional roles, especially stabilization and reconstruction activities. Additionally, the current Secretary of State noted in testimonies before two congressional committees that the State Department is working with DOD and will be taking back the resources to do the work that the agency should be leading, but did not elaborate on which activities this included. Enclosure III of this report further discusses the human capital issues related to interagency collaboration for national security. Some agencies have established mechanisms to facilitate interagency collaboration—a critical step in achieving integrated approaches to national security—but challenges remain in collaboration efforts. We have found in our prior work on enhancing interagency collaboration that agencies can enhance and sustain their collaborative efforts by establishing compatible policies, procedures, and other means to operate across agency boundaries, among other practices. Some agencies have established and formalized coordination mechanisms to facilitate interagency collaboration. For example: At the time of our review, DOD’s U.S. Africa Command had undertaken efforts to integrate personnel from other U.S. 
government agencies into its command structure because the command is primarily focused on strengthening security cooperation with African nations and creating opportunities to bolster the capabilities of African partners, which are activities that traditionally require coordination with other agencies. DOD’s other combatant commands have also established similar coordination mechanisms. National security experts have noted that U.S. Southern Command has been relatively more successful than some other commands in its collaboration efforts and attributed this success, in part, to the command’s long history of interagency operations related to domestic disaster response and counterdrug missions. As we reported in March 2009, an intelligence component of the Drug Enforcement Administration rejoined the intelligence community in 2006 to provide a link to coordinate terrorism and narcotics intelligence with all intelligence community partners. According to a Department of Justice Office of the Inspector General report, intelligence community partners found the Drug Enforcement Administration’s intelligence valuable in their efforts to examine ongoing threats. DOD, State Department, and USAID officials have established processes to coordinate projects related to humanitarian relief and reconstruction funded through the Commander’s Emergency Response Program and Section 1206 program. We reported in June 2008 that Multinational Corps–Iraq guidance required DOD commanders to coordinate Commander’s Emergency Response Program projects with various elements, including local government agencies, civil affairs elements, and Provincial Reconstruction Teams. DOD, State Department, and USAID officials we interviewed for that review said that the presence of the Provincial Reconstruction Teams, as well as embedded teams, had improved coordination among programs funded by these agencies and the officials were generally satisfied with the coordination that was taking place. Similarly, Section 1206 of the National Defense Authorization Act of 2006 gave DOD the authority to spend a portion of its own appropriations to train and equip foreign militaries to undertake counterterrorism and stability operations. The State Department and DOD must jointly formulate all projects and coordinate their implementation and, at the time of our review, the agencies had developed a coordinated process for jointly reviewing and selecting project proposals. We found that coordination in formulating proposals did not occur consistently between DOD’s combatant commands and the State Department’s embassy teams for those projects formulated in fiscal year 2006; however, officials reported better coordination in the formulation of fiscal year 2007 proposals. While some agencies have established mechanisms to enhance collaboration, challenges remain in facilitating interagency collaboration. We have found that some mechanisms are not formalized, may not be fully utilized, or have difficulty gaining stakeholder support, thus limiting their effectiveness in enhancing interagency collaboration. Some mechanisms may be informal. In the absence of formal coordination mechanisms, some agencies have established informal coordination mechanisms; however, by using informal coordination mechanisms, agencies could end up relying on the personalities of officials involved to ensure effective collaboration. At DOD’s U.S. 
Northern Command, for example, we found that successful collaboration on the command’s homeland defense plan between the command and an interagency planning team was largely based on the dedicated personalities involved and the informal meetings and teleconferences they instituted. In that report we concluded that without institutionalizing the interagency planning structure, efforts to coordinate with agency partners may not continue when personnel move to their next assignments. Some mechanisms may not be fully utilized. While some agencies have put in place mechanisms to facilitate coordination on national security activities, some mechanisms are not always fully utilized. We reported in October 2007 that the industry-specific coordinating councils that the Department of Homeland Security established to be the primary mechanism for coordinating government and private-sector efforts could be better utilized for collaboration on pandemic influenza preparedness. Specifically, we noted that these coordinating councils were primarily used to coordinate in a single area, sharing information across sectors and government, rather than to address a range of other challenges, such as unclear roles and responsibilities between federal and state governments in areas such as state border closures and vaccine distribution. In February 2009, Department of Homeland Security officials informed us that the department was working on initiatives to address potential coordination challenges in response to our recommendation. Some mechanisms have limited support from key stakeholders. While some agencies have implemented mechanisms to facilitate coordination, limited support from stakeholders can hinder collaboration efforts. Our prior work has shown that agencies’ concerns about maintaining jurisdiction over their missions and associated resources can be a significant barrier to interagency collaboration. For example, DOD initially faced resistance from key stakeholders in the creation of the U.S. Africa Command, in part due to concerns expressed by State Department officials that U.S. Africa Command would become the lead for all U.S. government activities in Africa, even though embassies lead decision making on U.S. government noncombat activities conducted in a country. In recent years we have issued reports recommending that the Secretaries of Defense, State, and Homeland Security and the Attorney General take a variety of actions to address creating collaborative organizations, including taking actions to provide implementation guidance to facilitate interagency participation and develop clear guidance and procedures for interagency efforts, develop an approach to overcome differences in planning processes, create coordinating mechanisms, and clarify roles and responsibilities. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our April 2008 report on U.S. Northern Command’s plans, we recommended that clear guidance be developed for interagency planning efforts and DOD stated that it had begun to incorporate such direction in its major planning documents and would continue to expand on this guidance in the future. What processes, including internal agency processes, are hindering further interagency collaboration and what changes are needed to address these challenges? 
What are the benefits of and barriers to instituting a function-based budgeting and appropriations process? What resources or authorities are needed to further support integrated or mutually supportive activities across agencies? What steps are being taken to create or utilize structures or mechanisms to develop integrated or mutually supportive plans and activities? What is the appropriate role for key agencies in various national security– related activities? What strategies might Congress and agencies use to address challenges presented by the various funding sources? As the threats to national security have evolved over the past decades, so have the skills needed to prepare for and respond to those threats. To effectively and efficiently address today’s national security challenges, federal agencies need a qualified, well-trained workforce with the skills and experience that can enable them to integrate the diverse capabilities and resources of the U.S. government. However, federal agencies do not always have the right people with the right skills in the right jobs at the right time to meet the challenges they face, to include having a workforce that is able to deploy quickly to address crises. Moreover, personnel often lack knowledge of the processes and cultures of the agencies with which they must collaborate. To help federal agencies develop a workforce that can enhance collaboration in national security, Congress and the administration may need to consider legislative and administrative changes needed to build personnel capacities, enhance personnel systems to promote interagency efforts, expand training opportunities, and improve strategic workforce planning, thereby enabling a greater ability to address national security in a more integrated manner. Collaborative approaches to national security require a well-trained workforce with the skills and experience to integrate the government’s diverse capabilities and resources, but some federal government agencies may lack the personnel capacity to fully participate in interagency activities. When we added strategic human capital management to our governmentwide high-risk list in 2001, we explained that “human capital shortfalls are eroding the ability of many agencies—and threatening the ability of others—to effectively, efficiently, and economically perform their missions.” We also have reported that personnel shortages can threaten an organization’s ability to perform missions efficiently and effectively. Moreover, some agencies also lack the capacity to deploy personnel rapidly when the nation’s leaders direct a U.S. response to crises. As a result, the initial response to a crisis could rely heavily on the deployment of military forces and require military forces to conduct missions beyond their core areas of expertise. Some federal government agencies have taken steps to improve their capacity to participate in interagency activities. For example, in response to a presidential directive and a State Department recommendation to provide a centralized, permanent civilian capacity for planning and coordinating the civilian response to stabilization and reconstruction operations, the State Department has begun establishing three civilian response entities to act as first responders to international crises. Despite these efforts, we reported in November 2007 that the State Department has experienced difficulties in establishing permanent positions and recruiting for one of these entities, the Active Response Corps. 
Similarly, we also reported that other agencies that have begun to develop a stabilization and reconstruction response capacity, such as the U.S. Agency for International Development (USAID) and the Department of the Treasury, have limited numbers of staff available for rapid responses to overseas crises. Moreover, some federal government agencies are experiencing personnel shortages that have impeded their ability to participate in interagency activities. For example, in February 2009 we reported that the Department of Defense’s (DOD) U.S. Africa Command was originally intended to have significant interagency representation, but that of the 52 interagency positions DOD approved for the command, as of October 2008 only 13 of these positions had been filled with experts from the State, Treasury, and Agriculture Departments; USAID; and other federal government agencies. Embedding personnel from other federal agencies was considered essential by DOD because these personnel would bring knowledge of their home agencies into the command, which was expected to improve the planning and execution of the command’s programs and activities and stimulate collaboration among U.S. government agencies. However, U.S. Africa Command has had limited interagency participation due in part to personnel shortages in agencies like the State Department, which initially could only staff 2 of the 15 positions requested by DOD because the State Department faced a 25 percent shortfall in mid-level personnel. In addition, in November 2007 we reported that the limited number of personnel that other federal government agencies could offer hindered efforts to include civilian agencies into DOD planning and exercises. Furthermore, some interagency coordination efforts have been impeded because agencies have been reluctant to detail staff to other organizations or deploy them overseas for interagency efforts due to concerns that the agency may be unable to perform its work without these employees. For example, we reported in October 2007 that in the face of resource constraints, officials in 37 state and local government information fusion centers—collaborative efforts intended to detect, prevent, investigate, and respond to criminal and terrorist activity—said they encountered challenges with federal, state, and local agencies not being able to detail personnel to their fusion center. Fusion centers rely on such details to staff the centers and enhance information sharing with other state and local agencies. An official at one fusion center said that, because of already limited resources in state and local agencies, it was challenging to convince these agencies to contribute personnel to the center because they viewed doing so as a loss of resources. Moreover, we reported in November 2007 that the State Department’s Office of the Coordinator for Reconstruction and Stabilization had difficulty getting the State Department’s other units to release Standby Response Corps volunteers to deploy for interagency stabilization and reconstruction operations because the home units of these volunteers did not want to become short-staffed or lose high-performing staff to other operations. In the same report, we also found that other agencies reported a reluctance to deploy staff overseas or establish on-call units to support interagency stabilization and reconstruction operations because doing so would leave fewer workers available to complete the home offices’ normal work requirements. 
In addition to the lack of personnel, many national security experts argue that federal government agencies do not have the necessary capabilities to support their national security roles and responsibilities. For example, in September 2009, we reported that 31 percent of the State Department’s Foreign Service generalists and specialists in language-designated positions worldwide did not meet both the language speaking and reading proficiency requirements for their positions as of October 2008, up from 29 percent in 2005. To meet these language requirements, we reported that the State Department efforts include a combination of language training, special recruitment incentives for personnel with foreign language skills, and bonus pay to personnel with proficiency in certain languages, but the department faces several challenges to these efforts, particularly staffing shortages that limit the “personnel float” needed to allow staff to take language training. Similarly, we reported in September 2008 that USAID officials at some overseas missions told us that they did not receive adequate and timely acquisition and assistance support at times, in part because the numbers of USAID staff were insufficient or because the USAID staff lacked necessary competencies. National security experts have expressed concerns that unless the full range of civilian and military expertise and capabilities are effective and available in sufficient capacity, decision makers will be unable to manage and resolve national security issues. In the absence of sufficient personnel, some agencies have relied on contractors to fill roles that traditionally had been performed by government employees. As we explained in October 2008, DOD, the State Department, and USAID have relied extensively on contractors to support troops and civilian personnel and to oversee and carry out reconstruction efforts in Iraq and Afghanistan. While the use of contractors to support U.S. military operations is not new, the number of contractors and the work they were performing in Iraq and Afghanistan represent an increased reliance on contractors to carry out agency missions. Moreover, as agencies have relied more heavily on contractors to provide professional, administrative, and management support services, we previously reported that some agencies had hired contractors for sensitive positions in reaction to a shortfall in the government workforce rather than as a planned strategy to help achieve an agency mission. For example, our prior work has shown that DOD relied heavily on contractor personnel to augment its in-house workforce. In our March 2008 report on defense contracting issues, we reported that in 15 of the 21 DOD offices we reviewed, contractor personnel outnumbered DOD personnel and constituted as much as 88 percent of the workforce. While use of contractors provides the government certain benefits, such as increased flexibility in fulfilling immediate needs, we and others have raised concerns about the federal government’s services contracting. These concerns include the risk of paying more than necessary for work, the risk of loss of government control over and accountability for policy and program decisions, the potential for improper use of personal services contracts, and the increased potential for conflicts of interest. 
Given the limited civilian capacity, DOD has tended to become the default responder to international and domestic events, although DOD does not always have all of the needed expertise and capabilities possessed by other federal government agencies. For example, we reported in May 2007 that DOD was playing an increased role in stability operations activities, an area that DOD directed be given priority on par with combat operations in November 2005. These activities required the department to employ an increasing number of personnel with specific skills and capabilities, such as those in civil affairs and psychological operations units. However, we found that DOD had encountered challenges in identifying stability operations capabilities and had not yet systematically identified and prioritized the full range of needed capabilities. While the services were each pursuing efforts to improve current capabilities, such as those associated with civil affairs and language skills, we stated that these initiatives may not reflect the comprehensive set of capabilities that would be needed to effectively accomplish stability operations in the future. Since then, DOD has taken steps to improve its capacity to develop and maintain capabilities and skills to perform tasks such as stabilization and reconstruction operations. For example, in June 2009, we noted the increased emphasis that DOD has placed on improving the foreign language and regional proficiency of U.S. forces. In February 2009, the Secretary of Defense acknowledged that the military and civilian elements of the United States’ national security apparatus have grown increasingly out of balance, and he attributed this problem to a lack of civilian capacity. The 2008 National Defense Strategy notes that greater civilian participation is necessary both to make military operations successful and to relieve stress on the military. However, national security experts have noted that while rhetoric about the importance of nonmilitary capabilities has grown, funding and capabilities have remained small compared to the challenge. As a result, some national security experts have expressed concern that if DOD continues in this default responder role, it could lead to the militarization of foreign policy and may exacerbate the lack of civilian capacity. Similarly, we reported in February 2009 that State Department and USAID officials, as well as many nongovernmental organizations, believed that the creation of the U.S. Africa Command could blur the traditional boundaries among diplomacy, development, and defense, regardless of DOD’s intention that this command support rather than lead U.S. efforts in Africa, thereby giving the perception of militarizing foreign policy and aid. Agencies’ personnel systems do not always facilitate interagency collaboration, with interagency assignments often not being considered career-enhancing or recognized in agency performance management systems, which could diminish agency employees’ interest in serving in interagency efforts. For example, in May 2007 we reported that the Federal Bureau of Investigation (FBI) had difficulty filling permanent overseas positions because the FBI did not provide career rewards and incentives to agents or develop a culture that promoted the importance and value of overseas duty. 
As a result, permanent FBI positions were either unfilled or staffed with nonpermanent staff on temporary, short-term rotations, which limited the FBI’s ability to collaborate with foreign nations to identify, disrupt, and prosecute terrorists. At the time of that review, the FBI had just begun to implement career incentives to encourage staff to volunteer for overseas duty, but we were unable to assess the effect of these incentives on staffing problems because the incentives had just been implemented. Moreover, in June 2009 we reviewed compensation policies for six agencies that deployed civilian personnel to Iraq and Afghanistan, and reported that variations in policies for such areas as overtime rate, premium pay eligibility, and deployment status could result in monetary differences of tens of thousands of dollars per year. The Office of Personnel Management (OPM) acknowledged that laws and agency policy could result in federal government agencies paying different amounts of compensation to deployed civilians at equivalent pay grades who are working under the same conditions and facing the same risks. In addition, we previously identified reinforcing individual accountability for collaborative efforts through agency performance management systems as a key practice that can help enhance and sustain collaboration among federal agencies. However, our prior work has shown that assignments that involve collaborating with other agencies may not be rewarded. For example, in April 2009 we reported that officials from the Departments of Commerce, Energy, Health and Human Services, and the Treasury stated that providing support for State Department foreign assistance program processes creates an additional workload that is neither recognized by their agencies nor included as a factor in their performance ratings. Furthermore, agency personnel systems may not readily facilitate assigning personnel from one agency to another, which could hinder interagency collaboration. For example, we testified in July 2008 that, according to DOD officials, personnel systems among federal agencies were incompatible, which did not readily facilitate the assignment of non-DOD personnel into the new U.S. Africa Command. Increasing training opportunities and focusing on strategic workforce planning efforts are two tools that could facilitate federal agencies’ ability to fully participate in interagency collaboration activities. We have previously testified that agencies need to have effective training and development programs to address gaps in the skills and competencies that they identified in their workforces. Training and developing personnel to fill new and different roles will play a crucial part in the federal government’s endeavors to meet its transformation challenges. Some agencies have ongoing efforts to educate senior leaders about the importance of interagency collaboration. For example, we reported in February 2009 that DOD’s 2008 update to its civilian human capital strategic plan identifies the need for senior leaders to understand interagency roles and responsibilities as a necessary leadership capability. We explained that DOD’s new Defense Senior Leader Development Program focuses on developing senior leaders to excel in the 21st century’s joint, interagency, and multinational environment and supports the governmentwide effort to foster interagency cooperation and information sharing. Training can help personnel develop the skills and understanding of other agencies’ capabilities needed to facilitate interagency collaboration.
A lack of understanding of other agencies’ cultures, processes, and core capabilities can hamper U.S. national security partners’ ability to work together effectively. However, civilian professionals have had limited opportunities to participate in interagency training or education opportunities. For example, we reported in November 2007 that the State Department did not have the capacity at that time to ensure that its Standby Response Corps volunteers were properly trained for participating in stabilization and reconstruction operations because the Foreign Service Institute did not have the capacity to train the 1,500 new volunteers the State Department planned to recruit in 2009. Efforts such as the National Security Professional Development Program, an initiative launched in May 2007, are designed to provide the training necessary to improve the ability of U.S. government personnel to address a range of interagency issues. When it is fully established and implemented, this program is intended to use intergovernmental training and professional education to provide national security professionals with a breadth and depth of knowledge and skills in areas common to international and homeland security. It is intended to educate national security professionals in capabilities such as collaborating with other agencies, and planning and managing interagency operations. A July 2008 Congressional Research Service report stated that many officials and observers have contended that legislation would be necessary to ensure the success of any interagency career development program because, without the assurance that a program would continue into the future, individuals might be less likely to risk the investment of their time, and agencies might be less likely to risk the investment of their resources. Some national security experts say that implementation of the program has lagged, but that the program could be reenergized with high-level attention. The Executive Director of the National Security Professional Development Integration Office testified in April 2009 that the current administration is in strong agreement with the overall intent for the program and was developing a way ahead to build on past successes while charting new directions where necessary. Agencies also can use strategic workforce planning as a tool to support their efforts to secure the personnel resources needed to collaborate in interagency missions. In our prior work, we have found that tools like strategic workforce planning and human capital strategies are integral to managing resources as they enable an agency to define staffing levels, identify critical skills needed to achieve its mission, and eliminate or mitigate gaps between current and future skills and competencies. In designating strategic human capital management as a governmentwide high-risk area in 2001, we explained that it is critically important that federal agencies put greater focus on workforce planning and take the necessary steps to build, sustain, and effectively deploy the skilled, knowledgeable, diverse, and performance-oriented workforce needed to meet the current and emerging needs of government and its citizens. Strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring agencies have the talent they need for future challenges, which may include interagency collaboration. 
Without integrating strategic human capital planning with broader organizational strategic planning, agencies may lose experienced staff and talent. For example, in July 2009 we reported that the State Department could not determine whether it met its objective of retaining experienced staff while restructuring its Arms Control and Nonproliferation Bureaus because there were no measurable goals for retention of experienced staff. As a result, some offices affected by the restructuring experienced significant losses in staff expertise. Additionally, in March 2007 we testified that one of the critical needs addressed by strategic workforce planning is developing long-term strategies for acquiring, developing, motivating, and retaining staff to achieve programmatic goals. We also stated that agencies need to strengthen their efforts and use of available flexibilities to acquire, develop, motivate, and retain talent to address gaps in talent due to changes in the knowledge, skills, and competencies in occupations needed to meet their missions. For example, in September 2008 we reported that USAID lacked the capacity to develop and implement a strategic acquisition and assistance workforce plan that could enable the agency to better match staff levels to changing workloads because it had not collected comprehensive information on the competencies—including knowledge, skills, abilities, and experience levels—of its overseas acquisition and assistance specialists. We explained that USAID could use this information to better identify its critical staffing needs and adjust its staffing patterns to meet those needs and address workload imbalances. Furthermore, in December 2005 we reported that the Office of the U.S. Trade Representative, a small trade agency that receives support from other larger agencies (e.g., the Departments of Commerce, State, and Agriculture) in doing its work, did not formally discuss or plan human capital resources at the interagency level, even though it must depend on the availability of these critical resources to achieve its mission. Such interagency planning also would facilitate human capital planning by the other agencies that work with the Office of the U.S. Trade Representative, which stated that potential budget cuts could result in fewer resources being available to support the trade agency. As a result, since the Office of the U.S. Trade Representative did not provide the other agencies with specific resource requirements when the agencies were planning, it shifted the risk to the other agencies of having to later ensure the availability of staff in support of the trade agenda, potentially straining their ability to achieve other agency missions. In recent years we have recommended that the Secretaries of State and Defense, the Administrator of USAID, and the U.S. Trade Representative take a variety of actions to address the human capital issues discussed above, such as staffing shortfalls, training, and strategic planning. 
Specifically, we have made recommendations to develop strategic human capital management systems and undertake strategic human capital planning; include measurable goals in strategic plans; identify the appropriate mix of contractor and government employees needed and develop plans to fill those needs; seek formal commitments from contributing agencies to provide personnel to meet interagency personnel requirements; develop alternative ways to obtain interagency perspectives in the event that interagency personnel cannot be provided due to resource limitations; develop and implement long-term workforce management plans; and implement a training program to ensure employees develop and maintain needed skills. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our April 2009 report on foreign aid reform, we recommended that the State Department develop a long-term workforce management plan to periodically assess its workforce capacity to manage foreign assistance. The State Department noted in its comments that it concurred with the idea of further improving employee skill sets and would work to encourage and implement further training.

Oversight Questions: What incentives are needed to encourage agencies to share personnel with other agencies? How can agencies overcome cultural differences to enhance collaboration and achieve greater unity of effort? How can agencies expand training opportunities for integrating civilian and military personnel? What changes in agency personnel systems are needed to address human capital challenges that impede agencies' ability to properly staff interagency collaboration efforts? What incentives are needed to encourage employees in national security agencies to seek interagency experience, training, and work opportunities? How can agencies effectively meet their primary missions and support interagency activities in light of the resource constraints they face? How can agencies increase staffing of interagency functions across the national security community? What are the benefits and drawbacks of enacting legislation to support the National Security Professional Development Program? What legislative changes might enable agencies to develop a workforce that can enhance collaboration in national security activities?

The government's single greatest failure preceding the September 11, 2001, attacks was the inability of federal agencies to effectively share information about suspected terrorists and their activities, according to the Vice Chair of the 9/11 Commission. As such, sharing and integrating national security information among federal, state, local, and private-sector partners is critical to assessing and responding to current threats to our national security. At the same time, agencies must balance the need to share information with the need to protect it from widespread access. Since January 2005, we have designated information sharing for homeland security as high risk because the government has faced serious challenges in analyzing key information and disseminating it among federal, state, local, and private-sector partners in a timely, accurate, and useful way. Although federal, state, local, and private-sector partners have made progress in sharing information, challenges still remain in sharing, as well as accessing, managing, and integrating information.
Congress and the administration will need to ensure that agencies remain committed to sharing relevant national security information, increasing access to necessary information, and effectively managing and integrating information across multiple agencies. Our prior work has shown that agencies do not always share relevant information with their national security partners, including other federal government agencies, state and local governments, and the private sector. Information is a crucial tool in addressing national security issues and its timely dissemination is absolutely critical for maintaining national security. Information relevant to national security includes terrorism- related information, drug intelligence, and planning information for interagency operations. As a result of the lack of information sharing, federal, state, and local governments may not have all the information they need to analyze threats and vulnerabilities. More than 8 years after 9/11, federal, state, and local governments, and private-sector partners are making progress in sharing terrorism-related information. For example, we reported in October 2007 that most states and many local governments had established fusion centers— collaborative efforts to detect, prevent, investigate, and respond to criminal and terrorist activity—to address gaps in information sharing. In addition, in October 2008 we reported that the Department of Homeland Security was replacing its information-sharing system with a follow-on system. In our analysis of the follow-on system, however, we found that the Department of Homeland Security had not fully defined requirements or ways to better manage risks for the next version of its information- sharing system. Additionally, in January 2009 we reported that the Department of Homeland Security was implementing an information- sharing policy and governance structure to improve how it collects, analyzes, and shares homeland security information across the department and with state and local partners. Based on our prior work, we identified four key reasons that agencies may not always share all relevant information with their national security partners. Concerns about agencies’ ability to protect shared information or use that information properly. Since national security information is sensitive by its nature, agencies and private-sector partners are sometimes hesitant to share information because they are uncertain if that information can be protected by the recipient or will be used properly. For example, in March 2006, we reported that Department of Homeland Security officials expressed concerns about sharing terrorism-related information with state and local partners because such information had occasionally been posted on public Internet sites or otherwise compromised. Similarly, in April 2006, we reported that private-sector partners were reluctant to share critical-infrastructure information—such as information on banking and financial institutions, energy production, and telecommunications networks—due to concerns on how the information would be used and the ability of other agencies to keep that information secure. Cultural factors or political concerns. Agencies may not share information because doing so may be outside their organizational cultures or because of political concerns, such as exposing potential vulnerabilities within the agency. 
As we noted in enclosure II of this report, we stated in a May 2007 report on interagency planning for stability operations that State Department officials told us that the Department of Defense’s (DOD) hierarchical approach to sharing military plans, which required Secretary of Defense approval to present aspects of plans to the National Security Council for interagency coordination, limited interagency participation in the combatant commands’ plan development and had been a significant obstacle to achieving a unified governmentwide approach in those plans. Moreover, in our September 2009 report on DOD’s U.S. Northern Command’s (NORTHCOM) exercise program, we noted that inconsistencies with how NORTHCOM involved states in planning, conducting, and assessing exercises occurred in part because NORTHCOM officials lacked experience in dealing with the differing emergency management structures, capabilities, and needs of the states. Additionally, in our April 2008 report on NORTHCOM’s coordination with state governments, we noted that the legal and historical limits of the nation’s constitutional federal-state structure posed a unique challenge for NORTHCOM in mission preparation. That is, NORTHCOM may need to assist states with civil support, which means that NORTHCOM must consider the jurisdictions of 49 state governments and the District of Columbia when planning its missions. NORTHCOM found that some state and local governments were reluctant to share their emergency response plans with NORTHCOM for fear that DOD would “grade” their plans or publicize potential capability gaps, with an accompanying political cost. Lack of clear guidelines, policies, or agreements for coordinating with other agencies. Agencies have diverse requirements and practices for protecting their information, and thus may not share information without clearly defined guidelines, policies, or agreements for doing so. We reported in April 2008 that NORTHCOM generally was not familiar with state emergency response plans because there were no guidelines for gaining access to those plans. As a result, NORTHCOM did not know what state capabilities existed, increasing the risk that NORTHCOM may not be prepared with the resources needed to respond to homeland defense and civil support operations. We also reported in March 2009 about the lack of information sharing between the Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE). Since 9/11, DEA has supported U.S. counterterrorism efforts by prioritizing drug-trafficking cases linked to terrorism. DEA partners with federal, state, and local agencies—including ICE—to leverage counternarcotics resources. However, at the time of that review, ICE did not fully participate in two multiagency intelligence centers and did not share all of its drug-related intelligence with DEA. In one center, ICE did not participate because they did not have an agreement on the types of data ICE would provide and how sensitive confidential source information would be safeguarded. Without ICE’s drug-related intelligence, DEA could not effectively target major drug-trafficking organizations due to the potential for overlapping investigations and officer safety concerns. Security clearance issues. Agencies often have different ways of classifying information and different security clearance requirements and procedures that pose challenges to effective information sharing across agencies. 
In some cases, some national security partners do not have the clearances required to access national security information. Specifically, we reported in May 2007 that non-DOD personnel could not access some DOD planning documents or participate in planning sessions because they may not have had the proper security clearances, hindering interagency participation in the development of military plans. Additionally, in October 2007 we reported that some state and local fusion center officials stated that the length of time needed to obtain clearances and the lack of reciprocity, whereby an agency did not accept a clearance granted by another agency, prevented employees from accessing necessary information to perform their duties. In other cases, access to classified information can be limited by one partner, which can hinder integrated national security efforts. For example, we reported that DOD established the National Security Space Office to integrate efforts between DOD and the National Reconnaissance Office, a defense intelligence agency jointly managed by the Secretary of Defense and the Director of National Intelligence. However, in 2005, the National Reconnaissance Office Director withdrew full access to a classified information-sharing network from the National Security Space Office, which inhibited efforts to further integrate defense and national space activities, including intelligence, surveillance, and reconnaissance activities.

When agencies do share information, managing and integrating information from multiple sources presents challenges regarding redundancies in information sharing, unclear roles and responsibilities, and data comparability. As the Congressional Research Service reported in January 2008, one argument for fusing a broader range of data, including nontraditional data sources, is to help create a more comprehensive threat picture. The 9/11 Commission Report stated that because no one agency or organization holds all relevant information, information from all relevant sources needs to be integrated in order to “connect the dots.” Without integration, agencies may not receive all relevant information. Some progress has been made in managing and integrating information from multiple agencies by streamlining usage of the “sensitive but unclassified” designation. In March 2006, we reported that the large number of sensitive but unclassified designations used to protect mission-critical information and a lack of consistent policies for their use created difficulties in sharing information by potentially restricting material unnecessarily or disseminating information that should be restricted. We subsequently testified in July 2008 that the President had adopted “controlled unclassified information” to be the single categorical designation for sensitive but unclassified information throughout the executive branch and outlined a framework for identifying, marking, safeguarding, and disseminating this information. As we testified, a more streamlined definition and consistent application of policies for designating “controlled unclassified information” may help reduce difficulties in sharing information; however, monitoring agencies’ compliance will help ensure that the policy is employed consistently across the federal government. Based on our previous work, we identified three challenges posed by managing and integrating information drawn from multiple sources. Redundancies when integrating information.
Identical or similar types of information are collected by or submitted to multiple agencies, so integrating or sharing this information can lead to redundancies. For example, we reported in October 2007 that in intelligence fusion centers, multiple information systems created redundancies of information that made it difficult to discern what was relevant. As a result, end users were overwhelmed with duplicative information from multiple sources. Similarly, we reported in December 2008 that in Louisiana, reconstruction project information had to be repeatedly resubmitted separately to state and Federal Emergency Management Agency officials during post– Hurricane Katrina reconstruction efforts because the system used to track project information did not facilitate the exchange of documents. Information was sometimes lost during this exchange, requiring state officials to resubmit the information, creating redundancies and duplication of effort. As a result, reconstruction efforts in Louisiana were delayed. Unclear roles and responsibilities. Agency personnel may be unclear about their roles and responsibilities in the information-sharing process, which may impede information-sharing efforts. For example, we reported in April 2005 that officials in Coast Guard field offices did not clearly understand their role in helping nonfederal employees through the security clearance process. Although Coast Guard headquarters officials requested that Coast Guard field officials submit the names of nonfederal officials needing a security clearance, some Coast Guard field officials did not clearly understand that they were responsible for contacting nonfederal officials about the clearance process and thought that Coast Guard headquarters was processing security clearances for nonfederal officials. As a result of this misunderstanding, nonfederal employees did not receive their security clearances in a timely manner and could not access important security-related information that could have aided them in identifying or deterring illegal activities. Data may not be comparable across agencies. Agencies’ respective missions drive the types of data they collect, and so data may not be comparable across agencies. For example, we reported in October 2008 that biometric data, such as fingerprints and iris images, collected in DOD field activities such as those in Iraq and Afghanistan, were not comparable with data collected by other units or with large federal databases that store biometric data, such as the Department of Homeland Security biometric database or the Federal Bureau of Investigation (FBI) fingerprint database. For example, if a unit collects only iris images, this data cannot be used to match fingerprints collected by another unit or agency, such as in the FBI fingerprint database. A lack of comparable data, especially for use in DOD field activities, prevents agencies from determining whether the individuals they encounter are friend, foe, or neutral, and may put forces at risk. Since 2005, we have recommended that the Secretaries of Defense, Homeland Security, and State establish or clarify guidelines, agreements, or procedures for sharing a wide range of national security information, such as planning information, terrorism-related information, and reconstruction project information. 
We have recommended that such guidelines, agreements, and procedures define and communicate how shared information will be protected; include provisions to involve and obtain information from nonfederal partners in the planning process; ensure that agencies fully participate in interagency information-sharing efforts; identify and disseminate practices to facilitate more effective communication among federal, state, and local agencies; clarify roles and responsibilities in the information-sharing process; and establish baseline standards for data collection to ensure comparability across agencies. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our December 2008 report on the Federal Emergency Management Agency's public assistance grant program, we recommended that the Federal Emergency Management Agency improve information sharing within the public assistance process by identifying and disseminating practices that facilitate more effective communication among federal, state, and local entities. In comments on a draft of the report, the Federal Emergency Management Agency generally concurred with the recommendation and noted that it was making a concerted effort to improve collaboration and information sharing within the public assistance process. Moreover, agencies have implemented some of our past recommendations. For example, in our April 2006 report on protecting and sharing critical infrastructure information, we recommended that the Department of Homeland Security define and communicate to the private sector what information is needed and how the information would be used. The Department of Homeland Security concurred with our recommendation and, in response, has made available, through its public Web site, answers to frequently asked questions that define the type of information collected and what it is used for, as well as how the information will be accessed, handled, and used by federal, state, and local government employees and their contractors.

Oversight Questions: What steps are needed to develop and implement interagency protocols for sharing information? What steps are being taken to promote access to relevant databases? How do agencies balance the need to keep information secure and the need to share information to maximize interagency efforts? How can agencies encourage effective information sharing? What are ways in which the security clearance process can be streamlined and security clearance reciprocity among agencies can be ensured?

In addition, the following staff contributed to the report: John H. Pendleton, Director; Marie Mak, Assistant Director; Hilary Benedict; Cathleen Berrick; Renee Brown; Leigh Caraher; Grace Cho; Joe Christoff; Elizabeth Curda; Judy McCloskey; Lorelei St. James; and Bernice Steinhardt.

Related GAO Products
Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009.
Influenza Pandemic: Continued Focus on the Nation's Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009.
U.S. Public Diplomacy: Key Issues for Congressional Oversight. GAO-09-679SP. Washington, D.C.: May 27, 2009.
Military Operations: Actions Needed to Improve Oversight and Interagency Coordination for the Commander's Emergency Response Program in Afghanistan. GAO-09-61. Washington, D.C.: May 18, 2009.
Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: April 17, 2009.
Iraq and Afghanistan: Security, Economic, and Governance Challenges to Rebuilding Efforts Should Be Addressed in U.S. Strategies. GAO-09-476T. Washington, D.C.: March 25, 2009.
Drug Control: Better Coordination with the Department of Homeland Security and an Updated Accountability Framework Can Further Enhance DEA's Efforts to Meet Post-9/11 Responsibilities. GAO-09-63. Washington, D.C.: March 20, 2009.
Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009.
Combating Terrorism: Actions Needed to Enhance Implementation of Trans-Sahara Counterterrorism Partnership. GAO-08-860. Washington, D.C.: July 31, 2008.
Information Sharing: Definition of the Results to Be Achieved in Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-637T. Washington, D.C.: July 23, 2008.
Highlights of a GAO Forum: Enhancing U.S. Partnerships in Countering Transnational Terrorism. GAO-08-887SP. Washington, D.C.: July 2008.
Stabilization and Reconstruction: Actions Are Needed to Develop a Planning and Coordination Framework and Establish the Civilian Reserve Corps. GAO-08-39. Washington, D.C.: November 6, 2007.
Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007.
Military Operations: Actions Needed to Improve DOD's Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007.
Combating Terrorism: Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists. GAO-07-697. Washington, D.C.: May 25, 2007.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
While national security activities, which range from planning for an influenza pandemic to Iraq reconstruction, require collaboration among multiple agencies, the mechanisms used for such activities may not provide the means for interagency collaboration needed to meet modern national security challenges. To assist the 111th Congress and the new administration in developing their oversight and management agendas, this report, which was prepared under the Comptroller General's authority, addresses actions needed to enhance interagency collaboration for national security activities: (1) the development and implementation of overarching, integrated strategies; (2) the creation of collaborative organizations; (3) the development of a well-trained workforce; and (4) the sharing and integration of national security information across agencies. This report is based largely on a body of GAO work issued since 2005. Based on prior work, GAO has found that agencies need to take the following actions to enhance interagency collaboration for national security:

Develop and implement overarching strategies. Although some U.S. government agencies have developed or updated overarching strategies on national security issues, GAO has reported that in some cases, such as U.S. government efforts to improve the capacity of Iraq's ministries to govern, U.S. efforts have been hindered by multiple agencies pursuing individual efforts without an overarching strategy. In particular, a strategy defining organizational roles and responsibilities and coordination mechanisms can help agencies clarify who will lead or participate in activities, organize their joint and individual efforts, and facilitate decision making.

Create collaborative organizations. Organizational differences, including differences in agencies' structures, planning processes, and funding sources, can hinder interagency collaboration, potentially wasting scarce funds and limiting the effectiveness of federal efforts. For example, defense and national intelligence activities are funded through separate budgets, and disagreement about funding from each budget led to the initial operating capability date for a new space radar system being pushed back 1 year. In addition, coordination mechanisms are not always formalized or fully utilized, potentially limiting their effectiveness in enhancing interagency collaboration.

Develop a well-trained workforce. Collaborative approaches to national security require a well-trained workforce with the skills and experience to integrate the government's diverse capabilities and resources, but some federal government agencies lack the personnel capacity to fully participate in interagency activities. Some federal agencies have taken steps to improve their capacity to participate in interagency activities, but personnel shortages have impeded agencies' ability to participate in these activities, such as efforts to integrate personnel from other federal government agencies into the Department of Defense's (DOD) new U.S. Africa Command. Increased training opportunities and strategic workforce planning efforts could facilitate federal agencies' ability to fully participate in interagency collaboration activities.

Share and integrate national security information across agencies. Information is a crucial tool in national security, and its timely dissemination is critical for maintaining national security.
However, despite progress made in sharing terrorism-related information, agencies and private-sector partners do not always share relevant information with their national security partners due to a lack of clear guidelines for sharing information and security clearance issues. For example, GAO found that non-DOD personnel could not access some DOD planning documents or participate in planning sessions because they may not have had the proper security clearances. Additionally, incorporating information drawn from multiple sources poses challenges to managing and integrating that information.
You are an expert at summarizing long articles. Proceed to summarize the following text: The U.S. Trustee Program (USTP) is one of several federal agencies involved in the U.S. bankruptcy system, but it is the only executive branch agency responsible for providing oversight of bankruptcy cases. The agency consists of the Executive Office for U.S. Trustees (EOUST), which provides general policy and legal guidance, oversees operations, and handles administrative functions; and 21 U.S. Trustees (USTs), who oversee 93 field office locations and supervise the administration of federal bankruptcy cases. Each of the field offices is managed by an Assistant U.S. Trustee (AUST), who is responsible for day-to-day oversight of federal bankruptcy cases. In addition to fee review activity, the USTP also investigates and civilly prosecutes bankruptcy fraud and abuse; refers suspected criminal activity to the U.S. Attorney and other law enforcement partners; monitors and takes action to address the conduct of debtors, creditors, attorneys, credit counselors, and others; oversees private trustees; and ensures compliance with applicable laws and regulations in all bankruptcy cases, from individual consumer filings to large corporate reorganizations.

Six basic types of bankruptcy are provided for under chapters of the Bankruptcy Code, depending on factors such as whether the debtor is an individual, corporation, or municipality, and whether the debtor seeks to reorganize or liquidate existing assets and liabilities. Table 1 shows the differences between the types of bankruptcy and whether the USTP is responsible for reviewing professional fee applications for that bankruptcy type. Per the Bankruptcy Code, the USTP is responsible for reviewing fee applications for five types of cases: those under Chapters 7, 11, 12, 13, and 15 of the Bankruptcy Code. According to USTP officials, the USTP's fee review responsibilities for Chapter 11 bankruptcy cases account for approximately 5 to 10 percent of the agency's activities, though the amount of time varies by office and the type of case.

Figure 1 provides an overview of the fee application and review process in Chapter 11 bankruptcy, including cases subject to the 2013 guidelines. After a bankruptcy petition is filed, attorneys seeking to represent debtors or others must submit retention applications for approval by the court. Once their retention is approved, attorneys seeking to be compensated from the estate (the pool of assets and monies available to pay creditors) may submit interim fee applications every 120 days, or more often if permitted by the court. The interim fee application allows attorneys to receive compensation for their work before the conclusion of the bankruptcy proceeding. At the conclusion of the case, attorneys submit a final fee application to the court. Both the USTP and the bankruptcy judge are responsible for ensuring that fees are reasonable and necessary, and may review any submitted documentation associated with fees to assist in that determination. While the USTP has standing to object to requested fees, the bankruptcy judge is responsible for determining the final fee amount to be awarded. According to USTP officials, if the USTP identifies a concern associated with a fee application it has reviewed, such as staffing inefficiencies (including duplication of work) or requests for compensation for first-class travel, the USTP may seek to resolve it by informally contacting the firm submitting the application through an inquiry or by filing an objection to the fee application for the bankruptcy court to consider.
Several factors influence how many inquiries and objections are made by the USTP offices in a given case with regard to professional fee applications. According to USTP officials, these can include the complexity of the case, the preferences of the court or USTP, or the experience of the firm filing the application. For instance, an experienced firm may be familiar with the USTP’s expectations for fee applications, reducing the need for the USTP to request additional information. The goals of the 2013 guidelines are to, among other things, help ensure attorneys’ fees in bankruptcy cases are comparable to those charged for nonbankruptcy activities, increase the transparency and efficiency of the fee application and review process, and increase public confidence in the integrity of the process. As discussed previously, the USTP established fee review guidelines in 1996. The 1996 guidelines detailed the type of information and disclosures that the USTP expects professionals to include in their fee applications. Like the 1996 guidelines, the 2013 guidelines are policies and procedures for USTP staff to follow when reviewing fee applications. Unlike the 1996 guidelines, however, the 2013 guidelines apply only to attorneys’ fee applications in Chapter 11 cases where the debtor’s bankruptcy petition lists assets and liabilities each of $50 million or more. In addition to the provisions detailed in the 1996 guidelines, the 2013 guidelines outline the USTP’s expectations that attorneys provide the following information in their fee applications: information about the firm’s blended hourly rates for nonbankruptcy activities (comparable billing rates); budgets and staffing plans, to include explanations when fees requested exceed the budget by 10 percent; electronic billing data, generally in the form of LEDES (legal electronic data exchange standard) data; client and applicant statements on issues including customary billing rates, fees, or terms of service; and disclosures regarding rate increases. According to senior USTP officials, the USTP began to develop the 2013 guidelines in 2010 in an effort to address concerns about the size of attorneys’ fees in large Chapter 11 cases. The 2013 guidelines were also intended to update the USTP’s fee review practices to better reflect advancements in law firm billing practices and technology. To develop the 2013 guidelines, officials created an internal working group and sought input from bankruptcy stakeholders, such as judges, legal industry groups, and attorneys. The USTP published drafts of the guidelines on its public website for public comment in November 2011 and November 2012, and received more than 30 comment letters in response. The USTP also held a public meeting on June 4, 2012, on the draft guidelines. Two of the newly proposed provisions—the comparable billing rate provision and the budgeting and staffing plan provision—were frequently discussed in the comment letters and during the public meeting. According to an EOUST official, the comparable billing rate provision received the most commentary during the public comment period and meeting and was therefore revised more extensively than other provisions. For example, the comment letters from the National Bankruptcy Conference (NBC) and a letter signed by 119 bankruptcy law firms identified several concerns with the draft guidelines’ proposal requesting the “highest, lowest, and average hourly rates” charged by firms for all activities. 
Among these was the concern that such detailed information about the highest and lowest rates billed does not accurately compare with a firm’s “customary compensation charged,” which is the standard identified by the Bankruptcy Code. The NBC also submitted a separate comment letter proposing multiple alternatives for the USTP to consider. Similarly, comment letters from both the NBC and the 119 law firms expressed concerns regarding the draft budgeting provision, particularly with regard to the public disclosure of budgets. After incorporating changes and clarifying provisions based on the public comments, including allowing redaction of sensitive budgetary information and removing the requirement to disclose the highest and lowest rates billed, the USTP published the final guidelines in the Federal Register in June 2013 and they went into effect in November 2013. Our analysis of USTP data and interviews with bankruptcy stakeholders (AUSTs, judges, and attorneys) indicate that attorneys’ fee applications for guidelines cases have generally contained the information requested by the 2013 guidelines. Bankruptcy stakeholders we interviewed had mixed perspectives on the overall value of the guidelines and on their potential effect on the efficiency and transparency of the Chapter 11 bankruptcy process, or the fees awarded. Similarly, opinions regarding the effect of specific provisions of the 2013 guidelines—including provisions on electronic billing, budgeting and staffing plans, and comparable billing rates—also varied by group. Analysis of USTP data and interviews with bankruptcy stakeholders indicate that attorneys’ fee applications for guidelines cases (those cases with assets and liabilities each of $50 million or more) have generally contained the information requested by the 2013 guidelines. For the 94 guidelines cases filed from November 2013 through March 2015, our analysis of USTP data found that the USTP did not make any inquiries or objections related to the guidelines for fee applications in half (47) of the cases. For the other 47 cases, the USTP identified issues in submitted fee applications that were associated with the guidelines, almost all of which were related to three provisions: (1) budgeting and staffing plans, (2) comparable billing data, and (3) electronic billing records. Specifically, in 36 guidelines cases, the USTP made 98 inquiries and objections, 90 of which were related to one or more of these three provisions in the 2013 guidelines. For example, in one fee application, a firm requested an hourly rate that was higher than the comparable rate it charged in nonbankruptcy activities and exceeded its budget in certain project categories. Because the firm did not provide any explanation in its original fee application, the USTP informally raised the issues with the firm, which then provided an explanation in supplementary information filed with the USTP and the court. In total, attorneys were able to resolve 92 of the 98 inquiries and objections to the satisfaction of the USTP. Almost all were resolved by the attorney providing an oral or written explanation or by filing supplementary information. An internal spreadsheet maintained by the EOUST to provide additional oversight of the 2013 guidelines’ implementation noted compliance issues in 11 other guidelines cases. Attorneys in 10 of these 11 cases also addressed the USTP’s concerns by providing supplemental information or agreeing to do so in future cases. 
Bankruptcy stakeholders we interviewed who had participated in at least 1 guidelines case also reported that fee applications filed by attorneys in those cases generally contained information requested by the 2013 guidelines. Of the 57 bankruptcy stakeholders we interviewed, 38 had participated in at least 1 guidelines case. Of these 38, 29 reported that fee applications filed by attorneys in the case or cases they were involved in had at least partially observed the 2013 guidelines. Two judges we interviewed stated that in the fee applications they reviewed for guidelines cases they presided over, attorneys did not observe the guidelines’ provisions. Seven stakeholders could not comment on whether fee applications in the guidelines cases they participated in had observed the guidelines. As discussed earlier in this report, the 2013 guidelines are not law, and accordingly, are not binding on courts, debtors, or attorneys. Of the 25 judges we interviewed, 6 noted they were likely to use some, but not all, of the 2013 guidelines’ provisions when reviewing fee applications. For example, 1 judge stated that she would not require attorneys to provide budgets or staffing plans in their fee applications, but does expect that other provisions of the guidelines, such as electronic billing data, will make her review of fee applications easier. Another 9 judges said they did not intend to use the guidelines when reviewing fee applications. Specifically, 6 of these judges noted that their fee reviews would instead be guided by their courts’ local rules. As 1 judge explained, his court’s local rules already require all the information he needs to review fee applications. Our analysis of the local rules for the 15 jurisdictions in our scope shows that 1 jurisdiction, the District of Nevada, generally incorporates USTP fee guidelines, while 14 jurisdictions do not. EOUST officials reported that as of December 2014, another 14 bankruptcy courts outside the scope of our report have at least partially adopted the 2013 guidelines. However, whether or not a jurisdiction has adopted the guidelines or a judge has chosen to use them to inform his or her own fee review does not preclude the USTP from implementing the guidelines’ provisions and commenting or objecting to fee applications, as the USTP deems appropriate. As the EOUST Director explained, the 2013 guidelines communicate to professionals and the general public the criteria used by the USTP in the review of fee applications and the USTP’s expectations of professionals. According to the USTP’s implementing guidance provided to staff, in most cases, the agency should not object to an attorney’s fee application based on noncompliance with the 2013 guidelines, and should instead reference the relevant provision in the Bankruptcy Code. However, the USTP can reference the provisions of the guidelines when reaching out to attorneys through an informal inquiry. Further, despite the reluctance of some judges to use the 2013 guidelines in their own review of fee applications, all 14 attorneys said that they included the information requested by the guidelines in relevant fee applications they submitted. Specifically, 5 of the attorneys noted that they try to maintain a good relationship with the USTP, and there is no reason not to comply with the 2013 guidelines. Bankruptcy stakeholders’ perspectives on the overall value of the 2013 guidelines varied according to stakeholder group. 
Fourteen of 18 AUSTs, who oversee day-to-day activities associated with implementing the 2013 guidelines, viewed the guidelines positively, noting, for example, that the additional information requested by the guidelines will allow them to more easily determine whether fees are reasonable. The remaining 4 AUSTs stated that it was too early for them to have an opinion about the guidelines overall. In contrast, of the 14 attorneys we interviewed, 6 held negative opinions of the guidelines, 6 held neutral opinions, and 2 expressed positive opinions. The six attorneys with negative opinions commented that the 2013 guidelines impose significant additional work for them to prepare fee applications without providing a commensurate benefit to the bankruptcy process. One attorney with a neutral opinion explained that while he was not sure whether the guidelines would be able to accomplish their objectives, complying with them had not required much additional work. One attorney with a positive opinion said the 2013 guidelines are generally straightforward and easy to follow. Judges' opinions regarding the overall value of the guidelines were also mixed. Of the 25 judges we interviewed, 12 stated that their overall opinion of the guidelines was positive, 7 stated that their overall opinion was negative, 3 stated it was neutral, and 3 stated that it was too early to have an opinion. Of the 12 judges who expressed a positive opinion of the guidelines, 4 noted that the guidelines will be useful in reviewing fee applications. In contrast, 5 of the 7 judges who had a negative opinion of the guidelines commented that they believed the guidelines were unnecessary. As one of these judges said, the new provisions do not address the issues that create problems with professional fees. Specifically, he explained that the guidelines do not prevent cases from “going off the rails,” or experiencing unexpected setbacks, which is when problems with professional fees can arise. Figure 2 shows the opinions of each stakeholder group.

Bankruptcy stakeholders also had mixed opinions, which varied by stakeholder group, regarding the effect of the guidelines on transparency, efficiency, or fees. Specifically, 15 of 18 AUSTs reported that they thought the guidelines were likely to have a positive effect on transparency, efficiency, or fees. In contrast, 9 of the 14 attorneys we interviewed said the guidelines were unlikely to have an effect on transparency, efficiency, or fees, in part because they believed the process that existed before the 2013 guidelines was already very transparent. Four of 14 attorneys stated that they believed the guidelines would increase transparency, but 3 of the 4 noted that the guidelines would not improve the efficiency of cases or reduce the fees awarded. For example, 1 attorney explained that the best way to reduce fees in bankruptcy is to improve efficiency to reduce the overall time a case takes to complete, and he was not sure whether the 2013 guidelines would be able to do so. The judges were split in their opinions: 11 of 25 responded that the guidelines were likely to have a positive effect on transparency, efficiency, or fees, and 10 of 25 responded that they were not likely to have an effect. For example, 1 judge said the 2013 guidelines would improve efficiency and transparency, making it easier for her to review and analyze fee applications.
In contrast, another judge said she did not believe the guidelines would have an effect on the status quo, because attorneys already bill at market rates. The effects or potential effects of three provisions of the 2013 guidelines—(1) electronic billing records, (2) comparable billing rate information, and (3) budgeting and staffing plans—were discussed most frequently by bankruptcy stakeholders during our interviews. Similar to the overall opinions of the guidelines, perspectives on the effects of these three provisions varied by stakeholder group.

1. Electronic billing records: This provision was developed to bring the USTP's procedures into line with modern nonbankruptcy billing technology and practice. The electronic billing records provision was cited by 22 of 43 AUSTs and judges as a provision likely to have a positive effect on the fee review process, in part because it will make the fee review process easier or more efficient. Only 2 attorneys mentioned the electronic billing record provision, and they noted that it was unlikely to have an effect.

2. Comparable billing rate information: This provision was developed to provide specific information about rates, in an effort to increase transparency and improve comparability between rates charged for bankruptcy and nonbankruptcy activities. The comparable billing rate information was cited by 20 of 57 bankruptcy stakeholders we interviewed as a provision likely to have an effect on the fee review process, with 8 of these stakeholders noting that this provision was likely to have a positive effect by increasing transparency. In contrast, 19 of 39 attorneys and judges reported that the comparable billing rate provision was unlikely to have an effect on the fee review process. For example, 5 judges and attorneys explained that the comparable billing information is unnecessary because rates charged in bankruptcy are already comparable to rates charged for nonbankruptcy activities, while another 6 said that trying to compare rates charged in bankruptcy with nonbankruptcy rates is “comparing apples to oranges” and is not a meaningful way to determine reasonable fees. See figure 3 for a breakdown of responses by stakeholder group.

3. Budgeting and staffing plans: This provision was developed to encourage firms to apply standard management and planning tools to bankruptcy cases, as is done in other types of cases, in an effort to increase transparency. The budget and staffing plan provision of the 2013 guidelines was cited by 26 of the 57 bankruptcy stakeholders we interviewed as a provision likely to have a positive effect on the fee review process. For example, stakeholders noted that the primary benefit of the budgeting and staffing provision is that it encourages attorneys to communicate, early in the case, information about the potential costs. In contrast, 16 of the 20 attorneys and judges who noted that the budgeting provision is unlikely to have an effect explained that bankruptcy cases are unpredictable, a fact that limits the value of a budget. See figure 4 for a breakdown of responses by stakeholder group.

Bankruptcy attorneys and judges we interviewed, and academic research we reviewed, identified several factors that contribute to venue selection. The most frequently cited factors—including prior court rulings, the preferences of lenders, and judge experience—all contribute to overall predictability in a case and can provide some insights into what to expect from a court as a case proceeds through the bankruptcy process.
Bankruptcy attorneys and judges we interviewed and academic research we reviewed also identified both positive and negative effects of the concentration of cases in the SDNY and Delaware. Bankruptcy attorneys and judges identified several factors that contribute to venue selection, or why a case may be filed in one court versus another. As discussed previously, companies filing for bankruptcy have several options available to them when determining the court, or venue, in which to file their case, including their place of incorporation, principal place of business or assets, or where an affiliate has filed a Chapter 11 bankruptcy case. The factors cited most frequently by the 39 attorneys and judges we interviewed as significant to venue selection include prior court rulings, the preferences of lenders, and judge experience. While no two cases are exactly alike, these factors were frequently discussed as important because of how they contribute to predictability in a case. For example, knowing a judge’s level of experience with large cases and how a court has ruled on certain matters can help an attorney advise a client about how a court is likely to respond to issues in a specific case. Attorneys and judges also mentioned other factors that can contribute to venue selection, such as perceived court attitudes on professional fees, convenience or proximity of the parties involved in a case to the court, and court administrative capacity, though the majority said that professional fees and court administrative capacity were either minor factors or not a factor in venue selection. See figure 5 for the frequency with which the various factors were cited. Thirty-three of the 39 attorneys and judges we interviewed cited prior local or circuit court rulings as a significant factor when selecting the venue in which to file a large Chapter 11 bankruptcy case, in part because understanding prior court rulings allows attorneys to better predict and advise clients on the possible outcomes of a case. For example, 1 attorney explained the importance of prior court rulings, noting that attorneys look at district and circuit court opinions to determine what is favorable for their particular client and what will matter for the particular case. Specifically, 6 of the 33 attorneys and judges stated that it is the attorney’s responsibility to act in the best interest of his or her client, and understanding prior court rulings and how they relate to venue selection is part of that. Two of these attorneys further commented that filing in a court without taking into account prior rulings could harm a company’s ability to successfully emerge from bankruptcy and could be considered malpractice. Twenty of the 39 attorneys and judges cited the preferences of lenders as a significant factor. As 3 attorneys we interviewed explained, lenders who provide financing to a company in distress may incorporate clauses in their financing agreements requiring the company to file in a certain jurisdiction. Nine of the 20 attorneys and judges noted that lenders incorporate such clauses because they prefer the predictability offered by certain courts. For example, 1 attorney stated that lenders may prefer the certainty of providing debtor financing in courts that have done hundreds of similar financing arrangements so that they know what motions are likely to be approved. 
According to 4 attorneys and judges we interviewed, other reasons that lenders may prefer certain jurisdictions can include the proximity of the court to their business operations. Sixteen of the 39 attorneys and judges cited judge experience as a significant factor in venue selection. Of these, 6 attorneys and judges stated that judge experience plays a key role in providing predictability in a case. As 1 judge explained, speaking from experience as a bankruptcy attorney, judge experience is very much a factor in venue selection and attorneys consider the competence and experience of judges on Chapter 11 matters when advising their clients. However, 6 other attorneys and judges said that in their opinion, all bankruptcy court judges are capable of handling large cases. Eight of the 39 attorneys and judges we interviewed identified perceived court attitudes on professional fees as a significant factor in venue selection. For example, 1 judge said that certain courts have a reputation for not approving fees over a certain amount and attorneys know which judges are “hard on fees.” In addition to the 1 attorney and 7 judges who identified fees as a significant factor, another 3 attorneys and 11 judges identified fees as a minor factor. In contrast, the majority of attorneys (10 of 14) and 7 of 25 judges said that professional fees were not a factor in venue selection. One attorney noted that he has tried cases in venues around the country and has not seen a difference in fees awarded. Five attorneys and judges said that professional fees may have played a bigger role in venue selection in the past. Twelve of 39 attorneys and judges cited convenience or proximity of the parties involved to the court as a factor in venue selection. Five attorneys and judges also discussed the administrative capacity of the court, or the court’s ability to process large cases, as a significant factor. However, the majority of respondents (29 of 39) said that administrative capacity was either a minor factor or not a factor. Academic studies we reviewed on venue selection used various methodological approaches to identify several of the factors raised by the stakeholders we interviewed, though the importance of these factors varied across the studies. For example, four studies, from 2002, 2004, and 2005, cited judge experience or the perceived expertise of the judge, or prevailing court rulings, as key factors in venue selection. According to one academic expert we interviewed, judge expertise was likely a consideration for General Motors and Chrysler in choosing to file for bankruptcy in New York because they were more likely to be assigned an experienced judge in New York than in Detroit, where their headquarters were located. The 2002 study also identified predictability and speed (how fast a case moves through the court) as the most prevalent factors, and lender preference as a less prevalent factor in venue selection. One study from 2004 suggested that firms with secured lenders exhibited a strong preference for filing in Delaware. Two studies from 2002 and 2004 cited attitudes towards fees as a factor in venue selection; however, one noted that fees were not the most prevalent factor. In contrast to this finding and the results of our interviews, one study we reviewed identified the perceived scrutiny of professional fees as a key factor in venue selection. 
In this study and in subsequent work, the author contends that attorneys chose to file in venues where they believed their fee requests would be approved and theorized that some courts may have relaxed their scrutiny of fees in order to attract more large cases. Bankruptcy attorneys and judges we interviewed and academic research we reviewed identified both positive and negative effects of the concentration of cases in the SDNY and Delaware. Of the 94 guidelines cases filed from November 2013 through March 2015, 64 percent (60 of 94) were filed in the SDNY or Delaware, with the majority filed in Delaware. Our analysis of data related to these cases found that in 5 of the 14 guidelines cases filed in the SDNY, and in all of the 46 guidelines cases filed in Delaware, the venue selected differed from the company’s headquarters address. See appendix III for a more detailed analysis of the venue rules used as the basis for selecting the filing location in guidelines cases. Thirty-two of the 39 attorneys and judges we spoke with cited at least one effect of the concentration of cases in the SDNY and Delaware, with 24 identifying positive effects and 29 identifying negative effects. The positive effect most commonly cited by attorneys (5 of 14) and judges (10 of 25) we interviewed was the significant large case experience developed by judges in the SDNY and Delaware. For example, 1 judge noted that the concentration of cases in the SDNY and Delaware has resulted in experienced judges with vetted, expedited procedures and processes. Six stakeholders noted that the body of court rulings the SDNY and Delaware courts have developed is another positive effect. Of the 18 stakeholders who mentioned one of these two benefits of case concentration, 8 explained that as a result, the SDNY and Delaware courts offer a degree of certainty in terms of case outcomes that does not exist in other courts. For example, 1 attorney said that he always advises clients on potential case outcomes, and the precedents in the SDNY and Delaware allow him to provide more predictability for his clients. Twenty-nine attorneys and judges identified negative effects of case concentration in the SDNY and Delaware, but they differed in their opinions of the most significant negative effects. The negative effects most commonly cited by attorneys (9 of 14) were the difficulty local bankruptcy firms face in maintaining a bankruptcy practice outside of the SDNY and Delaware and the lack of opportunity for courts to develop precedent and expertise outside of these jurisdictions. For example, 1 attorney noted that firms in other parts of the country lose bankruptcy clients to SDNY and Delaware firms. The negative effect most commonly cited by judges (12 of 25) was the challenges small creditors face when a case is filed far from a company’s headquarters location. Four judges said that moving a case from the company’s home jurisdiction can disenfranchise small businesses and individuals who may want to participate in a case but are unable to do so because of the expense of travelling to a court in another jurisdiction. Additionally, 4 judges and 1 attorney said that the concentration of cases has a negative effect on the bankruptcy system as a whole. For example, 1 judge noted that when money is spent hiring local counsel and traveling to distant courts instead of filing where a company’s assets and employees are located, people view the system as corrupt. 
Similarly, the academic studies we reviewed identified both positive and negative effects of case concentration in the SDNY and Delaware. As one academic expert we spoke with noted, there is a divergence in the academic findings and opinions related to venue. For example, two empirical studies from 1997 and 2004 suggested that cases filed in Delaware moved through the bankruptcy system faster than those filed in other courts. Other studies suggested that cases filed in the SDNY and Delaware had a higher refiling rate than cases filed in other courts and that cases filed in a venue outside the company’s headquarters, such as the SDNY or Delaware, cost more than cases filed in home jurisdictions. In contrast, one academic expert argued that some cases in Delaware did not have a higher refiling rate than other courts with a similar caseload, and more recent work by another academic suggests that cases in the SDNY and Delaware do not cost more. Additionally, a 2015 study found that having a case assigned to an experienced judge is a key factor in the success of a bankruptcy filing and that there is a high correlation between judge experience and jurisdiction, with highly experienced judges presiding over most SDNY and Delaware cases. However, as one expert we interviewed noted, because fewer cases are filed in venues other than the SDNY and Delaware, it is difficult for judges outside of those districts to gain experience with large Chapter 11 cases. We provided a draft of this report to the Executive Office for U.S. Trustees (EOUST) and the Administrative Office of the U.S. Courts (AOUSC) on August 21, 2015, for review and comment. In its written comments, reproduced in appendix IV, the EOUST generally agreed with our findings. We also received technical comments from EOUST and AOUSC, which we incorporated as appropriate. We are sending copies of this report to the Senate Committee on the Judiciary, the Director of the Executive Office for U.S. Trustees, the Director of the Administrative Office of the U.S. Courts, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report addresses the following questions: 1. To what extent have bankruptcy attorneys observed the 2013 guidelines in fee applications, and what are the opinions of bankruptcy stakeholders, including attorneys, judges, and U.S. Trustee Program (USTP) officials, regarding the guidelines’ key provisions and their effects? 2. What do bankruptcy attorneys, judges, and available research identify as factors that contribute to venue selection and the effects, if any, of venue selection in large Chapter 11 cases? To obtain background information and answer both questions, we reviewed the Bankruptcy Code and relevant bankruptcy filings related to professional fees in Chapter 11 bankruptcy cases. We also reviewed the 1996 USTP guidelines and the 2013 USTP guidelines on the compensation of attorneys in large Chapter 11 cases, or those with assets and liabilities each of $50 million or more. 
We analyzed USTP data related to Chapter 11 cases filed from October 2009 through March 2015 to identify the number of large cases filed during this time period in each USTP office. We selected this time period because it enabled us to identify case activity in the years both before and after the 2013 guidelines went into effect. According to USTP data, 765 cases with assets and liabilities each of $50 million or more were filed from October 2009 through March 2015. Of these cases, 94 were filed after the 2013 guidelines went into effect (guidelines cases). To assess the reliability of these data, we interviewed USTP officials responsible for collecting and reviewing the information and cross-checked the names of the guidelines cases against lists of the 20 largest cases filed in 2014 as identified by the bankruptcy research website New Generations (BankruptcyData.com). We determined that the data were sufficiently reliable for our purposes. As discussed in detail later in this appendix, we used the case information to inform our selection of a nongeneralizable sample of bankruptcy stakeholders for semistructured interviews. While the views expressed in these interviews do not represent those of all bankruptcy stakeholders, they provide valuable insights from stakeholders who have experience with large Chapter 11 bankruptcy cases and the 2013 USTP guidelines. We also conducted interviews with Executive Office for U.S. Trustees (EOUST) officials, U.S. Trustees (UST), academics, and industry stakeholders. Finally, we reviewed relevant academic literature on professional fees and venue selection in Chapter 11 bankruptcy cases. Because fee reduction is not a stated goal of the 2013 guidelines, we did not attempt to determine whether or not the guidelines have led to an actual reduction in professional fees awarded in large Chapter 11 cases. To further address question 1, we reviewed data from the USTP’s Significant Accomplishments Reporting System (SARS) on objection and inquiry activities related to professional fees in guidelines cases filed from November 2013 through March 2015. We analyzed the objections and inquiry data, including narrative descriptions, to identify USTP actions related to the 2013 guidelines provisions. To assess the reliability of these data, we interviewed officials responsible for entering and maintaining the data and reviewed internal documentation and guidance associated with data entry and internal review processes. We determined that the data were sufficiently reliable for our purposes. We also reviewed information collected by the EOUST on whether the provisions of the 2013 guidelines were observed in cases filed from November 2013 through March 2015 to determine the extent to which attorneys filing fee applications included information related to the 2013 guidelines’ provisions. To determine whether selected local court rules incorporated the key provisions of the 2013 guidelines, we reviewed local court rules and guidelines on fee applications and the compensation of professionals in bankruptcy cases for the 15 bankruptcy courts in our scope. These courts were selected because they had 5 or more large Chapter 11 cases filed from fiscal year 2010 through fiscal year 2014. Additional information regarding the jurisdictions in the scope of this report is provided later in this appendix. 
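The case-selection step described above reduces to a simple filter on case-level data: retain cases filed from October 2009 through March 2015 in which assets and liabilities are each $50 million or more, and then flag those filed on or after the guidelines' November 2013 effective date as guidelines cases. The sketch below, in Python, illustrates that logic under assumed field names (filing_date, assets, liabilities); it is not a description of the USTP's actual data systems, and the toy records are fabricated for illustration only.

# Minimal sketch of the case-selection logic described above.
# Field names and toy records are illustrative assumptions, not USTP's data layout.
import pandas as pd

THRESHOLD = 50_000_000                             # assets and liabilities each of $50 million or more
GUIDELINES_EFFECTIVE = pd.Timestamp("2013-11-01")  # 2013 guidelines effective date

def select_cases(cases: pd.DataFrame):
    """Return (large_cases, guidelines_cases) for filings from October 2009 through March 2015."""
    cases = cases.assign(filing_date=pd.to_datetime(cases["filing_date"]))
    in_window = cases["filing_date"].between(pd.Timestamp("2009-10-01"), pd.Timestamp("2015-03-31"))
    is_large = (cases["assets"] >= THRESHOLD) & (cases["liabilities"] >= THRESHOLD)
    large_cases = cases[in_window & is_large]
    guidelines_cases = large_cases[large_cases["filing_date"] >= GUIDELINES_EFFECTIVE]
    return large_cases, guidelines_cases

toy = pd.DataFrame({
    "case_name":   ["A Corp", "B Corp", "C Corp"],
    "filing_date": ["2012-05-01", "2014-02-15", "2014-06-30"],
    "assets":      [60_000_000, 75_000_000, 40_000_000],
    "liabilities": [55_000_000, 90_000_000, 45_000_000],
})
large, guidelines = select_cases(toy)
print(len(large), len(guidelines))   # 2 large cases, 1 of which is a guidelines case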
To further address question 2, we reviewed relevant academic literature on the factors that contribute to venue selection and the effects of the concentration of cases in the Southern District of New York (SDNY) and the District of Delaware (Delaware). We also interviewed six academic experts in the Chapter 11 bankruptcy field to better understand their research findings. We conducted a literature search of various databases, such as ProQuest and Academic OneFile, and asked the academic experts we interviewed to recommend additional studies. From these sources, we identified 10 studies published between 1997 and 2015 that were relevant to our question on venue selection. We reviewed the methodologies of these studies to ensure they were sound and determined they were sufficiently reliable to identify factors related to venue selection and the effects of case concentration. To identify the basis for venue selection companies used when filing cases, we reviewed bankruptcy filing documents for each of the 94 guidelines cases filed from November 2013 through March 2015. We also reviewed the first day declarations for these cases to identify each company’s place of incorporation and headquarters location. In cases where we were not able to identify this information, we reviewed Securities and Exchange Commission filings or filings with state secretary of state offices, as necessary. To obtain perspectives from bankruptcy stakeholders on the 2013 guidelines and issues related to venue selection, we conducted 57 semistructured interviews with a nongeneralizable sample of 18 Assistant U.S. Trustees (AUST), 25 U.S. Bankruptcy Court judges, and bankruptcy attorneys from 14 law firms. Our process for selecting each group of stakeholders in our sample is discussed below: AUSTs: We chose to interview AUSTs because they are responsible for the day-to-day oversight of federal bankruptcy cases and for reviewing fee applications. There are 21 UST regions with 93 field offices. Each field office is managed by an AUST. To determine the scope from which to select our sample of AUSTs for interviews, we reviewed USTP data on large Chapter 11 cases (cases with assets and liabilities each of $50 million or more) filed from fiscal year 2010 through fiscal year 2014. From this, we identified 18 USTP offices that had a total of five or more large cases during this time period. Of these offices, 12 offices had been assigned a guidelines case as of October 2014, while 6 offices had not. We interviewed 18 AUSTs assigned to each of the 18 offices (see table 2). At the time our interviews were conducted, 13 of the 18 AUSTs had participated in at least one guidelines case. Judges: Bankruptcy judges have the final authority to award professional fees and to determine whether or not they are reasonable and necessary, under the Bankruptcy Code. To select our sample of bankruptcy judges to interview, we relied on the same selection criteria we used to select AUSTs. We matched each UST office/city with 5 or more large Chapter 11 cases from fiscal year 2010 through fiscal year 2014 to the corresponding judicial district for the U.S. Bankruptcy Courts. We identified 15 judicial districts covering the 18 cities in our AUST sample. To select individual judges for our interviews, we contacted the Chief Judge of each of the 15 bankruptcy courts and asked him or her to identify judges in those courts with experience in our topic areas who were available to be interviewed. 
To ensure that we spoke with judges with a range of experience in guidelines cases, we initially selected two judges in each of the courts with 15 or more large Chapter 11 cases and one judge in each of the remaining courts (see table 2). We interviewed a total of 25 judges. Of these, 11 judges presided over one or more cases subject to the guidelines, while 14 judges had not yet been involved with a guidelines case. With the exception of one judge who had been appointed to the bench in 2013, all judges interviewed had presided over at least one case between October 2009 and October 2013 that met the guidelines thresholds of assets and liabilities each of $50 million or more. Bankruptcy attorneys: To select attorneys to interview, we reviewed the guidelines case data provided by the USTP for all cases subject to the 2013 guidelines in fiscal year 2014. We identified guidelines cases filed in bankruptcy courts in 12 of the 18 cities listed in table 2 and reviewed the related bankruptcy filing petitions to identify law firms that represented the debtor in these cases. We selected cases to provide variation in geographic location and in case size. To ensure that we included attorneys from firms with substantial experience with large Chapter 11 cases, we cross-checked the list of firms with those identified as the top bankruptcy firms in the United States by two bankruptcy research websites, the University of California Los Angeles' (UCLA-LoPucki) Bankruptcy Research Database and New Generations (BankruptcyData.com). Firms were not excluded if they did not appear on these lists, but we added several major firms that did not appear on our original list. See table 3 for a list of the 14 firms from which we interviewed attorneys. Each of the firms represented companies in at least one 2013 guidelines case. In 4 of the 14 interviews we conducted, more than one attorney participated in the interview. However, because the opinions offered by attorneys in each of these individual interviews did not conflict with one another, we counted each interview as one and refer to this group of stakeholders as 14 attorneys throughout this report. We asked all three groups about their general opinions of the 2013 guidelines and their opinions regarding the effects or potential effects, if any, of the 2013 guidelines on efficiency, transparency, or the professional fees awarded in Chapter 11 cases. We focused on these three aspects because two of the goals of the 2013 guidelines are to increase the transparency and efficiency of the fee review process, and, as discussed earlier in this report, EOUST officials reported that concerns about professional fees in large cases such as Lehman Brothers provided the impetus for developing the new guidelines. While we did ask stakeholders to identify any provisions they believed were likely to have an effect or to have no effect, we did not specifically ask stakeholders about their opinions on each provision of the 2013 guidelines. We also asked all three groups about the extent to which the provisions of the guidelines were observed by attorneys in fee applications submitted for guidelines cases. In addition, we asked judges and attorneys for their opinions on the factors that contribute to venue selection and the effects, if any, of the concentration of cases in the SDNY and Delaware. Because the AUSTs do not have a role in venue selection, we did not ask them questions related to venue selection.
We asked attorneys and judges for their opinions on specific venue selection factors. For example, we asked attorneys and judges about the degree to which the following factors contribute to venue selection: (1) judge experience, (2) court administrative capacity, (3) prior court rulings, and (4) perceptions of court attitudes on professional fees. Because of the semistructured nature of the interviews, stakeholders responded to these factors and also independently offered additional factors. Additionally, responses may total more than 100 percent because stakeholders offered both positive and negative opinions in response to certain questions. The semistructured interviews were conducted by telephone from February 2015 through May 2015. We then performed a qualitative content analysis of these interviews to identify common themes and the frequency with which certain issues were raised. To ensure intercoder reliability, three analysts jointly developed a coding structure that was then used to independently code the interviews. This process was reviewed by a GAO methodologist, and all coding was reviewed by another analyst. Any discrepancies were discussed and resolved jointly by the analysts responsible for coding the interviews. We conducted this performance audit from November 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From November 2013 through March 2015, 94 Chapter 11 bankruptcy cases met the U.S. Trustee Program's 2013 guidelines' thresholds of assets and liabilities each of $50 million or more (guidelines cases). A company may choose the court, or venue, in which to file its case on the basis of where the company is domiciled, which has been interpreted as the company's place of incorporation, or where it maintains its residence or principal place of business (headquarters) or assets. For example, one guidelines case involves a telecommunications holding company that is headquartered in Virginia and maintains its principal assets in three New York financial institutions. A company may also file in a court where an affiliate, such as a franchise or dealership related to the parent company, already has a Chapter 11 bankruptcy case pending (known as the affiliate filing rule). We reviewed 91 of the 94 guidelines cases and found that 63 percent (57 of 91) were filed in a venue that differed from their headquarters location (see table 4 below). To identify the provisions companies used as the basis for selecting the venue in which to file their case, we reviewed the initial bankruptcy filing petitions for each of the 91 cases included in our analysis. We also reviewed first day declarations, and, as necessary, filings with the Securities and Exchange Commission or state secretary of state offices to identify the place of incorporation and headquarters location for each guidelines case. As shown in table 4, 57 of the 91 cases filed through March 2015 were filed in venues outside the company's stated headquarters location.
Of the options available to them under the Bankruptcy Code, companies in 31 of the 57 cases relied on place of incorporation as the primary basis to select their filing location (see fig. 6). All but 2 of these cases were filed in Delaware. Additionally, 8 relied on the affiliate filing rule, and 3 used principal place of assets as the basis for venue selection. In 12 cases filed outside of the headquarters location, data provided by the companies indicated that both place of incorporation and the affiliate filing rule were the bases for venue selection. As shown in table 4, 34 of the 91 guidelines cases filed through March 2015 were filed in the same venue as the company's stated headquarters location. Our analysis, as shown in figure 7, found that companies in 23 of the 34 cases relied on principal place of business or residence, and another 9 relied on both principal place of business or residence and the affiliate filing rule as the reason for selecting their headquarters venue. In addition to the contact named above, Adam Hoffman (Assistant Director), Bethany Benitez, Michele Fejfar, Elizabeth Kowalewski, Monica Savoy, and Sarah Turpin made significant contributions to this report. Also contributing to this report were Dominick Dale, Eric Hauswirth, Jean Orland, Michelle Su, and Wade Tanner.
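The appendix III analysis above can be summarized as two steps: compare each company's filing district with the district corresponding to its stated headquarters, and tally the statutory bases cited for venue among cases filed outside the headquarters district. The sketch below illustrates those steps in Python; the record fields, district names, and sample cases are hypothetical and are not drawn from the guidelines cases we reviewed.

# Illustrative sketch of the venue comparison and basis tally described in appendix III.
# Field names and sample records are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Case:
    filing_district: str        # court in which the petition was filed
    headquarters_district: str  # district corresponding to the company's headquarters
    venue_basis: tuple          # bases cited: "incorporation", "business", "assets", "affiliate"

def tally_venue(cases):
    """Return the share of cases filed outside the headquarters district and a count of the bases cited."""
    outside_hq = [c for c in cases if c.filing_district != c.headquarters_district]
    basis_counts = Counter(tuple(sorted(c.venue_basis)) for c in outside_hq)
    return len(outside_hq) / len(cases), basis_counts

sample = [
    Case("District of Delaware", "Eastern District of Virginia", ("incorporation",)),
    Case("Southern District of New York", "Southern District of New York", ("business",)),
    Case("District of Delaware", "Northern District of Texas", ("incorporation", "affiliate")),
]
share_outside, bases = tally_venue(sample)
print(f"{share_outside:.0%} filed outside the headquarters district")  # 67% in this toy sample
print(bases)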
Since 2010, there have been at least 765 Chapter 11 bankruptcy filings by large companies. The associated fees for bankruptcy professionals, including attorneys, can run into the hundreds of millions of dollars. The size of these fees has raised questions about whether professionals have charged a premium for large bankruptcies and used the venue selection process to file in courts where they believed they would receive higher fees. The USTP, a Department of Justice component, is responsible for, among other things, reviewing whether fees requested by professionals in bankruptcy cases are reasonable and necessary in accordance with the Bankruptcy Code. In 2013, the USTP issued new guidelines governing its review of attorney fee applications in large Chapter 11 cases. GAO was asked to review the USTP's 2013 guidelines. This report examines (1) the extent to which fee applications observed the 2013 guidelines and bankruptcy stakeholders' opinions regarding the guidelines' key provisions and their effects, and (2) what bankruptcy stakeholders and available research identify as contributing factors and effects of venue selection in large Chapter 11 cases. GAO conducted 57 nongeneralizable interviews with bankruptcy judges, attorneys, and AUSTs in 15 bankruptcy court jurisdictions responsible for large Chapter 11 cases. GAO also reviewed USTP data and court documents on cases subject to the 2013 guidelines, and relevant academic literature on professional fees and venue selection. The USTP generally agreed with GAO's findings. GAO's analysis of U.S. Trustee Program (USTP) data and interviews with bankruptcy stakeholders including Assistant U.S. Trustees (AUST), selected bankruptcy judges, and attorneys indicate that attorneys' fee applications for cases subject to the USTP's 2013 fee guidelines (cases involving assets and liabilities each of $50 million or more) have generally contained the information requested by the guidelines. This information is intended to assist the courts in determining whether requested fees are reasonable and necessary. Specifically, in the data GAO reviewed, the USTP identified no issues in submitted fee applications in 47 of the 94 cases filed since the guidelines went into effect in November 2013. Attorneys resolved almost all of the issues in the other 47 cases by providing an explanation or additional information. Bankruptcy stakeholders had mixed perspectives on the overall value of the guidelines and on their potential effect on the efficiency and transparency of the Chapter 11 bankruptcy process, or the fees awarded. Similarly, opinions regarding the effect of specific provisions of the 2013 guidelines also varied by group. For example, 15 of 18 AUSTs said the provision requesting that attorneys provide budgets was likely to have a positive effect on the fee review process, while 10 of 14 attorneys said it was unlikely to have an effect. Stakeholders with a positive view said the budgeting provision encourages early communication in a case, while those with a negative view said that the unpredictability of bankruptcy cases limits the value of a budget. Bankruptcy attorneys and judges GAO interviewed and academic research identify several factors that contribute to venue selection—the process of choosing where to file.
Companies filing for bankruptcy have several options available to them when determining the venue, or court, in which to file their case, including their place of incorporation, principal place of business or assets, or where an affiliate has filed a Chapter 11 case. The most frequently cited factors—prior court rulings, the preferences of lenders, and judge experience—all contribute to overall predictability in a case and can provide some insights into what to expect from a court as a case proceeds through the bankruptcy process. For example, knowing a judge's level of experience with large cases and how a court has ruled on certain matters can help an attorney advise a client about how a court is likely to respond to issues in a specific case. Eight of the 39 attorneys and judges GAO interviewed cited perceived court attitudes on professional fees as a significant factor in venue selection. Approximately 61 percent of large Chapter 11 bankruptcy cases filed since October 2009 were filed in two jurisdictions–the Southern District of New York (SDNY) and the District of Delaware (Delaware). Bankruptcy attorneys and judges and academic research identified both positive and negative effects of the concentration of cases in these two jurisdictions. The positive effect most commonly cited by attorneys and judges was the significant large case experience developed by judges in the SDNY and Delaware. In contrast, the negative effects most commonly cited by attorneys were the difficulty local bankruptcy firms face in maintaining a bankruptcy practice outside of the SDNY and Delaware and the lack of opportunity for courts outside of these jurisdictions to develop precedent and expertise.
You are an expert at summarizing long articles. Proceed to summarize the following text: Grants to states to develop community health centers were first authorized by the federal government in the mid-1960s. By the early 1970s, about 100 health centers had been established by the Office of Economic Opportunity (OEO). When OEO was phased out in the early 1970s, the centers supported under this authority were transferred to the Public Health Service (PHS). Since 1989, close to $3 billion has been awarded in project grants to health centers. Project grants are authorized under Sections 329 and 330 of the Public Health Service Act and are to be used by health centers to provide primary health care and related services to medically underserved communities. The Bureau of Primary Health Care (BPHC) sets policy and administers the Community and Migrant Health Center program. BPHC is part of the Health Resources and Services Administration (HRSA) under PHS. Ten regional PHS offices assist BPHC with managing the program. The regional offices are primarily responsible for monitoring the use of program funds by grantees. In 1994, the Community and Migrant Health Center program offered comprehensive primary health care services to about 7.1 million people through 1,615 health care delivery sites in medically underserved areas. Health centers are expected to target their services to those with the greatest risk of going without needed medical care. About 44 percent of health center patients are children under 19 years old and 30 percent are women in their childbearing years. About 60 percent of health center patients live in economically depressed areas and nearly 63 percent have incomes below the federal poverty level. A central feature of health centers is their governance structure. Local community boards govern health centers and are expected to tailor health center programs to the community they serve. In addition to comprehensive primary care services and case management, centers are expected to offer enabling services. These services are determined from assessments of community needs and are intended to help individuals overcome barriers that could prevent them from getting needed services. Health centers are supported by various funding streams. Community health center project grants and Medicaid provide the two largest components of health center revenues, respectively, 35 and 34 percent in 1994. Health centers may also receive other federal, state, and local grants to support their activities. While health centers are required to offer services to all individuals regardless of their ability to pay, centers must seek reimbursement from those who can pay as well as from third-party payers such as Medicaid, Medicare, and private insurance. Patient fees are set using a sliding fee schedule that is tied to federal poverty levels. Patients with incomes below a certain percentage of the federal poverty level receive free care or may pay some portion—a discounted fee—while those in the highest income levels pay fees that cover the full service charge. The difference between service charges and the sliding fees collected is a measure of the amount of low-income care subsidized by the center. Two major developments in recent years have affected the financial status and, therefore, the viability of health centers. The first is the authorization of a cost-based reimbursement system for health centers and the second is centers' participation in prepaid managed care.
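Because patient fees are set on a sliding scale tied to the federal poverty level, the subsidy a center absorbs on a visit is the full service charge minus the discounted fee for the patient's income bracket. The sketch below illustrates that arithmetic in Python; the bracket cutoffs and discount percentages are hypothetical, since actual schedules are set by each center.

# Hypothetical sliding fee schedule tied to income as a percentage of the federal
# poverty level (FPL). Cutoffs and discounts are illustrative only.
SLIDING_SCHEDULE = [
    (100, 0.00),   # at or below 100% of FPL: free care
    (150, 0.25),   # 101-150% of FPL: patient pays 25% of charges
    (200, 0.50),   # 151-200% of FPL: patient pays 50% of charges
    (250, 0.75),   # 201-250% of FPL: patient pays 75% of charges
]

def sliding_fee(full_charge, income_pct_fpl):
    """Return (patient_fee, center_subsidy) for a single visit."""
    for cutoff, share in SLIDING_SCHEDULE:
        if income_pct_fpl <= cutoff:
            fee = round(full_charge * share, 2)
            return fee, round(full_charge - fee, 2)
    return full_charge, 0.0   # above the top bracket: patient pays the full service charge

# An $80 visit for a patient at 130 percent of FPL: the patient pays $20,
# and the center subsidizes the remaining $60.
print(sliding_fee(80.00, 130))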
In the late 1980s, the Congress recognized that neither Medicare nor Medicaid paid the full cost for services provided to program beneficiaries at community health centers. This was due to low reimbursement rates and because some enabling services provided by health centers were not considered reimbursable benefits by Medicaid. As a result, health centers had fewer financial resources to subsidize care for patients who could not pay and for conducting other program activities. In recognition of this problem, the Congress—as part of the Omnibus Budget Reconciliation Act of 1989 (OBRA)—created a new Medicaid and Medicare cost-based reimbursement system for health centers. Under this system, both programs were required to reimburse health centers for the reasonable cost of medical and enabling services provided to their beneficiaries. The second major development has been the move by states to managed care delivery systems for their Medicaid programs to address rising costs and access problems. Managed care in Medicaid is not a single health care delivery plan but a continuum of models that share a common approach. At one end of the continuum are prepaid or capitated models that pay health organizations a per capita amount each month to provide or arrange for all covered services. At the other end are primary care case management (PCCM) models, which are similar to traditional fee-for-service arrangements except that providers receive a per capita management fee to coordinate a patient's care in addition to reimbursement for the services they provide. Both systems require that beneficiaries access care through a primary care provider. Between June 1993 and June 1994, the total number of Medicaid beneficiaries in managed care programs across the country increased 57 percent, from almost 5 million to nearly 8 million, with most of the growth occurring in fully capitated managed care programs. Health centers have less assurance that capitated reimbursement will cover their costs than they do under traditional Medicaid fee-for-service systems. This becomes a concern when health centers lose their cost-based reimbursement under Medicaid prepaid managed care programs. Health plans that contract with centers reimburse them on the basis of a negotiated per capita rate for a set of services. This capitation rate must be sufficient to cover the cost of the contracted services for all Medicaid health plan members enrolled at the health center. Incorrect assumptions about the cost of individual services or the frequency with which they are used may result in an inadequate capitation rate. If the rate is too low, it can lead to financial losses for the centers. States establishing managed care programs that require beneficiaries to enroll in a Medicaid health plan must obtain one of two types of waivers from the Health Care Financing Administration (HCFA). Section 1115 of the Social Security Act offers authority to waive a broad range of Medicaid requirements. Eight states have approved statewide 1115 waivers, and 12 others have waiver proposals pending with HCFA. A second type of waiver is allowed by section 1915(b) of the Social Security Act. These waivers allow states to carry out competitive programs by waiving specific program requirements, such as a beneficiary's choice of provider. Currently, 37 states and the District of Columbia have 1915(b) waivers and 4 other states have pending waivers.
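As noted above, whether a negotiated capitation rate is adequate depends on assumptions about how often enrollees use services and what each service costs. A simple break-even check under such assumptions is sketched below in Python; the utilization, cost, and rate figures are hypothetical and are not drawn from the centers we reviewed.

# Illustrative break-even check for a capitated, per member per month (PMPM) rate.
# All figures are hypothetical.
def breakeven_pmpm(annual_visits_per_member, cost_per_visit, admin_pmpm):
    """Monthly capitation rate needed to cover expected service and administrative costs."""
    service_pmpm = annual_visits_per_member * cost_per_visit / 12
    return service_pmpm + admin_pmpm

# Assume enrollees average 3.6 primary care visits per year at $85 per visit,
# plus $4 per member per month in administrative cost.
required = breakeven_pmpm(3.6, 85.00, 4.00)   # $29.50 per member per month
offered = 24.00                               # hypothetical negotiated rate

print(f"required PMPM: ${required:.2f}, offered PMPM: ${offered:.2f}")
# Annual shortfall for a panel of 1,000 enrollees if the offered rate is too low:
shortfall = max(0.0, required - offered) * 1_000 * 12
print(f"annual shortfall per 1,000 enrollees: ${shortfall:,.0f}")   # $66,000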
The loss of cost-based reimbursement is a major concern for health centers entering into prepaid capitated agreements. These health centers are concerned that (1) the per capita monthly rate may not adequately cover the costs of providing services to the most vulnerable populations and (2) the lack of reimbursement by health plans for some medical, enabling, or other health services may hinder their ability to continue to provide them. Changes in the health care delivery environment are affecting community health centers as more and more health centers participate in prepaid managed care arrangements. In our review of 10 health centers, we found that prepaid reimbursement for services provided to Medicaid patients did not diminish the centers' ability to provide access to care for their patients. In fact, health centers have improved their overall financial positions to some degree while maintaining or expanding medical and enabling services. This is due to revenue increases from a variety of sources, such as federal funding other than health center grants. Earnings from prepaid managed care were modest and did not contribute significantly to the support of enabling services and subsidized care. Some center officials, however, credited the predictability of monthly capitation payments with assisting them in financial planning. By another measure of financial vulnerability—cash balances—all 10 centers had limited reserves. For centers with more than 15 percent of their total revenue from prepaid managed care, low cash balances could be a problem if they encounter significant unexpected expenses resulting from inadequate capitation rates or assumption of risk for nonprimary care services. In response to the changing health care environment, the number of health centers accepting capitated payments for their Medicaid patients grew from 92 health centers, with 280,000 prepaid patients in 1991, to 115 centers with nearly 435,000 prepaid patients in 1993. Health centers often feel pressure to enter into managed care arrangements when states implement such programs on a mandatory or voluntary basis statewide. Five of the 10 health centers we visited operate in areas where Medicaid beneficiaries are mandated to participate in prepaid managed care plans under Medicaid waivers. Increasingly, health centers also choose to participate in areas with voluntary programs. Whether mandatory or not, health center participation is driven by the growing importance of the Medicaid program to health center revenues. In 1993, Medicaid revenues accounted for 17 percent to over 50 percent of health center revenues at the centers we visited. In addition, between 1989 and 1993, 6 of the 10 health centers experienced an increase in the ratio of Medicaid revenues to total revenues. At the same time, 9 health centers experienced a decrease in the share of total revenues represented by federal community health center project grants (see fig. 1). Except for Sunshine Health Center and Lynn Community Health Center, which received, respectively, 22 and 17 percent of their revenues in 1993 from other federal grants, contracts, or both, the remaining health centers received less than 7 percent of their revenues directly from other federal grants. Some of these health centers have also increased the percentage of their revenues from other income sources, such as state and local grants or other federal grants.
The degree to which health centers were involved in prepaid managed care varied considerably among the 10 health centers. In 1993, prepaid managed care accounted for as little as 3 percent and as much as 52 percent of total health center revenues (see fig. 2). Differences also existed in the share of total Medicaid revenues that prepaid managed care represented, ranging from about 12 to 100 percent among the 10 centers. Typically, health centers participate in prepaid managed care through health plans serving Medicaid beneficiaries. The health centers contract with one or more health plans to provide a subset of health plan services. Reimbursement for primary care services at the 10 health centers we reviewed was paid as a monthly capitated rate. The capitation rates for primary care services ranged from $12 per member per month at one health center to $38 per member per month at another. Rates varied in large part because of the different services covered under health plan contracts. For example, a center receiving a higher rate may provide additional services, such as X rays and immunizations. If a center with a lower rate provides these services to plan enrollees, it could receive additional reimbursement on a fee-for-service basis. Some centers also told us that they had received a higher rate because they had negotiated for one with the health plan. In addition to agreeing to provide primary care services, four health centers have assumed financial responsibility for referrals, hospitalization, or both in return for a higher capitation rate. In such arrangements, the managed care plan withholds a portion of the health center's primary care capitation payment to cover referral or hospitalization costs that are higher than expected. In some cases, if the funds withheld are insufficient to cover the losses, the amount withheld in the future from health center capitation payments can be increased. Despite the concern that capitation would make it difficult for health centers to maintain their service levels, we found that the 10 centers continue to offer many services targeted to the needs of their communities and that they have maintained the intensity and frequency of the services provided. In addition to medical care, many of the health centers offer transportation and translation services as well as health education, acquired immunodeficiency syndrome (AIDS) case management, and early intervention services for children of substance abusers. These enabling services are very important in reducing the barriers to health care as well as helping to address problems that can lead to the need for further medical care.
In addition, these services are available to all health center patients including those whose benefit package may not cover the cost of these services. (See fig. 3 for a list of the enabling services provided at each health center.) Indicators of a health center's ability to increase access to the community it serves include growth in the number of patients served and in the amount of funds spent on subsidizing low-income care. All the health centers increased access to medical care. The number of medical patients served by the health centers increased from 131,000 to almost 169,000 from 1989 to 1993, with individual center increases ranging from 4 to 164 percent (see fig. 4). In addition, the number of patient visits or encounters increased from 596,063 to 828,848 between 1989 and 1993 at the 10 health centers. Between 1989 and 1993, 7 of the 10 health centers increased their spending on subsidized low-income care; that is, the amount of spending for free care and the remaining portion of care that uninsured low-income patients are unable to cover (see fig. 5). We examined the growth of spending on enabling services in each health center, another indicator of a health center's ability to increase access to care. We found that all 10 of the health centers had increased spending on these services between 1989 and 1993 (see fig. 6). Further, health center officials told us that enabling services were expanded or enhanced in response to growing community needs. In addition, officials at all 10 centers reported that the intensity or frequency of services typically provided at the center had not been reduced with prepaid managed care. While the amount of spending on enabling services and subsidized low-income care generally increased among all health centers, these amounts varied considerably from center to center as did the distribution of spending between enabling services and subsidized care of low-income patients. In most cases the sum of spending on enabling services and subsidized care exceeded revenues received from the Community and Migrant Health Center program grant (see fig. 7). With more spending on enabling services, 9 of the 10 health centers increased the number of full-time-equivalent staff involved in providing services other than medical or dental. These included health education, social services, and case management. Staff providing these services included drivers for transportation services, outreach workers, dietary technicians, and home health aides (see fig. 8). Center officials told us that community needs largely influenced patterns of spending on enabling services and on subsidized low-income care. For example, the health centers that we visited in densely populated areas spent more money on enabling services, which include social case workers, than the other centers. The health centers in less populated areas tended to subsidize low-income care to a greater extent. Officials also reported that changing local community conditions—such as an increase in drug abuse or AIDS—could affect the combination of enabling services and subsidized care. While maintaining or expanding their medical and enabling services, all the health centers that we studied reported improved financial positions, as indicated by increases in their year-end fund balances; that is, the excess of a center's assets over its liabilities. One contributing factor is an increase in total revenues.
Among the 10 health centers, increases in total revenues ranged from 35 percent to 142 percent between 1989 and 1993. Three of the centers saw revenue increases of over 100 percent during this period. Improvement in fund balances results when revenues from a variety of sources are greater than a center's expenses. Five centers had increases in grants from other federal and state sources. For example, one health center received $556,000 from a Ryan White AIDS grant in 1993. All health centers, however, had increases in their Medicaid revenue between 1989 and 1993. Increases ranged from 12 percent at one center to over 1,000 percent at another. Medicaid prepaid managed care income also contributed modestly to fund balance increases. Prepaid managed care earnings were modest at best and played a small role in supporting enabling services and subsidized care. In 1993, three centers reported losses of up to $124,000 from prepaid managed care. Other funds offset these losses. During the same year, six centers reported excess prepaid managed care revenues of up to $100,000 after paying the cost of care for medical services and administrative expenses. One center reported no excess revenues from prepaid managed care. Officials at nine of the health centers told us that returns from managed care had not contributed significantly to center support of enabling services and subsidized care. At the tenth center, however, the director told us that growth in managed care revenues had allowed the center to increase its spending on subsidized care. Between 1989 and 1993, the center's health center grant funding remained level, while the amount of spending on subsidized care grew from nearly $1.6 million to $2.5 million; revenues from prepaid managed care contributed to this spending on subsidized care. At the same time, the director noted that the federal health center grant was indispensable to the center's maintaining a steady level of funding for enabling services and subsidized care. Officials from three health centers told us that the predictability of monthly capitation reimbursements allowed them to better manage center finances. Although all the health centers have increased their year-end fund balances, some may be vulnerable to financial difficulties. While all 10 health centers had year-end fund balance increases, none of the centers had cash on hand to cover more than 60 days of operating expenses. Cash on hand ranged from fewer than 1 day of operating expenses at 2 centers to 31 days' worth at another. Three centers only had available cash to cover fewer than 10 days of operating expenses. Cash reserves are important because they represent liquid assets that can be used to pay for contractual obligations and unexpected expenses. Funds for unexpected expenses are especially critical for health centers with more than 15 percent of total revenues from prepaid managed care arrangements and those that have accepted financial responsibility for services other than primary care. For example, when centers take on risk for medical care and hospitalization but more patients than expected require costly treatment or extended hospitalization, losses could be substantial. We found that seven centers received more than 15 percent of their total revenue from prepaid managed care.
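The cash-on-hand measure used above is conventionally computed as available cash divided by average daily operating expenses. A minimal sketch of that calculation, with hypothetical figures, follows.

# Days cash on hand: available cash divided by average daily operating expenses.
# Figures are hypothetical, for illustration only.
def days_cash_on_hand(cash, annual_operating_expenses):
    return cash / (annual_operating_expenses / 365)

# A center with $250,000 in cash and $9.5 million in annual operating expenses
# holds roughly 9.6 days of operating expenses in cash.
print(round(days_cash_on_hand(250_000, 9_500_000), 1))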
The four centers that have assumed financial responsibility for specialty referrals, hospitalization, or both all had cash reserves of 31 or fewer days of operating expenses, thereby making them vulnerable to financial difficulties. Centers can also be financially vulnerable when capitation rates do not fully cover the cost of the care they provide. Centers are faced with either depleting their reserves or cutting back services. Several health center directors told us that their capitated reimbursements are adequate to cover the costs of medical services and some believed that their capitation rate roughly equaled what they would receive from cost-based reimbursement. In most cases, however, center directors could not provide us with data to substantiate their position. While the health centers we visited are now providing medical and enabling services to their communities, some initially faced several problems that are likely to confront other health centers as states expand Medicaid managed care. First, health centers must determine whether not participating in managed care arrangements will affect the number of patients served or revenues needed for financial viability. Centers that do participate may face financial problems if reimbursement is inadequate and they accept too much financial risk or lack managed care skills. Directors of most of the health centers we visited felt compelled to enter into agreements with Medicaid managed care plans to maintain their Medicaid patient population and revenues. The Medicaid population is an important component of the medically underserved population that health centers are intended to serve. Health centers that do not have agreements with Medicaid health plans can lose some or all of their Medicaid patients and revenues, jeopardizing their continued operation. Because Medicaid revenue is a large and growing part of most health centers’ funding, losing this funding could be catastrophic. In 1994, a health center in Washington state experienced severe financial difficulties when its relationship with the only local Medicaid health plan was discontinued. The structure of the health plan, which limits membership to individual physicians, made it impossible for the health center to contract directly with the plan. Rather, one physician employed by the center contracted with the plan. When this physician resigned from the center, its relationship with the plan ended. The center’s other physicians were not acceptable to the health plan because of concerns about the physicians’ admitting privileges at the local hospital and their ability to guarantee 24-hour coverage or because the physicians were not willing to contract with the plan. Because all Medicaid beneficiaries in the health center’s service area were enrolled in this health plan, the center lost 1,000 Medicaid patients when they were assigned to other health plan providers. As a result, the center abruptly lost one-third of its patients and 17 percent of its revenue over a 7-month period. The center’s director told us that without this revenue the center was not viable and eventually would have to close. The center reestablished its relations with this health plan when the physician returned and Medicaid patients are being reassigned. Also in 1994, health centers in another state, Tennessee, faced the loss of Medicaid revenues if they did not participate in the TennCare program. 
As a result, all the health centers in Tennessee participate in the TennCare program despite their loss of cost-based reimbursement. Health centers had no choice but to contract with the TennCare health plans, according to the director of the Tennessee Primary Care Association, an association of community health centers in Tennessee. Health centers felt compelled to participate because the Medicaid population is an important part of the health centers' target population. In addition, without the Medicaid revenue, health centers would not be able to continue to offer the range of services they typically provide. Some center officials believed that centers would have closed without this revenue. While the 10 health centers we studied expanded their support for enabling services between 1989 and 1993, the early experience of 3 of these centers with managed care was problematic. Each reported initial depletion of financial resources, and in one case a cutback in services occurred as well as a reorganization due to bankruptcy. Early center problems stemmed from inadequate capitation rates paid to health centers; assignment of more financial risk to health centers than they were capable of managing; and a lack of managed care knowledge, expertise, and systems. Low primary care capitation rates and assignment of financial risk for referral services contributed to financial difficulties at two Philadelphia health centers in 1987 and 1988, according to health center and BPHC officials. Because the capitation rate did not fully cover the centers' operating costs, the centers were forced to deplete their cash balances to continue providing services. Both centers reported that they could not negotiate higher rates or avoid accepting too much financial risk in part because the Medicaid beneficiaries were all assigned to one health maintenance organization. This left the health centers in a poor position to negotiate a higher capitation rate or different risk arrangements. Since that time, competing health plans have been added to the Medicaid managed care program. In addition, the health centers are more knowledgeable about managed care arrangements. They no longer accept risk for services that they do not provide and have negotiated more acceptable rates. After one of the Philadelphia centers gained experience in tracking managed care operations, it developed data in 1991 showing that the utilization patterns of its health plan enrollees justified a higher capitation rate. An Arizona health center also suffered financial difficulties once it entered into Arizona's Medicaid managed care program, established in 1982. According to the center's current director, capitation rates were inadequate to cover the costs of serving patients in Arizona's Medically Needy/Medically Indigent eligibility category. In the early 1980s, the center had accepted financial risk for all medical services, including referrals and hospitalizations for its enrollees. Further, the center did not have adequate information systems to manage the risk it had assumed or adequate capital to absorb losses. Within 4 years the center became insolvent and reorganized under chapter 11 of the Federal Bankruptcy Code. It was forced to cut back on its medical and enabling services as it reorganized through bankruptcy in 1986 after experiencing large managed care losses. The health center has completed its restructuring and is now a provider for several health plans.
In addition, the health center no longer accepts full financial risk for referrals or hospitalizations. The explosive growth in Medicaid managed care leaves many community health centers with little choice about participating in these new arrangements. However, health centers entering prepaid arrangements are faced with a series of new activities, each of which they must manage well to succeed. First, they must negotiate a contract that pays an adequate capitation rate and does not expose them to undue risk or otherwise hinder them. They must also perform the medical management functions of a prepaid system. In addition, health centers must monitor their financial positions under each managed care agreement, including any liability for referral and hospital services. They must also develop and maintain the information systems needed to support the above clinical and financial management activities. BPHC has strongly encouraged health centers to consider participating in managed care arrangements, while cautioning them of the dangers of accepting risk for services provided by others. Further, BPHC is funding a number of activities to help health centers become providers that can effectively operate in a managed care system. Recognizing that health centers require both specific and general knowledge of managed care, BPHC cooperates with the National Association of Community Health Centers to provide training and technical assistance to grantees. Several training sessions are available to BPHC grantees. Subjects include managed care basics, negotiating a managed care contract, medical management, and rate setting. In 1994, 48 sessions in 35 states were provided, reaching over 1,500 individuals. Technical assistance consists of intensive one-on-one consultations between managed care experts and health center officials. During 1994, 65 health centers requested and received one-on-one technical consultations. BPHC has also developed various publications for health centers to use as self-assessment tools. These publications offer guidance on aspects of managed care such as preparing for prepaid health services, negotiating with managed care plans, and assessing the market area and internal operations. Realizing that health centers lack experience in negotiating contracts with health plans, BPHC offers a contract-review service between centers and health plans. These contracts are typically reviewed by outside private-sector managed care specialists who provide written advice on specific sections that could be revised more favorably for health centers. In 1994, BPHC reviewed 45 contracts for approximately 30 health centers. In addition to activities targeted toward individual health centers, BPHC also assists centers in planning and initiating participation in managed care arrangements through the ISN, established in 1994. These one-time awards are to be used by health centers for planning and developing an integrated delivery system with other providers that will ensure access for the medically underserved. Approximately $6 million was awarded to 29 health centers in 1994. One of the health centers we visited in Florida is using an ISN award to develop a network of community health centers that can negotiate with managed care plans. In Washington state, a health center received an ISN award to help establish a statewide Medicaid managed care plan. 
As states move to prepaid managed care to control costs and improve access for their Medicaid populations, the number of participating health centers continues to grow. Medicaid prepaid managed care is not incompatible with health centers' mission of providing access to health care for medically underserved populations. However, health centers face substantial risks and challenges as they move into these arrangements. Such arrangements require new knowledge, skills, and information systems. Centers lacking this expertise face an uncertain future, and those in a vulnerable financial position are at even greater risk. Today's debate over possible changes in federal and state health programs—including Medicaid and other health grant programs that are important funding streams for health centers—along with the lack of available cash at all 10 centers, heightens the concern over the financial vulnerability of centers participating in prepaid managed care. If this funding source continues to grow as a percentage of total health center revenues, centers will face the challenge of building larger cash reserves without compromising medical and enabling services to the vulnerable populations that they serve. HRSA and BPHC officials reviewed a draft of this report and considered it a balanced presentation of the challenges facing community health centers involved in Medicaid prepaid managed care arrangements. We also incorporated their technical comments as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other congressional committees. Copies will be made available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-7119; Rose Marie Martinez, Assistant Director, at (202) 512-7103; or Paul Alcocer at (312) 220-7615. Other contributors to this report include Jean Chase, Nancy Donovan, and Karen Penler. Founded in 1979, Mountain Park Health Center (MPHC) was formerly known as Memorial Family Health Center and was part of Phoenix Memorial Hospital. In 1987, MPHC became a community-organized primary care center. The center operates in urban South Phoenix, described as the "most multicultural community in Arizona." Seventy-five percent of the center's patients are Hispanic and 18 percent are African American. AIDS and infant mortality are among the health problems in South Phoenix, where the infant mortality rate for African Americans is 17.3 per 1,000 live births. Seventy-eight percent of the center's patients are at or below the poverty level. Sixty-eight percent have Medicaid coverage and 14 percent are uninsured. Clinica Adelante's rural-based service area consists of a main site in Surprise, Arizona, and two other sites: one in Queen Creek and another in Gila Bend. Eighty-eight percent of Clinica Adelante's population is Hispanic. Thirty-nine percent of the center's patients are migrant and seasonal farmworkers. Major health problems in the population covered by the center include a lack of adequate prenatal care, inadequate postpartum visits and newborn checks in the perinatal population; infectious diseases, inadequate nutrition, and dental decay in the pediatric population; and diabetes, hypertension, and cardiovascular disease in the adult population. Twenty-nine percent of the center's patients have Medicaid coverage and 67 percent of them have no insurance coverage at all. Eighty-five percent are at or below the poverty level. 
Established 25 years ago, the El Rio health center consists of a main clinic and seven satellite clinics that provide medical and other services to the medically underserved in Tucson. Most patients reside on the south and west sides of Tucson, where the significant geographical barriers to health care access are the isolation and remoteness of these locations as well as poor public transportation. Other health care facilities can be a considerable distance from where most of the patients reside. In addition, language and cultural differences characterize the patients of the El Rio center. Almost one in seven households in the center's service area routinely uses a language other than English in the home. Other factors limiting access to services are proximity to the U.S. border with Mexico, a large undocumented population, and a local and transient homeless population. Hispanics make up a higher share of the El Rio service area's population than of the total population, 55 percent versus 23 percent. Twenty-two percent of center patients are white and 14 percent are American Indian. Seventy-eight percent are at 100 percent or below the poverty level. Forty-one percent of center patients have Medicaid coverage and 38 percent are uninsured. Since 1964, Sunshine Health Center, Inc., has provided comprehensive primary medical and dental services to the migrant and urban poor residing in Broward County, Florida. The Sunshine Health Center serves a patient population of migrant and seasonal farm workers; immigrants from various countries and territories, including Haiti, Jamaica, Puerto Rico, and Nicaragua; and African Americans and whites, most of whom are the poor and the working poor. Thirty-two percent of the center's patients are white, 30 percent are African American, and 20 percent are Hispanic. Located in a county that leads the United States in the increase in AIDS patients, the center serves a population with high rates of infant mortality and morbidity, sexually transmitted diseases, and chronic disorders such as hypertension and diabetes. Ninety-three percent of the patients of the center are at or below the poverty level. Thirty-eight percent are Medicaid patients and 58 percent have no insurance. From its 1967 start in a trailer, the Economic Opportunity Family Health Center (EOFHC) has grown into a main center, six satellite centers, and affiliated school outreach programs serving the north and northwest areas of Dade County. Dade County has a large and rapidly growing AIDS population, significant substance abuse problems, a large migratory farmworker population, and minority populations with extremely high incidence of tuberculosis, sexually transmitted diseases, and infectious diseases. The population served by EOFHC is 70 percent African American and 20 percent Hispanic. Sixty-six percent of center revenues come from the federal government. Spectrum began operations in 1967 to provide family planning and general health services to women. Located in the West Park section of Philadelphia, Pennsylvania, the center serves an area characterized by high infant mortality, low birthweight, teenage pregnancy, and the spread of sexually transmitted diseases including HIV infection. Ninety-nine percent of Spectrum's patients are African American and 90 percent of the center's patients are at or below the poverty level. Seventy-one percent of center patients have Medicaid coverage and 27 percent have no insurance. Greater Philadelphia Health Action, Inc. 
(GPHA) is targeted to provide health care to Philadelphia’s medically underserved population. GPHA operates five primary health care centers, a drug and alcohol counseling and treatment program, a child care program, and two comprehensive school-based clinics. Philadelphia’s health care problems include an infant mortality rate of 14.2 deaths per 1,000 live births; an 11.7-percent low-birth-weight rate; a high teen birth rate of 49 births per 1,000 females (up from 46 per 1,000 in 1988); increasing rates of substance abuse, especially among women; and increasing rates of HIV/AIDS. The vast majority of patients are African American (73 percent) and have incomes at or below 100 percent of the federal poverty level (88.5 percent). Seventy-three percent have Medicaid coverage and 22 percent are uninsured. The Lynn Community Health Center (LCHC) was organized in 1971 as a small storefront mental health center. It has grown into a comprehensive care facility that is the largest provider of outpatient primary care in Lynn, a city characterized by the center’s executive director as the most medically underserved area in Massachusetts. LCHC’s programs focus on people with the greatest barriers to care: the poor, minorities, new immigrants, non-English speaking people, teens, and the frail elderly. Sixty percent of the population served by the center do not consider English to be their first language. At present, Spanish and Russian are the most common languages spoken by the center’s patients. Over 30 percent of LCHC’s staff is bilingual or multilingual and can provide translation services in Spanish, Khmer, Vietnamese, Laotian, and Russian. Forty-five percent of center patients are white, 35 percent are Hispanic, and 11 percent are African American. Sixty-three percent are at or below the poverty level. Fifty-six percent have Medicaid coverage and 31 percent have no insurance. This center was founded in 1972 by a group of mothers living in Worcester’s largest housing project—the Great Brook Valley and Curtis Apartments. These women founded the center because they and their children lacked access to primary care. The center has grown from providing well-child care services to the residents of public housing projects to a comprehensive health center serving the surrounding neighborhood. Special populations requiring services include the perinatal population (in Worcester, rates in two areas—infant mortality and low-birth-weight infants—have been above the state average for the past decade) and the Spanish-speaking elderly population who are monolingual. In addition, the HIV/AIDS epidemic is growing in Worcester, particularly among the minority populations and among the estimated 4,000 injection drug users in the city. In addition, adolescents are exposed to high levels of stress, violence, and depression. The Hispanic community represents 76 percent of center patients. Ninety-five percent of those using the center are at or below the poverty level. Fifty-five percent are covered by Medicaid and 30 percent have no insurance. Roxbury Comprehensive Community Health Center (RoxComp), established in 1969 by a mother concerned about the lack of medical services in the Roxbury community, is the largest community health center serving the Roxbury and North Dorchester areas. Health status indicators for these communities are higher than the national average. For example, the infant mortality rate is twice the national average of 10.1 per 1,000 live births. 
The area served by the center also exceeds the national average in deaths from heart disease, cancer, stroke, pneumonia, influenza, cirrhosis, homicide, suicide, and injuries. Approximately 20 percent of reported AIDS cases in Boston come from this area. Substance abuse among patients 19 years old and younger and among pregnant women is a problem in the area. Residents served by the center are poor, with 91 percent at or below the poverty level. Eighty-eight percent of center patients are African American. Sixty-two percent have Medicaid coverage and 26 percent have no insurance. To examine how Medicaid prepaid managed care affected community health centers' ability to continue their mission of providing community-based health care to underserved populations, we first selected a nonrandom judgmental sample of states with a variety of Medicaid managed care situations. The states included Arizona, Florida, Massachusetts, and Pennsylvania, whose prepaid managed care programs included (1) mandatory and voluntary enrollment of beneficiaries, (2) statewide and more geographically limited programs, and (3) capitated Medicaid programs implemented with and without waivers (see table II.1, Characteristics of Four State Programs). In each state, we then visited selected health centers that had prepaid managed care plans operating in their areas for at least 3 years, and we gathered at least 5 years' worth of audited financial statements. Program data for the same period were obtained from health center responses to the Bureau of Primary Health Care's Common Reporting Requirements. To determine whether health centers were encountering financial difficulties while engaged in prepaid managed care operations, we compiled data on their financial positions. Specifically, we reviewed data on year-end fund balances, which represent the excess of center assets over their liabilities. In addition, we calculated the number of days of operating expenses that cash balances could support. We analyzed program data in several different ways. To determine whether health centers were maintaining access for underserved and vulnerable populations, we compiled data on the number of patients served and the number of patient encounters—a proxy measure for patient visits. To determine whether health centers were continuing to provide enabling services to their communities, we compiled data on spending for other health and community services, including transportation and translation services. In addition, we reviewed the number of full-time-equivalent staff hired to provide these services. To determine whether health centers were continuing to provide care to indigent and low-income patients, we compiled data on the amount of subsidized care. To determine whether health centers' sources of funds were changing under prepaid managed care, we compared these sources to total receipt of funds. We also conducted work in two states that have more recently begun capitated Medicaid managed care programs—Tennessee and Washington. Washington is making specific accommodations for health centers as it implements its Healthy Options program and is helping the centers establish their own Medicaid health plan. In contrast, Tennessee has so far not made programmatic changes to accommodate health centers, such as requiring their inclusion as providers. At all the health centers we visited, we toured the facilities and interviewed administrators. 
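To make the financial measures described above concrete, the short sketch below shows the kind of calculation involved: the year-end fund balance is assets minus liabilities, and the number of days of operating expenses that cash balances could support is the year-end cash balance divided by average daily operating expenses. The dollar amounts used are hypothetical illustrations, not data from any of the 10 centers.

```python
# Illustrative sketch of the two financial measures described above.
# All dollar amounts are hypothetical examples, not actual center data.

def fund_balance(total_assets: float, total_liabilities: float) -> float:
    """Year-end fund balance: the excess of center assets over liabilities."""
    return total_assets - total_liabilities

def days_cash_on_hand(cash_balance: float, annual_operating_expenses: float) -> float:
    """Number of days of operating expenses that the cash balance could support."""
    average_daily_expenses = annual_operating_expenses / 365
    return cash_balance / average_daily_expenses

# Hypothetical example: a center with $5.2 million in assets, $3.9 million in
# liabilities, $400,000 in cash, and $7.3 million in annual operating expenses.
print(f"Fund balance: ${fund_balance(5_200_000, 3_900_000):,.0f}")
print(f"Days of cash on hand: {days_cash_on_hand(400_000, 7_300_000):.1f}")
```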
We also interviewed officials of health plans operating in the area, some that contracted with health centers and some that did not; state community health center associations; and state Medicaid officials. We also interviewed BPHC, HRSA, and National Association of Community Health Centers officials. Because we selected our sites judgmentally, our results do not necessarily represent all health centers' experience with prepaid managed care but illustrate the kinds of issues faced by health centers in these systems. Our work was performed between January 1994 and March 1995 in accordance with generally accepted government auditing standards.
GAO reviewed the effects of managed health care on community health centers, focusing on: (1) whether centers participating in prepaid managed care have been able to provide medical services without jeopardizing their financial position; (2) lessons learned from centers' experiences in prepaid managed care; and (3) whether the Bureau of Primary Health Care (BPHC) prepares community health centers to operate under prepaid managed care systems. GAO found that by 1993: (1) almost 500,000 community health center patients were covered by prepaid managed care arrangements; (2) the 10 centers surveyed were able to continue to provide full services to their vulnerable clients in part due to other revenue sources; (3) all 10 centers increased their patient load and spending for a variety of services, while 7 centers also increased their spending for uncompensated care; (4) all 10 centers improved their financial condition due to increased revenues from a variety of sources; and (5) 3 centers had losses of up to $124,000, while 6 centers had excess revenues of up to $100,000 from prepaid managed care. GAO also found that: (1) the centers may be financially vulnerable if they depend on Medicaid prepaid managed care for a sizeable portion of their revenues, have inadequate capitation rates, and have financial responsibility for other than primary care services or rely on other federal and state funding sources; (2) lessons learned from centers' experiences with prepaid managed care include the likely loss of patients if the centers fail to participate, low capitation rates, assumption of too much financial risk, and the lack of managed care skills; and (3) to encourage centers' participation in prepaid managed care, BPHC has implemented an initiative to fund centers' efforts to develop delivery networks with other health providers for managed care operations.
You are an expert at summarizing long articles. Proceed to summarize the following text: As I have stated in other testimony, Medicare as currently structured is fiscally unsustainable. While many people have focused on the improvement in the HI trust fund's shorter-range solvency status, the real news is that we now have a more realistic view of Medicare's long-term financial condition and the outlook is much bleaker. A consensus has emerged that previous program spending projections have been based on overly optimistic assumptions and that actual spending will grow faster than has been assumed. First, let me talk about how we measure Medicare's fiscal health. In the past, Medicare's financial status has generally been gauged by the projected solvency of the HI trust fund, which covers primarily inpatient hospital care and is financed by payroll taxes. Looked at this way, Medicare—more precisely, Medicare's Hospital Insurance trust fund—is described as solvent through 2029. However, even from the perspective of HI trust fund solvency, the estimated exhaustion date of 2029 does not mean that we can or should wait until then to take action. In fact, delay in addressing the HI trust fund imbalance means that the actions needed will be larger and more disruptive. Taking action today to restore solvency to the HI trust fund for the next 75 years would require benefit cuts of 37 percent or tax increases of 60 percent, or some combination of the two. While these actions would not be easy or painless, postponing action until 2029 would require more than doubling the payroll tax or cutting benefits by more than half to maintain solvency. (See fig. 1.) Given that Medicare costs are now projected to grow 1 percentage point faster than GDP over the long term, HI's financial condition is expected to continue to worsen after the 75-year period. By 2075, HI's annual financing shortfall—the difference between program income and benefit costs—will reach 7.35 percent of taxable payroll. This means that if no action is taken this year, shifting the 75-year projection horizon out one year—adding 2076, a large deficit year, and dropping 2001, a surplus year—would yield a higher actuarial deficit, all other things being equal. Moreover, HI trust fund solvency does not mean the program is financially healthy. Under the Trustees' 2001 intermediate estimates, HI outlays are projected to exceed HI tax revenues beginning in 2016, the same year in which Social Security outlays are expected to exceed tax revenues. (See fig. 2.) As the baby boom generation retires and the Medicare-eligible population swells, the imbalance between outlays and revenues will increase dramatically. Thus, in 15 years the HI trust fund will begin to experience a growing annual cash deficit. At that point, the HI program must redeem Treasury securities acquired during years of cash surplus. Treasury, in turn, must obtain cash for those redeemed securities through increased taxes, spending cuts, increased borrowing, retiring less debt, or some combination thereof. Finally, HI trust fund solvency does not measure the growing cost of the Part B SMI component of Medicare, which covers outpatient services and is financed through general revenues and beneficiary premiums. Part B accounts for somewhat more than 40 percent of Medicare spending and is expected to account for a growing share of total program dollars. 
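The benefit-cut and tax-increase figures cited above are two ways of expressing the same financing gap. As a rough, stylized illustration—ignoring the existing trust fund balance, interest earnings, and the timing of deficits within the 75-year period—if closing the gap requires cutting benefits by a fraction b, then closing it entirely on the revenue side requires raising taxes by roughly b / (1 - b). The sketch below applies that identity to the 37 percent figure; it is a simplification of the Trustees' actuarial calculation, not a reproduction of it.

```python
# Stylized relationship between the benefit cut and the tax increase needed to
# close the same financing gap, ignoring trust fund balances, interest, and
# the timing of deficits within the 75-year window.

def tax_increase_equivalent(benefit_cut_fraction: float) -> float:
    """If costs must fall by this fraction to match income, income must
    instead rise by cut / (1 - cut) to match costs."""
    return benefit_cut_fraction / (1.0 - benefit_cut_fraction)

benefit_cut = 0.37  # 37 percent benefit cut cited in the testimony
print(f"Implied tax increase: {tax_increase_equivalent(benefit_cut):.0%}")
# Prints roughly 59%, consistent with the cited 60 percent tax increase.
```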
As the Trustees noted in this year's report, a rapidly growing share of general revenues and substantial increases in beneficiary premiums will be required to cover Part B expenditures. Clearly, it is total program spending—both Part A and Part B—relative to the entire federal budget and national economy that matters. This total spending approach is a much more realistic way of looking at the combined Medicare program's sustainability. In contrast, the historical measure of HI trust fund solvency cannot tell us whether the program is sustainable over the long haul. Worse, it can serve to distort perceptions about the timing, scope, and magnitude of our Medicare challenge. These figures reflect a worsening of the long-term outlook. Last year a technical panel advising the Medicare Trustees recommended assuming that future per-beneficiary costs for both HI and SMI eventually will grow at a rate 1 percentage point above GDP growth—about 1 percentage point higher than had previously been assumed. That recommendation was consistent with a similar change CBO had made to its Medicare and Medicaid long-term cost growth assumptions. The Trustees adopted the technical panel's long-term cost growth recommendation in their new estimates published on March 19, 2001, and note in their report that the new assumption substantially raises the long-term cost estimates for both HI and SMI. In their view, incorporating the technical panel's recommendation yields program spending estimates that represent a more realistic assessment of likely long-term program cost growth. Under the old assumption (the Trustees' 2000 best estimate intermediate assumptions), total Medicare spending would consume 5 percent of GDP by 2063. Under the new assumption (the Trustees' 2001 best estimate intermediate assumptions), this occurs almost 30 years sooner, in 2035, and by 2075 Medicare consumes over 8 percent of GDP, compared with 5.3 percent under the old assumption. The difference clearly demonstrates the dramatic implications of a 1-percentage point increase in annual Medicare spending growth over time. (See fig. 3.) The progressive absorption of a greater share of the nation's resources by health care is, in part—as with Social Security—a reflection of the rising share of the population that is elderly. Both programs face demographic conditions that require action now to avoid burdening future generations with the programs' rising costs. Like Social Security, Medicare's financial condition is directly affected by the relative size of the populations of covered workers and beneficiaries. Historically, this relationship has been favorable. In the near future, however, the covered worker-to-retiree ratio will change in ways that threaten the financial solvency and sustainability of this important national program. In 1970 there were 4.6 workers per HI beneficiary. Today there are about 4, and in 2030, this ratio will decline to only 2.3 workers per HI beneficiary. (See fig. 4.) Unlike Social Security, however, Medicare growth rates reflect not only a burgeoning beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. Increases in the number and quality of health care services have been fueled by the explosive growth of medical technology. Moreover, the actual costs of health care consumption are not transparent. Third-party payers generally insulate consumers from the cost of health care decisions. 
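The dramatic effect of the 1-percentage-point change in the assumed growth rate comes from compounding: if program costs grow 1 percentage point faster than GDP each year, the program's share of GDP rises by a factor of roughly 1.01 per year, which multiplies substantially over decades. The sketch below illustrates that arithmetic in isolation; it holds demographics and all other factors constant and uses a hypothetical starting share, so it is not a reproduction of the Trustees' projections.

```python
# Illustrative compounding of spending that grows 1 percentage point faster
# than GDP. Demographic effects and other factors are deliberately ignored;
# the starting share of GDP is hypothetical.

def share_of_gdp(initial_share: float, excess_growth: float, years: int) -> float:
    """Program share of GDP after `years`, if program spending grows
    `excess_growth` faster than GDP each year."""
    return initial_share * (1.0 + excess_growth) ** years

initial_share = 0.023   # hypothetical starting share of GDP (2.3 percent)
for years in (10, 30, 75):
    share = share_of_gdp(initial_share, 0.01, years)
    print(f"After {years:2d} years: {share:.1%} of GDP "
          f"({share / initial_share:.2f}x the starting share)")
```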
All of these factors contribute to making Medicare a much greater and more complex fiscal challenge than even Social Security. When viewed from the perspective of the federal budget and the economy, the growth in health care spending will become increasingly unsustainable over the longer term. Figure 5 shows the sum of the future expected HI cash deficit and the expected general fund contribution to SMI as a share of federal income taxes under the Trustees 2001 intermediate estimates. SMI has received contributions from the general fund since the inception of the program. This general revenue contribution is projected to grow from about 5 percent of federal personal and corporate income taxes in 2000 to 13 percent by 2030. Beginning in 2016, use of general fund revenues will be required to pay benefits as the HI trust fund redeems its Treasury securities. Assuming general fund revenues are used to pay benefits after the trust fund is exhausted, by 2030 the HI program alone would consume more than 6 percent of income tax revenue. On a combined basis, Medicare’s draw on general revenues would grow from 5.4 percent of income taxes today to nearly 20 percent in 2030 and 45 percent by 2070. Figure 6 reinforces the need to look beyond the HI program. HI is only the first layer in this figure. The middle layer adds the SMI program, which is expected to grow faster than HI in the near future. By the end of the 75- year projection period, SMI will represent almost half of total estimated Medicare costs. To get a more complete picture of the future federal health care entitlement burden, Medicaid is added. Medicare and the federal portion of Medicaid together will grow to 14.5 percent of GDP from today’s 3.5 percent. Taken together, the two major government health programs— Medicare and Medicaid—represent an unsustainable burden on future generations. In addition, this figure does not reflect the taxpayer burden of state and local Medicaid expenditures. A recent statement by the National Governors Association argues that increased Medicaid spending has already made it difficult for states to increase funding for other priorities. Our long-term simulations show that to move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. Assuming, for example, that Congress and the President adhere to the often-stated goal of saving the Social Security surpluses, our long-term simulations show a world by 2030 in which Social Security, Medicare, and Medicaid absorb most of the available revenues within the federal budget. Under this scenario, these programs would require more than three-quarters of total federal revenue even without adding a Medicare prescription drug benefit. (See fig. 7.) Revenue as a share of GDP declines from its 2000 level of 20.6 percent due to unspecified permanent policy actions. In this display, policy changes are allocated equally between revenue reductions and spending increases. The “Save the Social Security Surpluses” simulation can only be run through 2056 due to the elimination of the capital stock. This scenario contemplates saving surpluses for 20 years—an unprecedented period of surpluses in our history—and retiring publicly held debt. Alone, however, even saving all Social Security surpluses would not be enough to avoid encumbering the budget with unsustainable costs from these entitlement programs. 
Little room would be left for other federal spending priorities such as national defense, education, and law enforcement. Absent changes in the structure of Medicare and Social Security, sometime during the 2040s government would do nothing but mail checks to the elderly and their health care providers. Accordingly, substantive reform of the Medicare and Social Security programs remains critical to recapturing our future fiscal flexibility. Demographics argue for early action to address Medicare’s fiscal imbalances. Ample time is required to phase in the reforms needed to put this program on a more sustainable footing before the baby boomers retire. In addition, timely action to bring costs down pays large fiscal dividends for the program and the budget. The high projected growth of Medicare in the coming years means that the earlier reform begins, the greater the savings will be as a result of the effects of compounding. Beyond reforming the Medicare program itself, maintaining an overall sustainable fiscal policy and strong economy is vital to enhancing our nation’s future capacity to afford paying benefits in the face of an aging society. Today’s decisions can have wide-ranging effects on our ability to afford tomorrow’s commitments. As I have testified before, you can think of the budget choices you face as a portfolio of fiscal options balancing today’s unmet needs with tomorrow’s fiscal challenges. At the one end— with the lowest risk to the long-range fiscal position—is reducing publicly held debt. At the other end—offering the greatest risk—is increasing entitlement spending without fundamental program reform. Reducing publicly held debt helps lift future fiscal burdens by freeing up budgetary resources encumbered for interest payments, which currently represent about 12 cents of every federal dollar spent, and by enhancing the pool of economic resources available for private investment and long- term economic growth. This is particularly crucial in view of the known fiscal pressures that will begin bearing down on future budgets in about 10 years as the baby boomers start to retire. However, as noted above, debt reduction is not enough. Our long-term simulations illustrate that, absent entitlement reform, large and persistent deficits will return. Despite common agreement that, without reform, future program costs will consume growing shares of the federal budget, there is also a mounting consensus that Medicare’s benefit package should be expanded to cover prescription drugs, which will add billions to the program’s cost. This places added pressure on policymakers to consider proposals that could fundamentally reform Medicare. Our previous work provides, I believe, some considerations that are relevant to deliberations regarding the potential addition of a prescription drug benefit and Medicare reform options that would inject competitive mechanisms to help control costs. In addition, our reviews of HCFA offer lessons for improving Medicare’s management. Implementing necessary reforms that address Medicare’s financial imbalance and meet the needs of beneficiaries will not be easy. We must have a Medicare agency that is ready and able to meet these 21st century challenges. Among the major policy challenges facing the Congress today is how to reconcile Medicare’s unsustainable long-range financial condition with the growing demand for an expensive new benefit—namely, coverage for prescription drugs. 
It is a given that prescription drugs play a far greater role in health care now than when Medicare was created. Today, Medicare beneficiaries tend to need and use more drugs than other Americans. However, because adding a benefit of such potential magnitude could further erode the program's already unsustainable financial condition, you face difficult choices about design and implementation options that will have a significant impact on beneficiaries, the program, and the marketplace. Let's examine the current status regarding Medicare beneficiaries and drug coverage. About a third of Medicare beneficiaries have no coverage for prescription drugs. Some beneficiaries with the lowest incomes receive coverage through Medicaid. Some beneficiaries receive drug coverage through former employers, some can join Medicare+Choice plans that offer drug benefits, and some have supplemental Medigap coverage that pays for drugs. However, significant gaps remain. For example, Medicare+Choice plans offering drug benefits are not available everywhere and generally do not provide catastrophic coverage. Medigap plans are expensive and have caps that significantly constrain the protection they offer. Thus, beneficiaries with modest incomes and high drug expenditures are most vulnerable to these coverage gaps. Overall, the nation's spending on prescription drugs has been increasing about twice as fast as spending on other health care services, and it is expected to keep growing. Recent estimates show that national per-person spending for prescription drugs will increase at an average annual rate exceeding 10 percent until at least 2010. As the cost of drug coverage has been increasing, employers and Medicare+Choice plans have been cutting back on prescription drug benefits by raising enrollees' cost-sharing, charging higher copayments for more expensive drugs, or eliminating the benefit altogether. It is not news that adding a prescription drug benefit to Medicare will be costly. However, the cost consequences of a Medicare drug benefit will depend on choices made about its design—including the benefit's scope and financing mechanism. For instance, a Medicare prescription drug benefit could be designed to provide coverage for all beneficiaries, coverage only for beneficiaries with extraordinary drug expenses, or coverage only for low-income beneficiaries. Policymakers would need to determine how costs would be shared between taxpayers and beneficiaries through premiums, deductibles, and copayments and whether subsidies would be available to low-income, non-Medicaid-eligible individuals. Design decisions would also affect the extent to which a new pharmaceutical benefit might shift to Medicare portions of the costs now paid out of pocket by beneficiaries, as well as costs now paid by Medicaid, Medigap, or employer plans covering prescription drugs for retirees. Clearly, the details of a prescription drug benefit's implementation would have a significant impact on both beneficiaries and program spending. Experience suggests that some combination of enhanced access to discounted prices, targeted subsidies, and measures to make beneficiaries more aware of costs may be needed. Any option would need to balance concerns about Medicare sustainability with the need to address what will likely be a growing hardship for some beneficiaries in obtaining prescription drugs. 
The financial prognosis for Medicare clearly calls for meaningful spending reforms to help ensure that the program is sustainable over the long haul. The importance of such reforms will be heightened if financial pressures on Medicare are increased by the addition of new benefits, such as coverage for prescription drugs. Some leading reform proposals envision that Medicare could achieve savings by adapting some of the competitive elements embodied in the Federal Employees Health Benefits Program. Specifically, these proposals would move Medicare towards a model in which health plans compete on the basis of benefits offered and costs to the government and beneficiaries, making the price of health care more transparent. Currently, Medicare follows a complex formula to set payment rates for Medicare+Choice plans, and plans compete primarily on the richness of their benefit packages. Medicare permits plans to earn a reasonable profit, equal to the amount they can earn from a commercial contract. Efficient plans that keep costs below the fixed payment amount can use the “savings” to enhance their benefit packages, thus attracting additional members and gaining market share. Under this arrangement, competition among Medicare plans may produce advantages for beneficiaries, but the government reaps no savings. In contrast, a competitive premium approach offers certain advantages. Instead of having the government administratively set a payment amount and letting plans decide—subject to some minimum requirements—the benefits they will offer, plans would set their own premiums and offer at least a required minimum Medicare benefit package. Under these proposals, Medicare costs would be more transparent: beneficiaries could better see what they and the government were paying for in connection with health care expenditures. Beneficiaries would generally pay a portion of the premium and Medicare would pay the rest. Plans operating at lower cost could reduce premiums, attract beneficiaries, and increase market share. Beneficiaries who joined these plans would enjoy lower out-of- pocket expenses. Unlike today’s Medicare+Choice program, the competitive premium approach provides the potential for taxpayers to benefit from the competitive forces. As beneficiaries migrated to lower- cost plans, the average government payment would fall. Experience with the Medicare+Choice program reminds us that competition in Medicare has its limits. First, not all geographic areas are able to support multiple health plans. Medicare health plans historically have had difficulty operating efficiently in rural areas because of a sparseness of both beneficiaries and providers. In 2000, 21 percent of rural beneficiaries had access to a Medicare+Choice plan, compared to 97 percent of urban beneficiaries. Second, separating winners from losers is a basic function of competition. Thus, under a competitive premium approach, not all plans would thrive, requiring that provisions be made to protect beneficiaries enrolled in less successful plans. The extraordinary challenge of developing and implementing Medicare reforms should not be underestimated. Our look at health care spending projections shows that, with respect to Medicare reform, small implementation problems can have huge consequences. To be effective, a good program design will need to be coupled with competent program management. Consistent with that view, questions are being raised about the ability of CMS to administer the Medicare program effectively. 
Our reviews of Medicare program activities confirm the legitimacy of these concerns. In our companion statement today, we discuss not only the Medicare agency’s performance record but also areas where constraints have limited the agency’s achievements. We also identify challenges the agency faces in seeking to meet expectations for the future. As the Congress and the Administration focus on current Medicare management issues, our review of HCFA suggests several lessons: Managing for results is fundamental to an agency’s ability to set meaningful goals for performance, measure performance against those goals, and hold managers accountable for their results. Our work shows that HCFA has faltered in adopting a results-based approach to agency management, leaving the agency in a weakened position for assuming upcoming responsibilities. In some instances, the agency may not have the tools it needs because it has not been given explicit statutory authority. For example, the agency has sought explicit statutory authority to use full and open competition to select claims administration contractors. The agency believes that without such statutory authority it is at a disadvantage in selecting the best performers to carry out Medicare claims administration and customer service functions. To be effective, any agency must be equipped with the full complement of management tools it needs to get the job done. A high-performance organization demands a workforce with, among other things, up-to-date skills to enhance the agency’s value to its customers and ensure that it is equipped to achieve its mission. HCFA began workforce planning efforts that continue today in an effort to identify areas in which staff skills are not well matched to the agency’s evolving mission. In addition, CMS recently reorganized its structure to be more responsive to its customers. It is important that CMS continue to reevaluate its skill needs and organizational structure as new demands are placed on the agency. Data-driven information is essential to assess the budgetary impact of policy changes and distinguish between desirable and undesirable consequences. Ideally, the agency that runs Medicare should have the ability to monitor the effects of Medicare reforms, if enacted—such as adding a drug benefit or reshaping the program’s design. However, HCFA was unable to make timely assessments, largely because its information systems were not up to the task. The status of these systems remains the same, leaving CMS unprepared to determine, within reasonable time frames, the appropriateness of services provided and program expenditures. The need for timely, accurate, and useful information is particularly important in a program where small rate changes developed from faulty estimates can mean billions of dollars in overpayments or underpayments. An agency’s capacity should be commensurate with its responsibilities. As the Congress continues to modify Medicare, CMS’ responsibilities will grow substantially. HCFA’s tasks increased enormously with the enactment of landmark Medicare legislation in 1997 and the modifications to that legislation in 1999 and 2000. In addition to the growth in Medicare responsibilities, the agency that administers this program is also responsible for other large health insurance programs and activities. As the agency’s mission has grown, however, its administrative dollars have been stretched thinner. 
Adequate resources are vital to support the kind of oversight and stewardship activities that Americans have come to count on—inspection of nursing homes and laboratories, certification of Medicare providers, collection and analysis of critical health care data, to name a few. Shortchanging this agency’s administrative budget will put the agency’s ability to handle upcoming reforms at serious risk. In short, because Medicare’s future will play such a significant role in the future of the American economy, we cannot afford to settle for anything less than a world-class organization to run the program. However, achieving such a goal will require a clear recognition of the fundamental importance of efficient and effective day-to-day operations. In determining how to reform the Medicare program, much is at stake— not only the future of Medicare itself but also assuring the nation’s future fiscal flexibility to pursue other important national goals and programs. I feel that the greatest risk lies in doing nothing to improve the Medicare program’s long-term sustainability. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. Engaging in a comprehensive effort to reform the Medicare program and put it on a sustainable path for the future would help fulfill this generation’s stewardship responsibility to succeeding generations. It would also help to preserve some capacity for future generations to make their own choices for what role they want the federal government to play.
Although the short-term outlook of Medicare's hospital insurance trust fund improved in the last year, Medicare's long-term prospects have worsened. The Medicare Trustees' latest projections, released in March, use more realistic assumptions about health care spending in the years ahead. These latest projections call into question the program's long-term financial health. The Congressional Budget Office also increased its long-term estimates of Medicare spending. The slowdown in Medicare spending growth in recent years appears to have ended. In the first eight months of fiscal year 2001, Medicare spending was 7.5 percent higher than a year earlier. This testimony discusses several fundamental challenges to Medicare reform. Without meaningful entitlement reform, GAO's long-term budget simulations show that an aging population and rising health care spending will eventually drive the country back into deficit and debt. The addition of a prescription drug benefit would boost spending projections even further. Properly structured reform to promote competition among health plans could make Medicare beneficiaries more cost conscious. The continued importance of traditional Medicare underscores the need to base adjustments to provider payments on hard evidence rather than on anecdotal information. Similarly, reforms in the management of the Medicare program should ensure that adequate resources accompany increased expectations about performance and accountability. Ultimately, broader health care reforms will be needed to balance health care spending with other societal priorities.
You are an expert at summarizing long articles. Proceed to summarize the following text: Medicare is generally the primary source of health insurance for people age 65 and over. However, traditional Medicare leaves beneficiaries liable for considerable out-of-pocket costs, and most beneficiaries have supplemental coverage. Military retirees can also obtain some care from MTFs and, since October 1, 2001, DOD has provided comprehensive supplemental coverage to its retirees age 65 and over. Civilian federal retirees and dependents age 65 and over can obtain supplemental coverage from FEHBP. The demonstration tested extending this coverage to military retirees age 65 and over, and their dependents. Medicare, a federally financed health insurance program for persons age 65 and older, some people with disabilities, and people with end-stage kidney disease, is typically the primary source of health insurance for persons age 65 and over. Eligible Medicare beneficiaries are automatically covered by part A, which includes inpatient hospital and hospice care, most skilled nursing facility (SNF) care, and some home health care. They can also pay a monthly premium ($54 in 2002) to join part B, which covers physician and outpatient services as well as those home health services not covered under part A. Outpatient prescription drugs are generally not covered. Under traditional fee-for-service Medicare, beneficiaries choose their own providers and Medicare reimburses those providers on a fee-for- service basis. Beneficiaries who receive care through traditional Medicare are responsible for paying a share of the costs for most services. The alternative to traditional Medicare, Medicare+Choice, offers beneficiaries the option of enrolling in private managed care plans and other private health plans. In 1999, before the demonstration started, about 16 percent of all Medicare beneficiaries were enrolled in a Medicare+Choice plan; by 2002, the final year of the demonstration, enrollment had fallen to 12 percent. Medicare+Choice plans cover all basic Medicare benefits, and many also offer additional benefits such as prescription drugs, although most plans place a limit on the amount of drug costs they cover. These plans typically do not pay if their members use providers who are not in their plans, and plan members may have to obtain approval from their primary care doctors before they see specialists. Members of Medicare+Choice plans generally pay less out of pocket than they would under traditional Medicare. Medicare’s traditional fee-for-service benefit package and cost-sharing requirements leave beneficiaries liable for significant out-of-pocket costs, and most beneficiaries in traditional fee-for-service Medicare have supplemental coverage. This coverage typically pays part of Medicare’s deductibles, coinsurance, and copayments, and may also provide benefits that Medicare does not cover—notably, outpatient prescription drugs. Major sources of supplemental coverage include employer-sponsored insurance, the standard Medigap policies sold by private insurers to individuals, and Medicaid. Employer-sponsored insurance. About one-third of Medicare’s beneficiaries have employer-sponsored supplemental coverage. These plans, which typically have cost-sharing requirements, pay for some costs not covered by Medicare, including part of the cost of prescription drugs. Medigap. 
About one-quarter of Medicare’s beneficiaries have Medigap, the only supplemental coverage option available to all beneficiaries when they initially enroll in Medicare. Prior to 1992, insurers were free to establish the benefits for Medigap policies. The Omnibus Budget Reconciliation Act of 1990 (OBRA 1990) required that beginning in 1992, Medigap policies be standardized, and OBRA authorized 10 different benefit packages, known as plans A through J, that insurers could offer. The most popular Medigap policy is plan F, which covers Medicare coinsurance and deductibles, but not prescription drugs. It had an average annual premium per person of about $1,200 in 1999, although in some cases plan F cost twice that amount. Among the least popular Medigap policies are those offering prescription drug coverage. These policies are the most expensive of the 10 standard policies—they averaged about $1,600 in 1999, and some cost over $5,000. Beneficiaries with these policies pay most of the cost of drugs because the Medigap drug benefit has a deductible and high cost sharing and does not reimburse policyholders for drug expenses above a set limit. DOD provides health care to active-duty military personnel and retirees, and to eligible dependents and survivors through its TRICARE program. Prior to 2001, retirees lost most of their military health coverage when they turned age 65, although they could still use MTFs when space was available, and they could obtain prescription drugs without charge from MTF pharmacies. In the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 (NDAA 2001), Congress established two new benefits to supplement military retirees’ Medicare coverage: Pharmacy benefit. Effective April 1, 2001, military retirees age 65 and over were given access to prescription drugs through TRICARE’s National Mail Order Pharmacy (NMOP) and civilian pharmacies. Retirees make lower copayments for prescription drugs purchased through NMOP than at civilian pharmacies. Retirees continue to have access to free prescription drugs at MTF pharmacies. TFL. Effective October 1, 2001, military retirees age 65 and over who were enrolled in Medicare part B became eligible for TFL. As a result, DOD is now a secondary payer for these retirees’ Medicare-covered services, paying all of their required cost sharing. TFL also offers certain benefits not covered by Medicare, including catastrophic coverage. Retirees can continue to use MTFs without charge on a “space available” basis. In fiscal year 1999, before TFL was established, DOD’s annual appropriations for health care were about $16 billion, of which over $1 billion funded the care of military retirees age 65 and over. In fiscal year 2002, DOD’s annual health care appropriations totaled about $24 billion, of which over $5 billion funded the care of retirees age 65 and over who used TFL, the pharmacy benefit, and MTF care. In addition to their DOD coverage, military retirees—but generally not their dependents—can use Department of Veterans Affairs (VA) facilities. There are 163 VA medical centers throughout the country that provide inpatient and outpatient care as well as over 850 outpatient clinics. VA care is free to veterans with certain service-connected disabilities or low incomes; other veterans are eligible for care but have lower priority than those with service-connected disabilities or low incomes and are required to make copayments. 
FEHBP, the health insurance program administered by OPM for federal civilian employees and retirees, covered about 8.3 million people in 2002. Civilian employees become eligible for FEHBP when hired by the federal government. Employees and retirees can purchase health insurance from a variety of private plans, including both managed care and fee-for-service plans, that offer a broad range of benefits, including prescription drugs. Insurers offer both self-only plans and family plans, which also cover the policyholders’ dependents. Some plans also offer two levels of benefits: a standard option and a high option, which has more benefits, less cost sharing, or both. For retirees age 65 and over, FEHBP supplements Medicare, paying beneficiaries’ Medicare deductibles and coinsurance in addition to paying some costs not covered by Medicare, such as part of the cost of prescription drugs. Over two-thirds of FEHBP policyholders are in national plans; the remainder are in local plans. National plans include plans that are available to all civilian employees and retirees as well as plans that are available only to particular groups, for example, foreign service employees. In the FEHBP, the largest national plan is Blue Cross Blue Shield, accounting for about 45 percent of those insured by an FEHBP plan. Other national plans account for about 24 percent of insured individuals. The national plans are all preferred provider organizations (PPO) in which enrollees use doctors, hospitals, and other providers that belong to the plan’s network, but are allowed to use providers outside of the network for an additional cost. Local plans, which operate in selected geographic areas and are mostly managed care, cover the remaining 32 percent of people insured by the FEHBP. Civilian employees who enroll in FEHBP can change plans during an annual enrollment period. During this period, which runs from mid- November to mid-December, beneficiaries eligible for FEHBP can select new plans for the forthcoming calendar year. To assist these beneficiaries in selecting plans, OPM provides general information on FEHBP through brochures and its Web site. Also, as part of this information campaign, plans’ representatives may visit government agencies to participate in health fairs, where they provide detailed information about their specific health plans to government employees. The premiums charged by these plans, which are negotiated annually between OPM and the plans, depend on the benefits offered by the plan, the type of plan—fee-for-service or managed care—and the plan’s out-of- pocket costs for the enrollee. Plans may propose changes to benefits as well as changes in out-of-pocket payments by enrollees. OPM and the plans negotiate these changes and take them into account when negotiating premiums. Fee-for-service plans must base their rates on the claims experience of their FEHBP enrollees, while adjusting for changes in benefits and out-of-pocket payments, and must provide OPM with data to justify their proposed rates. Managed care plans must give FEHBP the best rate that they offer to groups of similar size in the private sector under similar conditions, with adjustments to account for differences in the demographic characteristics of FEHBP enrollees and the benefits provided. The government pays a maximum of 72 percent of the weighted average premium of all plans and no more than 75 percent of any plan’s premium. 
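The government-contribution rule described above can be expressed as a simple formula: the government pays the lesser of 72 percent of the weighted average premium across all plans and 75 percent of the chosen plan's premium, with the enrollee paying the remainder. The sketch below illustrates that calculation using hypothetical premiums and enrollment counts; weighting the average by enrollment is an illustrative assumption, and the sketch simplifies the actual formula, which the text describes only in outline.

```python
# Illustrative sketch of the FEHBP government-contribution rule described
# above: the government pays at most 72 percent of the weighted average
# premium of all plans and no more than 75 percent of any plan's premium.
# Premiums and enrollment counts below are hypothetical; weighting the
# average by enrollment is an illustrative assumption.

def government_contribution(plan_premium: float,
                            all_premiums: list[float],
                            enrollments: list[int]) -> float:
    """Return the government share of one plan's annual premium."""
    weighted_avg = (sum(p * n for p, n in zip(all_premiums, enrollments))
                    / sum(enrollments))
    return min(0.72 * weighted_avg, 0.75 * plan_premium)

premiums = [3600.0, 4200.0, 5000.0]      # hypothetical annual premiums
enrollments = [50_000, 30_000, 20_000]   # hypothetical enrollment in each plan

for premium in premiums:
    gov = government_contribution(premium, premiums, enrollments)
    print(f"Premium ${premium:,.0f}: government pays ${gov:,.0f}, "
          f"enrollee pays ${premium - gov:,.0f}")
```

In this hypothetical example, the 75-percent cap binds only for the lowest-premium plan; for the others, the government share is limited by 72 percent of the weighted average.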
Unlike most other plans, including employer-sponsored insurance and Medigap, FEHBP plans charge the same premium to all enrollees, regardless of age. As a result, persons over age 65, for whom the FEHBP plan supplements Medicare, pay the same rate as those under age 65, for whom the FEHBP plan is the primary insurer. The FEHBP demonstration allowed eligible beneficiaries in the demonstration sites to enroll in an FEHBP plan. The demonstration ran for 3 years, from January 1, 2000, through December 31, 2002. The law that established the demonstration capped enrollment at 66,000 beneficiaries and specified that DOD and OPM should jointly select from 6 to 10 sites. Initially, the agencies selected 8 sites that had about 69,000 eligible beneficiaries according to DOD's calculation for 2000. (See table 1.) Four sites had MTFs, and 1 site—Dover—also participated in the subvention demonstration. Two other sites, which had about 57,000 eligible beneficiaries, were added in 2001. Demonstration enrollees received the same benefits as civilian FEHBP enrollees, but could no longer use MTFs or MTF pharmacies. Military retirees age 65 and over and their dependents age 65 and over were permitted to enroll in either self-only or family FEHBP plans. Dependents who were under age 65 could be covered only if the eligible retiree chose a family plan. Several other groups were also permitted to enroll, including unremarried former spouses of a member or former member of the armed forces entitled to military retiree health care; dependents of a deceased member or former member of the armed forces entitled to military retiree health care; and dependents of a member of the armed services who died while on active duty for more than 30 days. About 13 percent of those eligible for the demonstration were under age 65. DOD, with assistance from OPM, was responsible for providing eligible beneficiaries information on the demonstration. A description of this information campaign is in appendix IV. The demonstration guaranteed enrollees who dropped their Medigap policies the right to resume their coverage under 4 of the 10 standard Medigap policies—plans A, B, C, and F—at the end of the demonstration. However, demonstration enrollees who held any other standard Medigap policies, or Medigap policies obtained before the standard plans were established, were not given the right to regain the policies. Enrollees who dropped their employer-sponsored retiree health coverage had no guarantee that they could regain it. Each plan was required by OPM to offer the same package of benefits to demonstration enrollees that it offered in the civilian FEHBP, and plans operating in the demonstration sites were generally required to participate in the demonstration. Fee-for-service plans that limit enrollment to specific groups, such as foreign service employees, did not participate. In addition, health maintenance organizations (HMO) and point-of-service (POS) plans were not required to participate if their civilian FEHBP enrollment was less than 300 or their service area overlapped only a small part of the demonstration site. Thirty-one local plans participated in the demonstration in 2000; for another 14 local plans participation was optional, and none of these participated. The law established a separate risk pool for the demonstration, so any losses from the demonstration were not covered at the expense of persons insured under the civilian FEHBP. 
As a result, plans had to establish separate reserves for the demonstration and were allowed to charge different premiums in the demonstration than they charged in the civilian program. Enrollment in the demonstration was low, although enrollment in Puerto Rico was substantially higher than on the U.S. mainland. Among eligible beneficiaries who knew about the demonstration yet chose not to enroll, most were satisfied with their existing health care coverage and preferred it to the demonstration’s benefits. Lack of knowledge about the demonstration accounted for only a small part of the low enrollment. Although most eligible retirees did not enroll in a demonstration plan, several factors encouraged enrollment. Some retirees took the view that the demonstration plans’ benefits, notably prescription drug coverage, were better than available alternatives. Other retirees mentioned lack of satisfactory alternative coverage. In particular, retirees who were not covered by an existing Medicare+Choice or employer-sponsored health plan were much more likely to enroll. The higher enrollment in Puerto Rico reflected a higher proportion of retirees there who considered the demonstration’s benefits—ranging from drug coverage to choice of doctors—better than what they had. The higher enrollment in Puerto Rico also reflected in part Puerto Rico’s greater share of retirees without existing coverage, such as an employer-sponsored plan. While some military retiree organizations as well as a large FEHBP plan predicted at the start of the demonstration that enrollment would reach 25 percent or more of eligible beneficiaries, demonstration-wide enrollment was 3.6 percent in 2000 and 5.5 percent in 2001. In 2002, following the introduction of the senior pharmacy benefit and TFL the previous year, demonstration-wide enrollment fell to 3.2 percent. (See fig. 1.) The demonstration’s enrollment peaked at 7,521 beneficiaries, and by 2002 had declined to 4,367 of the 137,230 eligible beneficiaries. These low demonstration-wide enrollment rates masked a sizeable difference in enrollment between the mainland sites and Puerto Rico. In 2000, enrollment in Puerto Rico was 13.2 percent of eligible beneficiaries—about five times the rate on the mainland. By 2001, Puerto Rico’s enrollment had climbed to 28.6 percent. Unlike 2002 enrollment on the mainland, which declined, enrollment in Puerto Rico that year rose slightly, to 30 percent. (See fig. 2.) Among the mainland sites, there were also sizeable differences in enrollment, ranging from 1.3 percent in Dover, Delaware, in 2001, to 8.8 percent in Humboldt County, California, that year. Enrollment at all mainland sites declined in 2002. Retirees who knew about the demonstration and did not enroll cited many reasons for their decision, notably that their existing coverage’s benefits— in particular its prescription drug benefit—and costs were more attractive than those of the demonstration. In addition, nonenrollees expressed several concerns, including uncertainty about whether they could regain their Medicare supplemental coverage after the demonstration ended. Benefits of existing coverage. Almost two-thirds of nonenrollees who knew about the demonstration reported that they were satisfied with their existing employer-sponsored or other health coverage. For the majority of nonenrollees with private employer-sponsored coverage, the demonstration’s benefits were no better than those offered by their current plan. Costs of existing coverage. 
Nearly 30 percent of nonenrollees who knew about the demonstration stated that its plans were too costly. This was likely a significant concern for retirees interested in a managed care plan, such as a Medicare+Choice plan, whose premiums were generally lower than demonstration plans. Prescription drugs and availability of doctors. In explaining their decision not to enroll, many eligible beneficiaries who knew about the demonstration focused on limitations of specific features of the benefits package that they said were less attractive than similar features of their existing coverage. More than one-quarter of nonenrollees cited not being able to continue getting prescriptions filled without charge at MTF pharmacies if they enrolled. More than one-quarter also said their decision at least partly reflected not being able to keep their current doctors if they enrolled. These nonenrollees may have been considering joining one of the demonstration’s managed care plans, which generally limit the number of doctors included in their provider networks. Otherwise, they would have been able to keep their doctors, because PPOs, while encouraging the use of network doctors, permit individuals to select their own doctors at an additional cost. Uncertainty. About one-fourth of nonenrollees said they were uncertain about the viability of the demonstration and wanted to wait to see how it worked out. In addition, more than 20 percent of nonenrollees were concerned that the demonstration was temporary and would end in 3 years. Furthermore, some nonenrollees who looked beyond the demonstration period expressed uncertainty about what their coverage would be after the demonstration ended: Roughly one-quarter expressed concern that joining a demonstration plan meant risking the future loss of other coverage—either Medigap or employer-sponsored insurance. Finally, about one-quarter of nonenrollees were uncertain about how the demonstration would mesh with Medicare. Lack of knowledge—although common among eligible retirees—was only a small factor in explaining low enrollment. If everyone eligible for the demonstration had known about it, enrollment might have doubled, but would still have been low. DOD undertook an extensive information campaign, intended to inform all eligible beneficiaries about the demonstration, but nearly 54 percent of those eligible for the demonstration did not know about it at the time of our survey (May through August 2000). Of those who knew about the demonstration, only 7.4 percent enrolled. Those who did not know about the demonstration were different in several respects from those who did: They were more likely to be single, female, African American, older than age 75, to have annual income of $40,000 or less, to live an hour or more from an MTF, not covered by employer-sponsored health insurance, not officers, not to belong to military retiree organizations and to live in the demonstration areas of Camp Pendleton, California, Dallas, Texas, and Fort Knox, Kentucky. Accounting for the different characteristics of those retirees who knew about the demonstration and those who did not, we found that roughly 7 percent of those who did not know about the demonstration would have enrolled in 2000 if they had known about it. As a result, we estimate that demonstration-wide enrollment would have been about 7 percent if all eligible retirees knew about the demonstration. (See app. II.) 
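The arithmetic behind this estimate can be sketched as follows, using the rounded shares reported above; this is only a rough check, and the formal survey-weighted estimate derived in appendix II is 7.2 percent.

# Back-of-the-envelope version of the estimate above (rounded shares only;
# the formal survey-weighted estimate is reported in appendix II).
share_knew = 0.46          # roughly 46 percent of eligible retirees knew of the demonstration
share_not_knew = 0.54      # nearly 54 percent did not
enroll_if_knew = 0.074     # 7.4 percent of those who knew enrolled
enroll_if_informed = 0.07  # estimated rate for those who did not know, had they known

estimated_rate = share_knew * enroll_if_knew + share_not_knew * enroll_if_informed
print(f"estimated demonstration-wide enrollment if all had known: {estimated_rate:.1%}")
# roughly 7 percent, consistent with the appendix II estimate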
Comparison of enrollment in Puerto Rico and the mainland sites also suggests that, among the factors that led to low enrollment, knowledge about the demonstration was not decisive. In 2000, fewer people in Puerto Rico reported knowing about the demonstration than on the mainland (35 percent versus 47 percent). Nonetheless, enrollment in Puerto Rico was much higher. In making the decision to enroll, retirees were attracted to an FEHBP plan if it had better benefits—particularly prescription drug coverage—or lower costs than their current coverage or other available coverage. Among those who knew about the demonstration, retirees who enrolled were typically positive about one or both of the following: Better FEHBP benefits. Two-thirds of enrollees cited their demonstration plan’s benefits package as a reason to enroll, with just over half saying the benefits package was better than other coverage available to them. Nearly two-thirds of enrollees mentioned the better coverage of prescription drugs offered by their demonstration plan. Furthermore, the inclusiveness of FEHBP plans’ networks of providers mattered to a majority of enrollees: More than three-fifths mentioned as a reason for enrolling that they could keep their current doctors under the demonstration. Lower demonstration plan costs. Among enrollees, about 62 percent said that their demonstration FEHBP plan was less costly than other coverage they could buy. Beneficiaries’ favorable assessments of FEHBP—and their enrollment in the demonstration—were related to whether they lacked alternative coverage to traditional Medicare and, if they had such coverage, to the type of coverage. In 2000, among those who lacked employer-sponsored coverage or a Medicare+Choice plan, or lived more than an hour’s travel time from an MTF, about 15 percent enrolled. By contrast, among those who had such coverage, or had MTF access, 4 percent enrolled. In particular, enrollment in an FEHBP plan was more likely for retirees who lacked either Medicare+Choice or employer-sponsored coverage. Lack of Medicare+Choice. Controlling for other factors affecting enrollment, those who did not use Medicare+Choice were much more likely to enroll in a demonstration plan than those who did. (See fig. 3.) Several reasons may account for this. First, in contrast to fee-for-service Medicare, Medicare+Choice plans are often less costly out-of-pocket, typically requiring no deductibles and lower cost sharing for physician visits and other outpatient services. Second, unlike fee-for-service Medicare, many Medicare+Choice plans offered a prescription drug benefit. Third, while Medicare+Choice plan benefits were similar to those offered by demonstration FEHBP plans, Medicare+Choice premiums were typically less than those charged by the more popular demonstration plans, including Blue Cross Blue Shield, the most popular demonstration plan on the mainland. Lack of employer-sponsored coverage. Retirees who did not have employer-sponsored health coverage were also more likely to join a demonstration plan. Of those who did not have employer-sponsored coverage, 8.6 percent enrolled in the demonstration, compared to 4.7 percent of those who had such coverage. Since benefits in employer- sponsored health plans often resemble FEHBP benefits, retirees with employer-sponsored coverage would have been less likely to find FEHBP plans attractive. Retirees with another type of alternative coverage, Medigap, responded differently to the demonstration. 
Unlike the pattern with other types of insurance coverage, more of those with a Medigap plan enrolled (9.3 percent) than did those without Medigap (5.6 percent). Medigap plans generally offered fewer benefits than a demonstration FEHBP plan, but at the same or higher cost to the retiree. Seven of the 10 types of Medigap plans available to those eligible for the demonstration do not cover prescription drugs. As a result of these differences, retirees who were covered by Medigap policies would have had an incentive to enroll instead in a demonstration FEHBP plan, which offered drug coverage and other benefits at a lower premium cost than the most popular Medigap plan. Like the lack of Medicare+Choice or employer-sponsored coverage, lack of nearby MTF care stimulated enrollment. While living more than an hour from an MTF was associated with higher demonstration enrollment, MTF care may have served some retirees as a satisfactory supplement to Medicare-covered care, making demonstration FEHBP plans less attractive to them. Of eligible retirees who knew of the demonstration and lived within 1 hour of an MTF, 3.7 percent enrolled, compared to 11.1 percent of those who lived more than 1 hour away. Higher enrollment in Puerto Rico than on the mainland reflected in part the more widespread lack of satisfactory alternative health coverage in Puerto Rico compared to the mainland. In Puerto Rico, of those who knew of the demonstration, the share of eligible retirees with employer- sponsored health coverage (14 percent) was about half that on the mainland (27 percent). In addition, before September 2001, no Medicare+Choice plan was available in Puerto Rico. By contrast, in mainland sites where Medicare+Choice plans were available, their attractive cost sharing and other benefits discouraged retirees from enrolling in demonstration plans. Other factors associated with Puerto Rico’s high enrollment and cited by enrollees there included the demonstration plan’s better benefits package—especially prescription drug coverage—compared to many retirees’ alternatives, the demonstration plan’s broader choice of doctors, and the plan’s reputation for quality of care. The premiums charged by the demonstration plans varied widely, reflecting differences in how they dealt with the concern that the demonstration would attract a disproportionate number of sick, high-cost enrollees. To address these concerns, plans generally followed one of two strategies. Most plans charged higher premiums than those they charged to their civilian FEHBP enrollees—a strategy that could have provided a financial cushion and possibly discouraged enrollment. A small number of plans set premiums at or near their premiums for the civilian FEHBP with the aim of attracting a mix of enrollees who would not be disproportionately sick. Plans’ underlying concern that they would attract a sicker population was not borne out. In the first year of the demonstration, for example, on average health care for demonstration retirees was 50 percent less expensive per enrollee than the care for their civilian FEHBP counterparts. Demonstration plans charged widely varying premiums to enrollees, with the most popular plans offering some of the lowest premiums. In 2000, national plans’ monthly premiums for individual coverage ranged from $65 for Blue Cross Blue Shield to $208 for the Alliance Health Plans. 
Among local plans—most of which were managed care—monthly premiums for individual coverage ranged from $43 for NYLCare Health Plans of the Southwest to $280 for Aetna U.S. Healthcare. Not surprisingly, few enrollees selected the more expensive plans. The two most popular plans were Blue Cross Blue Shield and Triple-S; the latter offered a POS in Puerto Rico. Both plans had relatively low monthly premiums—the Triple-S premium charged to individuals was $54 in the demonstration’s first year. Average premiums for national plans were about $20 higher than for local plans, which were largely managed care plans. (See table 2.) Some plans in the demonstration were well known in their market areas, while others—especially those open only to government employees— likely had much lower name recognition. Before the demonstration started, OPM officials told us that they expected beneficiaries to be unfamiliar with many of the plans included in the demonstration. These officials said that beneficiaries were likely to have only experience with or knowledge of Blue Cross Blue Shield and, possibly, some local HMOs. The success of Blue Cross Blue Shield relative to other national plans in attracting enrollees appears to support their view, as does Triple-S’s success in Puerto Rico, where it is one of the island’s largest insurers. In 2000, Blue Cross Blue Shield was the most popular plan in the demonstration, with 42 percent of demonstration-wide enrollment and 68 percent of enrollment on the mainland. Among national plans, the GEHA Benefit Plan (known as GEHA) was a distant second with 4 percent of enrollment. The other five national plans together captured less than 1 percent of all demonstration enrollment. Among local plans, Triple-S was most successful, capturing 96 percent of enrollment in Puerto Rico and 38 percent of enrollment demonstration-wide. The other local plans, taken together, accounted for about 14 percent of demonstration-wide enrollment. Several factors contributed to plans’ concern that they would attract sicker—and therefore more costly—enrollees in the demonstration. Plans did not have the information that they usually use to set premiums— claims history for fee-for-service plans and premiums charged to comparable private sector groups for managed care plans. Moreover, according to officials, some plans were reluctant to assume that demonstration enrollees would be similar to their counterparts in the civilian FEHBP. A representative from one of the large plans noted that the small size of the demonstration was also a concern. The number of people eligible for the demonstration (approaching 140,000, when the demonstration was expanded in 2001) was quite small compared to the number of people in the civilian program (8.5 million in 2001). If only a small number of people enrolled in a plan, one costly case could result in losses, because claims could exceed premiums. In response to the concern that the demonstration might attract a disproportionate number of sick enrollees, plans developed two different strategies for setting premiums. Plans in one group, including Blue Cross Blue Shield and GEHA, kept their demonstration premiums at or near those they charged in the civilian FEHBP. Representatives of one plan explained that it could have priced high, but they believed that would have resulted in low enrollment and might have attracted a disproportionate number of sick—and therefore costly—enrollees. 
Instead, by keeping their premium at the same level as in the civilian program, these plan officials hoped to make their plan attractive to those who were in good health as well as to those who were not. Such a balanced mix of enrollees would increase the likelihood that a plan’s revenues would exceed its costs. By contrast, some plans charged higher premiums in the demonstration—in some cases, 100 percent higher—than in the civilian FEHBP. Setting higher premiums might provide plans with a financial cushion to deal with potential high-cost enrollees. While higher premiums might have discouraged enrollment and reduced plans’ exposure to high-cost patients, this strategy carried the risk that those beneficiaries willing to pay very high premiums might be sick, high-cost patients. More than four-fifths of plans chose the second strategy, charging higher premiums in the demonstration than in the civilian FEHBP. In 2000, only two plans—both local plans—charged enrollees less in the demonstration than in the civilian program for individual, standard option policies; these represented about 6 percent of all plans. By contrast, three plans—about 9 percent of all plans—set premiums at least twice as high as premiums in the civilian FEHBP. (See fig. 4.) The demonstration did not attract sicker, more costly enrollees—instead, military retirees who enrolled were less sick on average than eligible nonenrollees. We found that, as scored by a standard method to assess patients’ health, older retirees who enrolled in the demonstration were an estimated 13 percent less sick than eligible nonenrollees. At each site enrollees were, on average, less sick than nonenrollees. In the GAO-DOD-OPM survey, fewer enrollees on the U.S. mainland (33 percent) reported that they or their spouses were in fair or poor health compared to nonenrollees (40 percent). Retirees who enrolled in demonstration plans had scores that indicated they were, on average, 19 percent less sick than civilian FEHBP enrollees in these plans. Plans’ divergent strategies for setting premiums resulted in similar mixes of enrollees. Blue Cross Blue Shield and GEHA, both of which did not increase premiums, attracted about the same proportion of individuals in poor health as plans on the mainland that raised premiums. During 2000, the first year of the demonstration, enrolled retirees’ health care was 28 percent less expensive—as measured by Medicare claims—than that of eligible nonenrolled retirees and one-third less expensive than that of their FEHBP counterparts. (See table 3.) The demonstration enrollees’ average age (71.8 years) was lower than eligible nonenrollees’ average age (73.1 years), which in turn was lower than the average age of civilian FEHBP retirees (75.2 years) in the demonstration areas. OPM has obtained from the three largest plans claims information that includes the cost of drugs and other services not covered by Medicare. These claims show a similar pattern: Demonstration enrollees were considerably less expensive than enrollees in the civilian FEHBP. Although demonstration enrollees’ costs were lower than those of their FEHBP counterparts in the first year, demonstration premiums generally remained higher than premiums for the civilian FEHBP. In 2001, the second year of the demonstration, only a limited portion of the first year’s claims was available when OPM and the plans negotiated the premiums, so the lower demonstration costs had no effect on setting 2001 premiums.
Demonstration premiums in 2001 increased more rapidly than the civilian premium charged by the same plans: a 30 percent average increase in the demonstration for individual policies compared to a 9 percent increase for civilians in the same plans. In 2002, the third year, when both the plans and OPM were able to examine a complete set of claims for the first year before setting premiums, the pattern was reversed: On average, the demonstration premiums for individual policies fell more than 2 percent while the civilian premiums rose by 13 percent. However, on average, 2002 premiums remained higher in the demonstration than in the civilian FEHBP. Blue Cross Blue Shield was an exception, charging a higher monthly premium for an individual policy to civilian enrollees ($89) in 2000 than to demonstration enrollees ($74). Because the demonstration was open to only a small number of military retirees—and only a small fraction of those enrolled—the demonstration had little impact on DOD, nonenrollees, and MTFs. However, the impact on enrolled retirees was greater. If the FEHBP option were made permanent, the impact on DOD, nonenrollees, and MTFs would depend on the number of enrollees. Because of its small size, the demonstration had little impact on DOD’s budget. About 140,000 of the more than 8 million people served by the DOD health system were eligible for the demonstration in its last 2 years. Enrollment at its highest was 7,521—about 5.5 percent of eligible beneficiaries. DOD’s expenditures on enrollees’ premiums that year totaled about $17 million—roughly 0.1 percent of its total health care budget. Under the demonstration, DOD was responsible for about 71 percent of each individual’s premium, whereas under TFL it is responsible for the entire cost of roughly similar Medicare supplemental coverage. Probably because of its small size, the demonstration had no observable impact on either the ability of MTFs to assist in the training and readiness of military health care personnel or on nonenrollees’ access to MTF care. Officials at the four MTFs in demonstration sites told us that they had seen no impact from the demonstration on either MTFs or nonenrollees’ access to care. Since enrollees were typically attracted to the demonstration by both its benefits and its relatively low costs, the impact on those who enrolled was necessarily substantial. In the first 2 years, the demonstration provided enrollees with better supplemental coverage, which was less costly or had better benefits, or both. In the third year of the demonstration, after TFL and the retirees’ pharmacy benefit were introduced and enrollment declined, the number of beneficiaries affected by the demonstration decreased. TFL entitled military retirees to low-cost, comprehensive coverage, making the more expensive FEHBP unattractive. The average enrollee premium for an individual policy in the demonstration’s third year was $109 per month. In comparison, to obtain similar coverage under the combined TFL-pharmacy benefit, the only requirement was to pay the monthly Medicare part B premium of $54. Further, pharmacy out-of-pocket costs under TFL are less than those in the most popular FEHBP plan. The impact on DOD of a permanent FEHBP option for military retirees nationwide would depend on the number of retirees who enrolled.
For example, if the same percentage of eligible retirees who enrolled in 2002—after TFL and the retirees’ pharmacy benefit were introduced—enrolled in FEHBP, enrollment would be roughly 20,000 of the more than 1.5 million military retirees. As retirees’ experience with TFL grows, their interest in an FEHBP alternative may decline further. As long as enrollment in a permanent FEHBP option remains small, the impact on DOD’s ability to provide care at MTFs and on MTF readiness would also likely be small. We provided DOD and OPM with the opportunity to comment on a draft of this report. In its written comments DOD stated that, overall, it concurred with our findings. However, DOD differed with our description of the demonstration’s impact on DOD’s budget as small. In contrast, DOD described these costs of the 3-year demonstration—$28 million for FEHBP premiums and $11 million for administration—as substantial. While we do not disagree with these dollar-cost figures and have included them in this report, we consider them to be small when compared to DOD’s health care budget, which ranged from about $18 billion in fiscal year 2000 to about $24 billion in fiscal year 2002. For example, as we report, DOD’s premium costs for the demonstration during 2001, when enrollment peaked, were about $17 million—less than 0.1 percent of DOD’s health care budget. Although DOD’s cost per enrollee in the demonstration was substantial, the number of enrollees was small, resulting in the demonstration’s total cost to DOD being small. DOD’s comments appear in appendix VI. DOD also provided technical comments, which we incorporated as appropriate. OPM declined to comment. We are sending copies of this report to the Secretary of Defense and the Director of the Office of Personnel Management. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7101. Other GAO contacts and staff acknowledgments are listed in appendix VII. To determine why those eligible for the Federal Employees Health Benefits Program (FEHBP) demonstration enrolled or did not enroll in an FEHBP plan, we co-sponsored with the Department of Defense (DOD) and the Office of Personnel Management (OPM) a mail survey of eligible beneficiaries—military retirees and others eligible to participate in the demonstration. The survey was fielded during the first year of the demonstration, from May to August 2000, and was sent to a sample of eligible beneficiaries, both those who enrolled and those who did not enroll, at each of the eight demonstration sites operating at that time. The survey was designed to be statistically representative of eligible beneficiaries, enrollees, nonenrollees, and sites, and to facilitate valid comparisons between enrollees and nonenrollees. In constructing the questionnaire, we developed questions pertaining to individuals’ previous use of health care services, access to and satisfaction with care, health status, knowledge of the demonstration, reasons for enrolling or not enrolling in the demonstration, and other topics. Because eligible beneficiaries could choose FEHBP plans that also covered their family members, we included questions about spouses and dependent children.
DOD and OPM officials and staff members from Westat, the DOD subcontractor with responsibility for administering the survey, provided input on the questionnaire’s content and format. After pretesting the questionnaire with a group of military retirees and their family members, the final questionnaire included the topic areas shown in table 4. We also produced a Spanish version of the questionnaire that was mailed to beneficiaries living in Puerto Rico. Working with DOD, OPM, and Westat, we defined the survey population as all persons living in the initial eight demonstration sites who were eligible to enroll in the demonstration. The population included military retirees, their spouses and dependents, and other eligible beneficiaries, such as unremarried former spouses, designated by law. We drew the survey sample from a database provided by DOD that listed all persons eligible for the demonstration as of April 1999. We stratified the sample by the eight demonstration sites and by enrollment status—enrollees and nonenrollees. Specifically, we used a stratified two-stage design in which households were selected within each of the 16 strata and one eligible person was selected from each household. For the enrollee sample, we selected all enrollees who were the sole enrollee in their households. In households with multiple enrollees, we randomly selected one enrollee to participate. For the nonenrollee sample, first we randomly selected a sample of households from all nonenrollee households and then randomly selected a single person from each of those households. We used a modified equal allocation approach, increasing the size of the nonenrollee sample in steps, bringing it successively closer to the sample size that would be obtained through proportional allocation. This modified approach produced the best balance in statistical terms between the gain from the equal allocation approach and the gain from the proportional allocation approach. If both an enrollee and a nonenrollee were selected from the same household, the nonenrollee was dropped from the sample and a different nonenrollee was selected. We adjusted the nonenrollee sample size to take account of expected nonresponse. Our final sample included 1,676 out of 2,507 enrollees and 3,971 out of 66,335 nonenrollees. Starting with an overall sample of 5,647 beneficiaries, we obtained usable questionnaires from 4,787 people—an overall response rate of 85 percent. (See table 5.) Response rates varied across sites, from 76 percent to 85 percent among nonenrollees, and from 92 percent to 98 percent among enrollees. (See table 6.) At each site, enrollees responded at higher rates than nonenrollees. Each of the 16 strata was weighted separately to reflect its population. The enrollee strata were given smaller sampling weights, reflecting enrollees’ higher response rates and the fact that they were sampled at a higher rate than nonenrollees. The weights were also adjusted to reflect the variation in response rates across sites. Finally, the sampling weights were further adjusted to reflect differences in response rates between male and female participants in 8 strata. In this appendix, we describe the data, methods, and models used to (1) analyze the factors explaining how beneficiaries knew about the demonstration and why they enrolled in it, (2) assess the health of beneficiaries and civilian FEHBP enrollees, and (3) obtain the premiums of Medigap insurance in the demonstration areas.
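A minimal Python sketch of the stratum weighting just described follows: the base weight is the stratum population divided by the number of respondents, which builds in a nonresponse adjustment. The counts shown are hypothetical and are not the actual survey figures.

# Minimal sketch of stratum weighting with a built-in nonresponse adjustment,
# as described above. Counts are hypothetical, not the actual survey figures.
strata = {
    # stratum: (eligible population, completed questionnaires)
    ("Dover", "enrollee"): (120, 110),
    ("Dover", "nonenrollee"): (9000, 380),
}

weights = {}
for stratum, (population, respondents) in strata.items():
    # Each respondent represents population / respondents eligible beneficiaries,
    # so enrollee strata (sampled at a higher rate) receive smaller weights.
    weights[stratum] = population / respondents

for stratum, w in weights.items():
    print(stratum, round(w, 1))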
Our approach to analyzing eligible beneficiaries’ behavior involved two steps: first, analyzing the factors related to whether eligible beneficiaries knew about the demonstration, and second, analyzing the factors related to whether those who knew about the demonstration decided to enroll. Knowledge about the demonstration. To account for differences in beneficiaries’ knowledge about the demonstration, we used individual-level variables as well as variables corresponding to individual sites. These individual-level categories were demographic and economic variables, such as age and income; health status; other sources of health coverage, such as having employer-sponsored health insurance; and military-related factors. The inclusion of site variables allowed the model to take account of differences across the different sites in beneficiaries’ knowledge about the demonstration. We analyzed the extent to which these variables influenced beneficiaries’ knowledge about the demonstration using a logistic regression—a standard statistical method of analyzing an either/or (binary) variable. This method yields an estimate of each factor’s effect, controlling for the effects of all other factors in the regression. In our analysis, either a retiree knew about the demonstration or did not. The logistic regression predicts the probability that a beneficiary knew about the demonstration, given information about the person’s traits—for example, over age 75, had employer-sponsored health insurance, and so on. The coefficient on each variable measures its effect on beneficiaries’ knowledge. These coefficients pertain to the entire demonstration population, not just those beneficiaries in our survey sample. To make the estimates generalizable to the entire eligible population, we applied sample weights to all observations. In view of the large difference in enrollment between the mainland sites and Puerto Rico, we tested whether the same set of coefficient estimates was appropriate for the mainland sites and the Puerto Rico site. Our results showed that the coefficient estimates for the mainland and for Puerto Rico were not significantly different (at the 5 percent level), so it was appropriate to estimate a single logistic regression model for all sites. Table 7 shows for each variable its estimated effect on knowledge, as measured by the variable’s coefficient and odds ratio. The odds ratio expresses how much more likely—or less likely—it is that a person with a particular characteristic knows about the demonstration, compared to a person without that characteristic. The odds ratio is based on the coefficient, which indicates each explanatory variable’s estimated effect on the dependent variable, holding other variables constant. For the mainland sites, retirees were more likely to know about the demonstration if they were male, were married, were officers, were covered by employer-sponsored health insurance, lived less than an hour from a military treatment facility (MTF), or belonged to military retiree organizations. Retirees were less likely to know about the demonstration if they were African American; were older than age 75; or lived in Camp Pendleton, California, Dallas, Texas, or Fort Knox, Kentucky. Decision to enroll in the demonstration. To account for a retiree’s decision to enroll or not to enroll, we considered four categories of individual-level variables similar to those in the “knowledge of the demonstration” regressions, and a site-level variable for Puerto Rico.
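The following minimal sketch, using the statsmodels library on made-up data, illustrates the kind of weighted logistic regression and odds-ratio calculation described above for the knowledge model; the variables, weights, and coefficients are illustrative assumptions, not the estimates reported in table 7.

# Illustrative weighted logistic regression of "knew about the demonstration"
# on a few binary traits. Data are made up; this is not the table 7 model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "officer": rng.integers(0, 2, n),
    "over_75": rng.integers(0, 2, n),
    "near_mtf": rng.integers(0, 2, n),
})
# Generate a synthetic binary outcome from assumed coefficients.
logit_true = -0.5 + 0.8 * df["officer"] - 0.6 * df["over_75"] + 0.4 * df["near_mtf"]
df["knew"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))
weights = rng.integers(5, 50, n)  # stand-in survey sampling weights

X = sm.add_constant(df[["officer", "over_75", "near_mtf"]])
model = sm.GLM(df["knew"], X, family=sm.families.Binomial(), freq_weights=weights)
result = model.fit()

# The odds ratio for each trait is exp(coefficient), holding other traits constant.
print(np.exp(result.params))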
We also introduced a set of health insurance factors pertaining to the area in which the retiree lived—the premium for a Medigap policy and the proportion of Medicare beneficiaries in a retiree’s county of residence enrolled in a Medicare+Choice plan. In our logistic regression analysis of enrollment, we included only those people who knew about the demonstration. Despite the large enrollment differences between the mainland sites and Puerto Rico, our statistical tests determined that the mainland sites and the Puerto Rico site could be combined into a single logistic regression of enrollment. We included a variable for persons in the Puerto Rico site. (See table 8.) We found that retirees were less likely to enroll in the demonstration if they were African American, enrolled in Medicare+Choice plans, had employer-sponsored health insurance, lived in areas with a high proportion of Medicare beneficiaries enrolled in a Medicare+Choice plan, lived in areas where Medigap was more expensive, or lived less than an hour from an MTF. Retirees who had higher incomes, were officers, were members of a military retiree organization, were enrolled in Medicare part B, lived in Puerto Rico, or were covered by a Medigap policy were more likely to enroll. We estimated what the demonstration’s enrollment rate would have been in 2000 if everyone eligible for the demonstration had known about it. For the 54 percent of retirees who did not know about the demonstration, we calculated their individual probabilities of enrollment, using their characteristics (such as age) and the coefficient estimates from the enrollment regression. Aggregating these individual estimated enrollment probabilities, we found that if all eligible retirees had known about the demonstration, enrollment in 2000 would have been 7.2 percent of eligible beneficiaries, compared with actual enrollment of 3.6 percent. To measure the health status of retired enrollees and nonenrollees, as well as of civilian FEHBP enrollees, we calculated scores for individuals using the Principal Inpatient Diagnostic Cost Group (PIP-DCG) method. This method—used by the Centers for Medicare & Medicaid Services (CMS) in adjusting Medicare+Choice payment rates—yielded a proxy for the healthiness of military and civilian retirees as of 1999, the year before the demonstration. The method relates individuals’ diagnoses to their annual Medicare expenditures. For example, a PIP-DCG score of 1.20 indicates that the individual is 20 percent more costly than the average Medicare beneficiary. In our analysis, we used Medicare claims and other administrative data from 1999 to calculate PIP-DCG scores for eligible military retirees and their counterparts in the civilian FEHBP in the demonstration sites. Using Medicare part A claims for 1999, we calculated PIP-DCG scores for Medicare beneficiaries who were eligible for the demonstration. We used a DOD database to identify enrollees as well as those who were eligible for the demonstration but did not enroll. We also calculated PIP-DCG scores based on 1999 Medicare claims for each Medicare-eligible person enrolled in the civilian FEHBP. We obtained from OPM data on enrollees in the civilian FEHBP and on the plans in which they were enrolled. We restricted our analysis to those Medicare- eligible civilian FEHBP enrollees who lived in a demonstration site. Results of PIP-DCG calculations. We compared the PIP-DCG scores of demonstration enrollees with those of eligible retirees who did not enroll. 
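A minimal sketch of the aggregation step described earlier in this appendix follows: predicted enrollment probabilities for retirees who did not know about the demonstration are combined using survey weights. The probabilities and weights are placeholders, not the actual regression output.

# Sketch of the counterfactual aggregation described above. Predicted enrollment
# probabilities for retirees who did not know about the demonstration are
# averaged with survey weights. Numbers are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_unaware = 500
predicted_prob = rng.uniform(0.02, 0.12, n_unaware)  # stand-in for regression predictions
weight = rng.integers(5, 50, n_unaware)               # stand-in survey sampling weights

# Weighted mean predicted enrollment among those who did not know about it
counterfactual_rate = np.average(predicted_prob, weights=weight)
print(f"estimated enrollment among the unaware, had they known: {counterfactual_rate:.1%}")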
In every site, the average PIP-DCG score was significantly less for demonstration enrollees than for those who did not enroll. We also compared the PIP-DCG scores of those enrolled in the demonstration with those enrolled in the civilian FEHBP: For every site, these scores were significantly less for demonstration enrollees than for their counterparts in the civilian FEHBP. (See table 9.) We compiled data from Quotesmith Inc. to obtain a premium price for Medigap plan F in each of the counties in the eight demonstration sites. We collected the lowest premium quote for a Medigap plan F policy for each sex at 5-year intervals: ages 65, 70, 75, 80, 85, and over 89. A person age 65 to 69 was assigned the 65-year-old’s premium, a person age 70 to 74 was assigned the 70-year-old’s premium, and so on. Using these data, we assigned a Medigap plan F premium to each survey respondent age 65 and over, according to the person’s age, sex, and location. Tables 10, 11, and 12 show enrollment rates by site and for the U.S. mainland sites as a whole for each year of the demonstration, 2000 through 2002. The program for informing and educating eligible beneficiaries about the demonstration was modeled on OPM’s approach to informing eligible civilian beneficiaries about FEHBP. Elements of OPM’s approach include making available a comparison of FEHBP plans and holding health fairs sponsored by individual federal agencies. DOD expanded upon the OPM approach—for example, by sending postcards to inform eligible beneficiaries about the demonstration because they, unlike civilian federal employees and retirees, were unlikely to have any prior knowledge of FEHBP. In addition, DOD established a bilingual toll-free number. During the first year’s enrollment period, DOD adjusted its information and education effort, for example, by changing the education format from health fairs to town meetings designed specifically for demonstration beneficiaries. In the second year of the demonstration, DOD continued with its revised approach. In the third year, after TRICARE For Life (TFL) began, DOD significantly reduced its information program but continued to mail information to all eligible beneficiaries. It limited town meetings to Puerto Rico, the only site where enrollment remained significant during the third year. DOD sent a series of mailings to all eligible beneficiaries. These included a postcard announcing the demonstration, mailed in August 1999, that alerted beneficiaries to the demonstration—the returned postcards allowed DOD to identify incorrect mailing addresses and to target follow-up mailings to beneficiaries with correct addresses; an OPM-produced booklet, The 2000 Guide to Federal Employees Health Benefits Plans Participating in the DOD/FEHBP Demonstration Project, received by all eligible retirees from November 3 through 5, 1999, that contained information on participating FEHBP plans, including coverage and consumer satisfaction; a trifold brochure describing the demonstration, which was mailed on September 1 and 4, 1999; and a list of Frequently Asked Questions (FAQ) explaining how Medicare and FEHBP work together. At the time of our survey, after the first year’s information campaign, over half of eligible beneficiaries were unaware of the demonstration. Among those who knew about it, more recalled receiving the postcard than recalled receiving any of the later materials—although the FAQ was cited more often as being useful. (See table 13.)
Initially, the health fairs that DOD sponsored for military bases’ civilian employees were its main effort—other than the mailings—to provide information about the demonstration to eligible beneficiaries. At these health fairs, plans set up tables at which their representatives distributed brochures and answered questions. At one site, the military base refused to allow the demonstration representatives to participate in its health fair because of concern about an influx of large numbers of demonstration beneficiaries. At another site, the turnout exceeded the capacity of the plan representatives to deal with questions, and DOD officials told us that they accommodated more people by giving another presentation at a different facility or at the same facility 1 month later. A DOD official discovered, however, that it was difficult to convey information about the demonstration to large numbers of individuals at the health fairs. DOD officials determined that the health fairs were not working well, so by January 2000, DOD replaced them with 2-hour briefings, which officials called town meetings. In these meetings, a DOD representative explained the demonstration during the first hour and then answered questions from the audience. A DOD official told us that these town meetings were more effective than the health fairs. For the first year of the demonstration, just under 6 percent of those eligible attended either a health fair or a town meeting. The number of eligible beneficiaries who reported attending these meetings varied considerably by site—from about 3 percent in New Orleans and Camp Pendleton to 4 percent in Fort Knox and 18 percent in Humboldt County. Roughly 11 percent of beneficiaries reported attending in Puerto Rico, the site with the highest enrollment. DOD also established a call center and a Web site to inform eligible beneficiaries about the demonstration. The call center, which was staffed by Spanish and English speakers, answered questions and sent out printed materials on request. In the GAO-DOD-OPM survey, about 18 percent of those who knew about the demonstration reported calling the center’s toll-free number. The proportion that called the toll-free number was much higher among subsequent enrollees (77 percent) than among nonenrollees who knew about the demonstration (13 percent). The Web site was another source of information about the demonstration. Although less than half of eligible beneficiaries knew about the demonstration, most of those who did know said they obtained their information from DOD’s mailings. Other important sources of information included military retiree and military family organizations and FEHBP plans. (See table 14.) Nearly all enrollees (93 percent) and more than half of nonenrollees who said they considered enrolling in an FEHBP health plan (55 percent) reported that they had enough information about specific plans to make an informed decision about enrolling in one of them. More than three-fifths of these beneficiaries who enrolled or considered enrolling in an FEHBP plan said they used The 2000 Guide to FEHBP Plans Participating in the DOD/FEHBP Demonstration Project as a source of information. Other major sources of information were the plans’ brochures and DOD’s health fairs and town meetings. More than 18 percent of those who considered joining did not obtain information about any specific plan. (See table 15.)
Table 16 shows reasons cited by enrollees for enrolling in a DOD-FEHBP health plan in 2000, and table 17 shows reasons cited by nonenrollees for not enrolling. Major contributors to this work were Michael Kendix, Robin Burke, Jessica Farb, Martha Kelly, Dae Park, and Michael Rose.
Prior to 2001, military retirees who turned age 65 and became eligible for Medicare lost most of their Department of Defense (DOD) health benefits. The DOD-Federal Employees Health Benefits Program (FEHBP) demonstration was one of several demonstrations established to examine alternatives for addressing retirees' lack of Medicare supplemental coverage. The demonstration was mandated by the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 (NDAA 1999), which also required GAO to evaluate the demonstration. GAO assessed enrollment in the demonstration and the premiums set by demonstration plans. To do this, GAO, in collaboration with the Office of Personnel Management (OPM) and DOD, conducted a survey of enrollees and eligible nonenrollees. GAO also examined DOD enrollment data, Medicare and OPM claims data, and OPM premiums data. Enrollment in the DOD-FEHBP demonstration was low, peaking at 5.5 percent of eligible beneficiaries in 2001 (7,521 enrollees) and then falling to 3.2 percent in 2002, after the introduction of comprehensive health coverage for all Medicare-eligible military retirees. Enrollment was considerably greater in Puerto Rico, where it reached 30 percent in 2002. Most retirees who knew about the demonstration and did not enroll said they were satisfied with their current coverage, which had better benefits and lower costs than the coverage they could obtain from FEHBP. Some of these retirees cited, for example, not being able to continue getting prescriptions filled at military treatment facilities if they enrolled in the demonstration. For those who enrolled, the factors that encouraged them to do so included the view that FEHBP offered retirees better benefits, particularly prescription drugs, than were available from their current coverage, as well as the lack of any existing coverage. Monthly premiums charged to enrollees for individual policies in the demonstration varied widely--from $65 to $208 in 2000--with those plans that had lower premiums and were better known to eligible beneficiaries, capturing the most enrollees. In setting premiums initially, plans had little information about the health and probable cost of care for eligible beneficiaries. Demonstration enrollees proved to have lower average health care costs than either their counterparts in the civilian FEHBP or those eligible for the demonstration who did not enroll. Plans enrolled similar proportions of beneficiaries in poor health, regardless of whether they charged higher, lower, or the same premiums for the demonstration as for the civilian FEHBP. In commenting on a draft of the report, DOD concurred with the overall findings but disagreed with the description of the demonstration's impact on DOD's budget as small. As noted in the draft report, DOD's costs for the demonstration relative to its total health care budget were less than 0.1 percent of that budget. OPM declined to comment.
You are an expert at summarizing long articles. Proceed to summarize the following text: The United States and many of its trading partners have long used laws known as “trade remedies” to mitigate the adverse impact of certain trade practices on domestic industries and workers, notably dumping (i.e., sales at below fair market value), and foreign government subsidies that lower producers’ costs or increase their revenues. In both situations, U.S. law provides that a duty intended to counter these advantages be imposed on imports. Such duties are known as AD/CV duties. The process involves the filing of a petition for relief by domestic producer interests, or self-initiation by the U.S. Department of Commerce (Commerce), followed by two separate investigations: one by Commerce, which determines if dumping or subsidies are occurring, and the other by the ITC, which determines whether a domestic U.S. industry is materially injured by such unfairly traded imports. If both agencies make affirmative determinations, Commerce issues an order to CBP directing it to collect the additional duties on imports. These are known as AD/CV duty orders. No later than 5 years after publication of these orders, Commerce and the ITC conduct a “sunset review” to determine whether revoking the order would likely lead to the continuation or recurrence of dumping and/or subsidization and material injury. Congress enacted CDSOA on October 28, 2000, as part of the Agriculture, Rural Development, Food and Drug Administration and Related Agencies Appropriations Act to strengthen the remedial nature of U.S. trade laws, restore conditions of fair trade, and assist domestic producers. Congress noted in its accompanying findings that “continued dumping and subsidization . . . after the issuance of antidumping orders or findings or countervailing duty orders can frustrate the remedial purpose” of U.S. trade laws, potentially causing domestic producers to be reluctant to reinvest or rehire and damaging their ability to maintain pension and health care benefits. Consequently, Congress enacted the CDSOA, reasoning that “U.S. trade laws should be strengthened to see that the remedial purpose of those laws is achieved.” CDSOA instructs Customs to distribute AD/CV duties directly to affected domestic producers. Previously, CBP transferred such duties to the Treasury for general government use. Two agencies are involved in CDSOA implementation. The law gives each agency—ITC and CBP—specific responsibilities for implementing CDSOA. The ITC is charged with developing a list of producers who are potentially eligible to receive CDSOA distributions and providing the names of these producers to CBP. CBP has overall responsibility for annually distributing duties collected to eligible affected domestic producers. CDSOA also makes CBP responsible for several related actions. Specifically, it charges CBP with establishing procedures for the distribution of payments and requires that CBP publish in the Federal Register a notice of intent to distribute payments and, based on information provided by the ITC, a list of affected domestic producers potentially eligible for the distribution. Both agencies had some start-up challenges and have made improvements in response to reports by their Inspectors General (IG). In September 2004, ITC’s IG found that the ITC had effectively implemented its part of the act but made several suggestions for enhancing the agency’s CDSOA efforts.
For example, it suggested that the ITC better document its policies and procedures for identifying and reporting eligible producers to CBP and improve its communication with companies regarding eligibility. In response, the ITC implemented these suggestions to, among other things, formalize and strengthen its procedures for identifying eligible producers, developing a list of potentially eligible producers, and transmitting the list to CBP. For example, the ITC updated its desk procedures, clarified certain responsibilities to support the staff responsible for maintaining the ITC list, and added additional guidance on CDSOA requirements to its website. In June 2003, the Treasury’s IG issued a report finding several major deficiencies in CBP’s implementation of CDSOA and made several recommendations. The Treasury’s IG found that CBP was not in compliance with the law because it did not properly establish special accounts for depositing and disbursing CDSOA payments, did not pay claimants within the required time frame, and did not institute standard operating procedures or adequate controls for managing the program. Specifically, Treasury’s IG noted that the absence of proper accounts, accurate financial data, and adequate internal controls had resulted in “overpayments of at least $25 million, and likely more.” Treasury’s IG also emphasized that several other issues warranted attention, including no routine verification of claims and significant amounts of uncollected AD/CV duties. In response, CBP consolidated the processing of claims and payments by establishing a CDSOA team in Indianapolis, Indiana; instituted procedures for processing claims and disbursements, and for conducting claim verification audits; and started proceedings to secure reimbursements from the companies that had received overpayments. Despite these efforts, CBP still faces issues raised by the Treasury IG, such as the issue of uncollected duties. The United States has an obligation that its trade remedy actions conform to its legal commitments as part of the WTO, an international body based in Geneva, Switzerland. The WTO agreements set forth the agreed-upon rules for international trade. The WTO provides a mechanism for settling disputes between countries, and serves as a forum for conducting trade negotiations among its 148 member nations and separate customs territories. WTO trade remedy rules involve both procedural and substantive requirements, and a number of U.S. trade remedies have been challenged at the WTO. WTO members that believe other members are not complying with their WTO obligations can file a dispute settlement case. The resulting decisions by a dispute settlement panel, once adopted, are binding on members who are parties to the dispute, and WTO rules create an expectation of compliance. Under WTO rules and U.S. law, however, compliance is not automatic. WTO dispute settlement panels cannot order the United States to change its law. Alternatively, the United States may choose not to comply with WTO agreements and instead may choose to offer injured members mutually-agreed upon trade compensation or face retaliatory suspension of trade concessions by the complainant members. A new round of global trade talks aimed at liberalizing trade barriers is now underway and includes discussions of possible clarifications and improvements to the WTO rules on antidumping and on subsidies and countervailing measures. U.S. 
trade with members of the WTO totaled $2.1 trillion in 2004, giving the United States a considerable stake in these WTO negotiations, which aim to liberalize trade in agriculture, industrial goods, and services. Three key features of CDSOA guide and affect agency implementation. These features (1) determine company eligibility to receive CDSOA disbursements, (2) shape the allocation of CDSOA disbursements among companies based on their claimed expenditures, and (3) specify milestones that agencies must achieve when implementing the act, including a tight time frame for disbursing funds. CDSOA establishes criteria that restrict eligibility for CDSOA disbursements. As guidance for agency implementation, these criteria raise issues because (1) two-thirds of the orders in effect predate CDSOA, (2) ITC investigative procedures were not designed to, and do not result in, collecting information on support of petitions from all industry participants, and (3) other factors further limit company eligibility. Some companies deemed ineligible regard these criteria as unfair, and several have initiated legal action to secure eligibility. The law restricts eligibility to "affected domestic producers"—namely, any "manufacturer, producer, farmer, rancher, or worker representative (including associations of these persons)" that (1) petitioned the ITC or supported a petition for relief to the ITC that resulted in an AD/CV duty order and (2) remains in operation. The law also requires the ITC to prepare a list of potentially eligible producers for CBP, which publishes it in advance of each annual distribution. The law only applies to orders in effect on or after January 1, 1999. CDSOA further specifies that support must be communicated to the ITC through a letter from the company or a response to an ITC questionnaire. Successor companies or members of an association may also be eligible for CDSOA distributions. Conversely, CDSOA deems as ineligible those companies that (1) opposed a petition, (2) ceased production of a product covered by an order, or (3) were acquired by companies that opposed a petition. These eligibility criteria create special problems when older AD/CV orders are involved. Our analysis of ITC data reveals that roughly two-thirds (234 out of 351) of the AD/CV duty orders in effect as of April 15, 2005, precede CDSOA. The application of CDSOA to orders that predate the law's enactment raises concern because, for AD/CV relief petitions that were investigated before CDSOA was enacted, producers had no way of knowing that their lack of expression of support for the petition would later adversely affect their ability to receive CDSOA disbursements. Moreover, firms that began operations or entered the U.S. market after the ITC's original investigation are not eligible to receive CDSOA distributions. For petitions that have been investigated since CDSOA was enacted, producers would likely be aware of this linkage. The ITC and CBP told us that in a recent case involving shrimp, industry associations reached out broadly to ensure producers were aware of the need to communicate support to the ITC. Similarly, officials from a law firm that works with importers told us they were aware of such industry association efforts in cases involving live swine. However, in examining seven industries, we spoke to several ineligible companies that were frustrated because they had not expressed support during, or in some cases had not even known about, AD/CV investigations conducted before CDSOA's adoption.
The ITC relies on company data that is sometimes incomplete, and this further limits eligibility. CDSOA’s criteria link companies’ eligibility to a process the ITC has long followed in investigating AD/CV petitions by U.S. domestic industry interests for relief from unfair imports. However, the ITC’s investigative process does not result in collecting information from all industry participants, because it is intended for purposes other than CDSOA. The ITC’s primary role in AD/CV investigations is to define the scope of the industry that is affected by competition from imported goods and to determine whether the industry has suffered or been threatened with material injury as a result of dumped or subsidized imports. The ITC collects information from U.S. producers, primarily by surveying them. ITC officials told us that they generally strive to cover 100 percent of industry production in their surveys and usually receive responses from producers accounting for a substantial share of production. In situations with a relatively small number of producers, ITC officials said they often succeed in getting coverage of 90 percent of the domestic industry. However, in certain circumstances, such as with agricultural products, which have a large number of small producers, ITC surveys a sample of U.S. producers instead of the entire industry. In these situations, it is not uncommon for the share of production reflected in the ITC’s questionnaire responses to account for 10 percent or less of production. The following four factors additionally define the list of eligible producers: The questionnaires that the ITC sends to domestic producers during its investigations have only asked respondents to indicate their position on the petition since 1985. For cases prior to 1985, only petitioners and producers who indicated support of the petition by letter in the ITC’s public reports or documents have been considered “affected domestic producers.” The ITC considers the most recent answer a company provides as the one that determines eligibility. In its investigations, the ITC sends out both preliminary and final surveys in which producers are asked about support for petitions. Presently, producers have the option of checking one of three boxes: (1) support, (2) take no position, and (3) oppose. According to ITC officials, because the statute requires support, only those firms that check the “support” box are considered eligible. Moreover, ITC’s practice has been to look to the most recent clear expression of a company’s position on the petition to determine its CDSOA eligibility. For example, if a company’s response was “support” on the preliminary survey but “take no position” on the final survey, the ITC interprets “take no position” as non-support, and considers the company ineligible for CDSOA disbursements. The ITC limits its list of potentially eligible producers to those who indicate their support can be made public. The ITC is required by statute to keep company information, including positions on petitions, confidential, unless the company waives its confidentiality rights. CDSOA requires CBP to publish the list of potentially eligible producers; as a result, the list the ITC provides CBP only includes companies who have affirmatively indicated willingness (in the original investigation or after) to have their support be made public. 
Because of CDSOA’s interpretation of the phrase “support of the petition,” the ITC only considers evidence of support during its initial investigation to satisfy CDSOA requirements. Once an investigation is over, a producer that has not communicated its support to the ITC cannot later become eligible for CDSOA disbursements, even if it supports the continuation of an existing order at the time of the 5-year “sunset review.” Several companies have brought legal action challenging agency decisions that rendered them ineligible to receive disbursements, but none of these challenges have been successful. The following examples illustrate challenges to agency decisions: A case was brought by candle companies to compel the payment of CDSOA distributions to them. The companies were not on the ITC’s list of potentially eligible producers and did not file timely certifications with CBP. The companies asserted that the ITC had violated CDSOA by failing to include them on the list of affected domestic producers and that this omission excused their failure to timely file their certifications. A federal appellate court held that the ITC properly excluded the two producers from the list of affected domestic producers because the producers provided support for the AD petition in a response to a confidential questionnaire and failed to waive confidentiality. The court also held that when the ITC properly excludes a producer from the list, the producer still must file a timely certification with CBP to obtain retroactive consideration for CDSOA distributions. As a result, the court found that the firms were not entitled to CDSOA disbursements for the years in question. Another set of candle companies, which had opposed the relevant petition and subsequently acquired companies in support of the same petition, brought a case seeking to obtain CDSOA disbursements on behalf of the acquired companies. An appellate court held that CDSOA bars claims made on behalf of otherwise affected domestic producers who were acquired by a company that opposed the investigation or were acquired by a business related to a company that opposed the investigation. The court also found that the acquired companies are also barred from claiming disbursements for themselves. A seafood producer brought a case seeking an evidentiary hearing and/or inclusion of affidavits in the agency record where the producer was excluded from the list of affected domestic producers because the ITC had no record of the producer’s support for the petition. The producer claimed that it had mailed a questionnaire response indicating support to the ITC on time and wanted to have its affidavits in support of the contention included in the agency’s records. The U.S. Court of International Trade held that because the producer failed to allege the proper reasons for amending the agency record, affidavits concerning the timely mailing of a questionnaire could not be added to the agency record and considered when reviewing the producer’s eligibility for a CDSOA distribution. Two other legal challenges are still pending and involve claims that CDSOA violates the First Amendment of the U.S. Constitution (“free speech”) by conditioning the distribution of benefits on a company’s expression of support for an AD/CV relief petition. The second key CDSOA feature provides for CDSOA funding and a pro rata mechanism for allocating funds among the companies that claim disbursements based on a broad definition of qualifying expenditures. 
Partly as a result of the incentive this creates, company claims approached $2 trillion in fiscal year 2004. Each fiscal year’s duty assessments on all AD/CV duty orders that were in effect for that year fund annual CDSOA disbursements. Each fiscal year, CBP creates a special account that acts as an umbrella over multiple holding accounts used to track collections by specific active AD/CV duty orders and deposits collected duties under an order into its respective account. Within these accounts, CBP indicates that the dollar amounts attributable to each specific case are clearly identifiable. For example, a total of 351 AD/CV duty orders were in effect as of April 15, 2005, covering 124 products from 50 countries. In other words, as of that date, CBP intended to allocate CDSOA disbursements not from “one CDSOA pie” but from “351 CDSOA pies.” Each of these accounts constitutes a separate fund from which CBP makes annual distributions. After the fiscal year closes, CBP distributes the duties collected and interest earned under a given order that year to the affected eligible producers filing timely claims related to the specific order. The agency cannot distribute funds collected from one order to producers that were petitioners under other orders. For example, funds collected from the order on pineapples from Thailand cannot be used to pay producers covered by the frozen fish from Vietnam order. As a result, in fiscal year 2004, the one U.S. producer of pineapples received all the money collected under that order, but CBP did not make CDSOA disbursements to U.S. producers of frozen fish because the agency had not collected any funds under that order. CDSOA’s definition of expenses companies can claim is very broad. The law defines ten categories of qualifying expenditures, such as health benefits and working capital expenses, incurred during the production of the product under the order. According to CBP officials we spoke with, this broad definition means companies can include a wide range of expenses in their certifications. Moreover, CDSOA allows companies to claim any expenses incurred since an order was issued, a period that may span as far back as the early 1970s for some orders. Indeed, 68 of the 351 orders in effect have been in place for 15 years or more. Companies can also make claims under multiple AD/CV orders. For example, in fiscal year 2004, one of the top recipient companies filed claims for different products under 89 AD/CV orders. Finally, the law allows companies to submit claims for qualified expenditures that have not been reimbursed in previous fiscal years. However, CBP implementing regulations require that producers relate claimed expenditures to the production of the product that is covered by the scope of the order or finding. CDSOA uses a pro rata formula to allocate disbursements under a given order among the eligible companies filing claims, with percentages determined according to the claims of qualifying expenditures submitted. If the amount collected under an order is insufficient for all claims to be paid in full, as is often the case, each company receives its pro rata share of the amount collected. This pro rata formula creates an incentive for producers to claim as many expenses as possible relative to other producers so that their share of the funds available under an order is as large as possible. 
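A minimal sketch of the allocation mechanics described above, assuming hypothetical order names, claimants, and dollar amounts (this is not CBP's actual system or data): each order is treated as its own fund, and whatever was collected under that order in the fiscal year is split pro rata by certified qualifying expenditures.

```python
# Illustrative sketch only (hypothetical names and amounts, not CBP data or
# CBP's actual system): each AD/CV duty order is a separate fund, and the
# duties collected under that order in a fiscal year are split pro rata
# among eligible claimants according to their certified qualifying
# expenditures.

def allocate_order_fund(fund_collected: float, claims: dict[str, float]) -> dict[str, float]:
    """Split one order's collected duties in proportion to claimed expenditures."""
    total_claimed = sum(claims.values())
    if total_claimed == 0 or fund_collected == 0:
        return {company: 0.0 for company in claims}
    return {company: fund_collected * claimed / total_claimed
            for company, claimed in claims.items()}

# Funds collected under one order cannot be paid to producers covered only
# by a different order ("351 pies," not one).
orders = {
    # Sole eligible claimant receives everything collected under the order.
    "Pineapples from Thailand": (1_000_000.0, {"Producer A": 40_000_000.0}),
    # Nothing collected under the order, so nothing is distributed.
    "Frozen fish from Vietnam": (0.0, {"Producer B": 5_000_000.0}),
    # Two claimants: shares track relative claimed expenditures.
    "Hypothetical order X": (2_000_000.0, {"Producer C": 30_000_000.0,
                                           "Producer D": 10_000_000.0}),
}

for order, (fund, claims) in orders.items():
    print(order, allocate_order_fund(fund, claims))
```

Because each claimant's share depends on its claimed expenditures relative to all other claims under the same order, listing more qualifying expenses raises a company's share of whatever was collected, which is the incentive discussed next.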
CBP officials cited the increase in claims—from $1.2 trillion in fiscal year 2001 to just under $2 trillion in fiscal year 2004—as an indication of this incentive. The third key feature of CDSOA is that it sets a strict deadline by which CBP must distribute payments for a fiscal year. Most disbursement-related activities cannot begin until the fiscal year ends. As a result, CBP has a significant workload in October and November and cannot perform all the desired quality controls prior to disbursement. CDSOA gives CBP a flexible time frame for processing claims and the CBP has used its discretion to give itself more time. Specifically, the law directs CBP to publish a Federal Register notice of its intent to distribute payments, and the list of affected domestic producers potentially eligible to receive payments under each order, at least 30 days before distributions are made. However, CBP has scheduled the publication, which is the first step in processing claims, at least 90 days before the end of the fiscal year for which distributions are being made. For the fiscal year 2004 disbursements, CBP actually published the notice on June 2, 2004—about 120 days before the end of the fiscal year. CBP requires producer claims/certifications to be submitted within 60 days after this notice is published. The fiscal year 2004 deadline for submitting claims was August 2, 2004. This gave CBP the months of August and September to examine certifications, seek additional information from the producers, send acceptance or rejection letters to producers, and finalize a list of recipients. The law is not flexible in the time frame allowed for processing disbursements for a given fiscal year, specifying that payments must be made within 60 days after the first day of the following fiscal year. Because of the need to calculate funds available based on a completed fiscal year, CBP cannot commence these calculations until the following fiscal year. This tight time frame means that during October and November, CBP must perform the bulk of the tasks associated with calculating the funds available for disbursement under each order and the funds that will be distributed to each recipient company under an order. In discussions with us, CBP officials said CDSOA’s 60-day time frame for disbursing payments was tight, posing the biggest risk associated with running the program. For instance, in fiscal year 2002, the program missed this deadline by about 2 weeks and, in the process, overpaid some producers. Efforts to collect these overpayments have yielded some results but are still continuing. An extension of 30 days in the disbursement deadline would give CBP additional time to undertake desired quality control measures before sending the instructions to Treasury and issuing payments. The present schedule does not allow sufficient time for quality control, forcing CBP to ask companies for repayment if errors are subsequently detected. CBP faces three key problems in implementing CDSOA. First, despite some recent improvements, CBP’s processing of CDSOA claims and disbursements is labor intensive, and the agency is facing a dramatic increase in its 2005 workload. Second, the agency does not systematically verify claims and thus cannot be sure it appropriately distributes disbursements. Third, CBP disbursed only about half the funds that should have been available in fiscal year 2004 because of ongoing problems collecting AD/CV duties. 
Figure 1 depicts how the various units of CBP and Treasury interact when processing claims, verifying claims, and making payments. Following the consolidation of CBP’s CDSOA program within the Revenue Division at Indianapolis in 2004, the division is now fully responsible for processing claims and disbursements. The division issues payment instructions for Treasury’s Financial Management Service, which actually issues CDSOA disbursement checks to U.S. companies. CBP’s Regulatory Audit Division may selectively perform claims verifications upon request of the CDSOA program. In addition to these offices within CBP, the Office of Regulations and Rulings addresses legal matters, the Office of the Chief Counsel addresses litigation, the Office of Information Technology provides necessary reports, and the Office of Field Operations is responsible for liquidations. The CDSOA program’s efforts to process claims and disbursements are cumbersome and likely to become more challenging with impending workload increases. The processing of claims and disbursements requires intensive manual efforts, in part because CBP does not require companies to file claims using a standardized form. Also, existing computer systems do not have the capabilities to produce the data needed to calculate amounts available for distribution. CBP’s guidance for filing claims is not sufficiently specific and causes confusion, requiring extra effort by CBP staff to answer questions from companies. CBP officials are concerned that, despite recent staffing increases, the number and experience level of staff may not be sufficient to handle the dramatic workload increase in fiscal year 2005. Despite being aware of these problems, CBP’s CDSOA program lacks plans for improving its processes, staff, and technology. CDSOA claims processing is cumbersome and labor intensive. Through fiscal year 2004, CBP only received updates to the list of potentially eligible companies from the ITC in hard copy. As a result, CBP had to manually update its electronic database of potentially eligible producers. During the course of our review, ITC officials took the initiative to provide the list to CBP in hard copy and in electronic format to facilitate CBP’s processing of this information. CBP officials noted that getting the file electronically was very helpful. However, because CBP still needed to perform considerable data re-entry to get the list into the format they preferred, ITC and CBP officials told us they are exploring whether to formalize and improve this file exchange in the future. Because CBP does not require companies to submit claims electronically using a standardized form, program staff scan all the documents received for electronic storage and subsequently archive all paper copies of the documents. CDSOA program staff must review each claim to ensure it contains the required information, contact claimants to clarify basic information, and send out letters concerning rejected claims. Staff must manually enter information from accepted claims into a “standalone” database, and perform repeated checks to ensure that they followed the prescribed procedures and that their data entries are valid and accurate. The payments processing component is also labor intensive because existing computer systems do not have the capabilities to provide precise information on the amounts available for disbursement under each order or the amounts to be disbursed to each claimant. 
CBP’s CDSOA program continues to face a risk in this area because its staff must manually perform the calculations and any inaccurate calculations can result in over or underpayments. Multiple data elements are required to determine the amounts available for disbursement, and these come from different computer systems. In some instances, the computer systems produce conflicting information, and program staff must manually reconcile these differences. While internal control procedures are in place to ensure the validity and accuracy of the calculations, the process is nonetheless subject to human error. Program officials told us that the new computer system being implemented agencywide will not have the financial component needed to perform this task for several more years. Claims processing is further complicated because the guidance about how to file CDSOA claims is very general and open to interpretation. As a result, CDSOA program staff field many phone calls from claimants regarding their claims, including clarification questions on how to file claims. Respondents to GAO’s questions generally praised CBP for its handling of these calls. However, a recent CBP verification of a company’s claims raised various claims-related questions. For example, CDSOA provides that companies can receive disbursements for qualifying expenditures not previously reimbursed, but officials involved in the verification said it was not clear whether companies must subtract all past disbursements when making claims under multiple orders, or only those disbursements related to a particular order. Also, one CDSOA recipient company reported that, because of uncertainty about whether cumulative expenses could be claimed, it claimed only 1 year’s expenses. As a result, it received a much smaller share of disbursements than it otherwise could have. Although the number of staff assigned to process claims and payments has grown, program officials noted that this increase may not be sufficient to handle the dramatic workload increase expected in fiscal year 2005. Specifically, the number of eligible claimants has grown by 500 percent between fiscal years 2004 and 2005, and the number of claims might increase more than 10-fold, from 1,960 to over 29,000. This growth is largely due to AD duty orders on certain warm-water shrimp or prawns recently coming into effect. Table 1 shows the number of program staff for fiscal years 2003-2005 and the program’s responsibilities and workload during those years. Program officials are concerned about fiscal year 2005 processing activities because only about half of the staff has processed claims and payments before. The rest are new and not experienced with the procedures. Moreover, if the workload becomes unmanageable, CBP may be unable to quickly bring new staff on board and up to speed. This is because new employees must undergo a 4 to 6 month background check and initial training of entry-level college graduates takes 3 to 4 months. New staff attains full proficiency only after they complete a full annual cycle of processing claims and payments. Despite these challenges, the CDSOA program does not have formal plans for improving its processes, technology, and staff. In our efforts to help improve the federal government’s performance, we regularly emphasize that processes, staff, and technology are vital to agency performance and that planning is central to managing and improving these three organizational components. 
For instance, our work on human capital issues throughout the government has revealed the importance of having a human capital plan in place to address problems, such as those faced by the CDSOA program, and ensure that staff with the right skills and abilities is available continuously and can meet changing organizational needs. Claims verification poses another implementation problem for CBP. Companies are not held accountable for the claims they file because CBP does not require them to provide any supporting documentation for their claims and does not systematically verify company claims. The only comprehensive verification conducted to date found significant issues. Although CBP has put in place procedures for verifying CDSOA claims, it does not plan to implement them on a systematic or routine basis. Program officials told us they basically accept the information in company claims and rely on complaints from competitors to initiate verifications. In reviewing certain claims and CBP’s procedures, we found that claims are generally not questioned even though top CDSOA recipient companies have claimed over $2 trillion since fiscal year 2001 (see app.II). CBP normally does not take steps to determine that companies are still in business and producing the item covered by the order under which they are making a claim. Neither CDSOA nor CBP require companies to explain their claims, provide supporting documentation about their claims, or follow a format when listing their qualifying expenditures. For example, in reviewing the 2004 claims filed by top CDSOA recipients, we found that most companies did not provide any details about their claimed expenditures. Indeed, one company listed all of its claimed expenditures under the category of raw materials. CDSOA and CBP do not require that companies have their claims reviewed by a certified public accountant or a party outside of the company. CBP has only verified the claims of a handful of claimants. One of these verifications was comprehensive and revealed significant problems. In the first 3 years of the CDSOA program, staff in CBP’s Office of Regulations and Rulings conducted four, 1-day site visit verifications that revealed no substantive issues. Subsequently, CBP’s Regulatory Audit Division decided to conduct a fifth verification using the detailed verification procedures the division developed in mid-2004. This verification, which took about a year and was completed in June 2005, revealed significant problems, including substantial overstatement of claimed expenses. According to CBP, the primary cause of the CDSOA expenditure overstatement was the company’s failure to maintain an internal control system to prepare and support its CDSOA claims. This prevented the company from identifying the non-qualifying products and costs associated with them. As a result, the company included expenditures incurred in the production of products not covered by the scope of the AD/CV orders. The company acknowledged that it had wrongly claimed expenditures and subsequently took corrective action. CBP does not plan to change its present reactive approach or to systematically target more companies for verifications. Although the law does not require verification of claims, CBP has recognized over time the need for them but has always stopped short of implementing a systematic verification plan. 
In the third year of CDSOA implementation, a CBP working group under the direction of the Deputy Commissioner’s office developed a statement of work to, among other things, verify claims according to a risk-based plan. However, CBP does not have any evidence that this plan was ever developed or implemented. Despite having new claim verification procedures in place and having performed an in-depth verification as a prototype review to determine the extent of work involved in the verification, Regulatory Audit Division officials told us they do not plan to verify claims systematically or on a routine basis. Instead, CBP will continue to rely on complaints from competitors to select companies for verification. According to CBP officials, this approach is logical because the pro rata formula for allocating disbursements among firms creates an incentive for other companies to police their competitors. Although CBP has an agencywide risk-based plan for targeting companies for audits, this plan does not target the CDSOA program’s recipients because the agency does not consider the program a high risk to revenue or a high priority for policy reasons. CBP’s current position is at odds with its own Inspector General’s (IG) position and our work on financial management, which highlights the importance of verifying claims. In its audit of the CDSOA program, Treasury’s IG emphasized the need for more robust claim verification. In the report, the IG questioned why CBP was not reviewing CDSOA claims on an annual basis, and particularly the expenditures claimed. The IG went on to note that certifications are legally subject to verification and that these certifications would serve as a deterrent against the submission of deceptive claims. Moreover, it emphasized that untimely verifications could result in the loss of revenue for other deserving companies if, in fact, deception was later discovered. Our overall work on claims and disbursements throughout the government shows that the systematic verification of claims before they are processed (or after they are paid) is key to ensuring the validity of transactions and to avoid disbursement problems such as improper payments. This work also reveals the importance of internal controls, such as verification, to ensure that only valid transactions are initiated in accordance with management decisions and directives. Collecting AD/CV duties has been another problem for CBP, compromising the effectiveness of AD/CV trade remedies generally and limiting funding available for distribution under CDSOA. CBP reported that the problem has grown dramatically in the last couple of years. For example, it distributed about half of the money that should have been available under CDSOA in fiscal year 2004. CBP’s efforts to date to address the causes of its collections problems have not been successful, leading CBP to pledge further steps in a July 2005 report to Congress. Customs collections problems have been evident since mid-2003 and have two distinct components. Specifically, the 2003 report on CDSOA by Treasury’s IG highlighted CBP’s collections problems, raising particular concerns about the following two AD/CV collection issues: Unliquidated entries make the eventual collection of duties owed less certain. Liquidation is the final determination of duties owed on an import entry. Liquidation of import entries subject to AD/CV duties only occurs after Commerce issues a final order, determines final dumping margins or final net countervailable subsidies (i.e. 
duty), and issues liquidation instructions to CBP. Upon receipt of liquidation instructions, CBP calculates and seeks to collect the actual duties owed. In some cases, such as softwood lumber, liquidation is being suspended due to ongoing litigation. While neither Commerce nor CBP can hasten collection of duties tied up in litigation, Treasury's IG report found that, in some cases, CBP was not collecting duties because Commerce had failed to issue proper liquidation instructions to CBP. In other cases, the report said CBP had overlooked Commerce liquidation instructions. The report said clearing up the liquidation backlog should be given a high priority given the substantial dollars involved—about $2 billion in 2003. Clearing the backlog is also urgent because discrepancies between unliquidated duties and final duties often mean that CBP must attempt to collect additional sums from importers that did not expect to pay more, or that went out of business. Open (unpaid or uncollected) duty bills are liquidated entries for which final bills have been issued but not paid. The Treasury's IG report expressed concern that CBP had not collected $97 million in duties owed and said that the agency might not be able to recover some of these funds. Treasury's IG said its discussion with CBP personnel suggested recovery could be difficult because (1) port personnel are accepting bonds that are not sufficient to cover the duties owed plus interest when the entry is liquidated, and (2) the length of time between entry and liquidation is often several years, and in that time, some importers go out of business, leaving CBP with no way to go back for collection of additional duties. In response, CBP and Commerce took steps to identify and address the causes of CBP's collections problems. CBP attributes the uncollected duties problem largely to "new shippers" with little import history, a problem that is particularly prevalent in the agriculture and aquaculture industries. According to CBP, one of these new shippers accounted for $130 million in uncollected duties in fiscal year 2004. To address this problem, in 2004, Commerce changed its new shipper review process and listed several steps it has taken to strengthen it. These included steps such as making the bondholder liable for duties owed on each import entry, and formalizing a checklist to ensure the legitimacy of new shippers and their sales. Subsequently in 2004, CBP announced an amended directive to help ensure that duties on agriculture and aquaculture imports were collected properly by reviewing and applying a new formula for bonds on these imports, effectively increasing these bonds by setting them at higher rates. Nevertheless, since the problem and its basic reasons became known in 2003, the size of CBP's collections problem has more than doubled. As figure 2 shows, according to CBP data, $4.2 billion in AD/CV duties remained unliquidated and $260 million in AD/CV duties were unpaid at the end of fiscal year 2004. According to CBP, a large amount of the unliquidated entries involves duties on softwood lumber from Canada (about $3.7 billion). In February 2005, CBP reported to Congress that it had developed a plan to isolate suspended entries that were beyond the normal time frames of an AD/CV case and then worked with Commerce to obtain liquidation instructions, reducing the inventory of one million suspended entries by 80,000.
However, many unliquidated entries remain, and some are still due to problems within CBP's and Commerce's control. CBP estimates that over 90 percent of all unliquidated AD/CV entries are awaiting Commerce instructions for liquidation. Regarding unpaid duties, a large percentage pertains to imports from China. Specifically, nearly two-thirds of these unpaid duties (about $170 million) relate to an AD order on crawfish tail meat from China. The second largest amount (about $25 million) relates to an AD order on fresh garlic from China. CBP's continued collections problems have led to calls for more drastic measures. Several industry groups, including representatives of the garlic, honey, mushroom, and crawfish industries, have advocated for elimination of the new shipper bonding rules in favor of cash deposits on entries for new AD orders. Most crawfish and some steel recipients responding to our questionnaire also raised concerns about CBP's collection efforts and the quality of communication about ongoing problems. As a result, CBP is pursuing additional measures. In a February 2005 report to Congress, CBP said it is working with Treasury to address financial risks associated with bond holders' insolvency and monitoring agriculture/aquaculture importers' compliance with its new bonding requirements on a weekly basis. In its July 2005 report to Congress, CBP highlights that it has begun working with other U.S. agencies to develop legislative proposals and other solutions to better address AD/CV duty collection problems. CBP notes that it plans to forward the results of this interagency effort to Congress by December 2005. Meanwhile, Congress is considering legislation that would change new shipper privileges. Most CDSOA payments went to a small number of U.S. producers and industries, with mixed effects reported. Top recipient companies reported that the payments had positive overall effects, although their assessments of the extent of the benefits varied. Leading recipient companies within the seven industries we examined also reported varying positive effects. In four of these industries—bearings, candles, crawfish, and pasta—recipients we contacted reported benefits, but some non-recipients said that CDSOA payments were having adverse effects on their ability to compete in the U.S. market. Although some have argued that CDSOA has caused increases in the number of AD/CV petitions filed and in the scope and duration of AD/CV duty orders, the evidence to date is inconclusive. From fiscal year 2001 to fiscal year 2004, CBP distributed approximately $1 billion in CDSOA payments to 770 companies from a broad range of industries. These payments have been highly concentrated in a few companies. Figure 3 shows the share of payments going to the top five companies and the share received by the remaining CDSOA recipients. One company, Timken, a bearings producer, received about 20 percent of total distributions, approximately $205 million, during fiscal years 2001-2004. Five companies, including Timken, received nearly half of the total payments, or about $486 million. Figure 4 shows the distribution of payments to the top 39 recipient companies that have received 80 percent of total CDSOA disbursements. These top recipient companies included several producers of steel, candles, and pasta. They also included producers of cement, chemicals, cookware, pencils, pineapples, and textiles.
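As a rough arithmetic cross-check of the concentration figures above, using the approximately $1 billion in total disbursements for fiscal years 2001-2004 cited earlier (the percentages are rounded, not recalculated from CBP data):

```latex
\[
\frac{\$205\ \text{million (Timken)}}{\$1{,}000\ \text{million}} \approx 20\%,
\qquad
\frac{\$486\ \text{million (top five companies)}}{\$1{,}000\ \text{million}} \approx 49\%,
\]
```

consistent with the statement that the top five recipients received nearly half of all payments.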
For most of the top recipient companies responding to our questionnaire, the ratio of CDSOA payments to sales was less than 3 percent. Specifically, the ratio of payments to sales ranged from less than 1 percent to over 30 percent. The ratio was generally the smallest for steel companies and the largest for candle companies. When CDSOA distributions are analyzed by industry, or product group, the payments are similarly concentrated among only a few industries or product groups. For example, approximately two-thirds of total CDSOA distributions went to three product groups—bearings, candles, and iron and steel mills—which received approximately 40 percent, 14 percent, and 12 percent, respectively. Also, 95 percent of total payments went to 24 out of the 77 product groups. Figure 5 shows the leading industries or product groups that received CDSOA distributions. As detailed in appendix II, the 24 companies that responded to our survey of top CDSOA recipients indicated that the CDSOA disbursements had positive effects, but the extent of benefit varied from slight to substantial. We asked these companies to assess CDSOA's effects at both the industry and company level on a number of different dimensions, including prices, investment, employment, and ability to compete. The top recipients reported that CDSOA had the most positive impact in areas such as net income and employment. For example, one company commented that CDSOA payments have allowed for substantial investments in its factory and workers, providing, among other things, supplemental health care benefits. Another company reported that CDSOA payments have been helpful in justifying continued investment during periods when prices are depressed due to dumping or subsidization. The top recipients reported that CDSOA had less of an effect in other areas such as prices and market share. For example, a company commented that disbursements have had little or no effect on prices for its CDSOA product because such prices are ultimately determined by market forces. As detailed in appendix III, in our examination of seven industries that received CDSOA payments—bearings, steel, candles, pasta, dynamic random access memory (DRAM) semiconductors, crawfish, and softwood lumber—leading recipients we contacted generally reported benefits to varying degrees, and the non-recipients we contacted either complained about being disadvantaged or did not report effects. In four industries—bearings, candles, crawfish, and pasta—recipients generally reported benefits, but some non-recipients complained that the disbursements were having negative effects on them. These industries all involve cases that predate CDSOA. In general, the non-recipients that complained of negative effects are ineligible for disbursements, and several complained about their ineligibility. Bearings. The leading domestic producer of bearings is eligible for CDSOA disbursements, but several large foreign-owned companies with longstanding production in the United States are its major competitors and are ineligible. Three bearings recipient companies commented that CDSOA has had positive effects, although they varied in their assessments of the extent of the benefit. One company stated that the disbursements helped it to replace equipment and enabled it to recover to the position it had held prior to being injured by dumping.
Another recipient commented that, while the CDSOA disbursements were helpful, they were distributed several years after the initial injury and did not fully compensate the company for lost profits due to unfair trade. Two non-recipients provided views. One non-recipient commented that CDSOA harms global bearings companies because the antidumping duties they pay are transferred directly to a competitor. It further commented that not only is it forced to subsidize competitors through CDSOA, but the money it is paying in duties limits its ability to invest in and expand its U.S. operations. The other said it is too early to know what injurious effect CDSOA disbursements would have on non-recipients. Steel. In this industry, the largest U.S. producers are CDSOA recipients. Recipient companies reported that payments—though small relative to company size and the challenges they face in their capital-intensive industry—had positive effects. Steel accounts for the single largest industry share of outstanding dumping orders, and most major U.S. producers receive CDSOA payments under numerous AD/CV orders on different products. Steel recipients we contacted varied in their assessments of CDSOA's effects, but generally agreed that the program benefited them by providing greater opportunities for making needed capital investments in their plant and equipment. Steel recipients also commented, though, that CDSOA has not been a complete solution to the serious problems they faced. When the Asian financial crisis spawned rising imports, falling steel prices, and consolidation of firms, the receipt of CDSOA disbursements did not prevent several steel producers from joining numerous others in the industry in filing for bankruptcy. Candles. Ten of the estimated 400 U.S. candle companies are eligible and receive CDSOA disbursements. A number of recipients contended that distributions have helped keep them in business, enabling them to develop newer, better, and safer candles through investment in equipment and research and development. One recipient stated that it has been able to offer employees more consistent and comprehensive benefits packages due to CDSOA. Several large candle producers that are comparable in size to leading recipients complained that they are in favor of the order but are ineligible to receive CDSOA disbursements. Some non-recipients argue that recipients have an unfair advantage in their ability to keep prices lower than they otherwise would. For instance, a major non-recipient company has closed two of its four domestic manufacturing facilities and has reduced shifts at others. A smaller non-recipient company contended that when it matched its competitors' lower prices, it was not able to make a profit. As a result, the company stated that it was forced to exit this segment of the candle business and release some workers. Crawfish. About 30 small, family-owned crawfish processors have received CDSOA disbursements. Recipients said CDSOA payments provided the industry with its first effective relief against dumped imports in several years and enabled them to buy and process more crawfish, make long-needed repairs and investments, hire more employees, and pay off debts. In June 2003, the ITC reported that CDSOA disbursements to some domestic producers had converted an industrywide net loss into net income.
The 16 crawfish tail meat processors we spoke with who received CDSOA distributions generally believe that the program has had positive effects on the industry and their companies, keeping businesses open and employees working. Non-recipients we spoke with in this industry said that CDSOA had helped recipient companies but had put non-recipients at a competitive disadvantage. These companies want to be eligible for CDSOA disbursements, and several reported they had contacted certain government and congressional sources to try to address their eligibility status, but were told they did not meet the law's eligibility requirements regarding the expression of support during the investigation. As discussed previously, two of these companies brought legal action to challenge agency decisions on their eligibility status. Because they also have to compete against cheap Chinese imports, these non-recipients viewed the application of the law as unfair. In addition, several said they were not able to compete with recipient companies that offer processed tail meat at prices below their cost of production and appear to be able to do so because the recipients' CDSOA disbursements will compensate them for any losses. In such conditions, some non-recipients said they cannot operate profitably, and some decided to stop processing tail meat. Pasta. Three of the four leading U.S. pasta makers received CDSOA disbursements, but the fourth producer is ineligible. The top two CDSOA recipients in this industry did not respond to our questions, and one of them has filed for bankruptcy. The four CDSOA recipients that responded said they had used the funds to increase or upgrade equipment, invest in research and product development, defray manufacturing costs, and expand production capacity. Nevertheless, CDSOA payments, while not insignificant, were not large relative to sales or enough to offset other problems that the industry faces, such as decreased demand for pasta due to low-carbohydrate diets and low margins. Most non-recipients we contacted said CDSOA had no effect, but a few non-recipients said that the funds had created an uneven playing field and decreased their ability to compete in the marketplace. Several of these companies tried to file for CDSOA funds but were found ineligible. One large non-recipient company said the money it pays in duties transferred to its competitors could have been used for product development, capital investment, and expansion of its new U.S. operations. DRAMs. All four major DRAM producers in the United States currently have production facilities both domestically and abroad; however, three of these companies are U.S. subsidiaries of foreign producers and have entered the market within the last decade. A CV order is in effect for DRAMs produced by one Korean company only, but the bulk of the distributions were made under an AD order on DRAMs of one megabit and above from Korea issued in 1993 and revoked in 2000, as well as under an AD order on SRAMs (static random access memory chips) issued in 1998 and revoked in 2002. A leading CDSOA recipient was the sole recipient of duties on these revoked orders. Fabrication facilities are costly and require complete replacement every few years. The DRAM industry is cyclical in nature and subject to "booms and busts," where demand is driven by investments in computers and other end products. Both CDSOA recipients reported some net losses.
One company reported benefits from receiving payments and another reported fewer effects; in both cases, the payments were small relative to net sales. Softwood Lumber. Both CDSOA recipients and non-recipients include leading softwood lumber producers. Recipients and non-recipients that we contacted indicated that disbursements to date have been too small to have a discernible effect. However, non-recipients expressed concern about potential adverse effects in the future, should the $3.7 billion in AD/CV duties being held on deposit pending liquidation ever be distributed. These duties are presently in escrow pending the outcome of litigation by Canadian interests against the U.S. duties. Current evidence does not clearly demonstrate that CDSOA is linked to an increasing number of AD/CV petition filings. Critics have raised concerns that, by awarding a portion of the tariff revenue that results from successful petitions, CDSOA could potentially lead to more AD/CV petition filings and thereby more restrictions on imports, to the detriment of the U.S. economy. However, the evidence we analyzed was inconclusive. Because CDSOA provides direct financial benefits to firms participating in or supporting AD/CV petitions by awarding them a proportion of the tariff revenue, some analysts have warned that CDSOA could lead to more petitions and to more companies supporting the filings, because only companies that supported the petition would receive disbursements. A report by the Congressional Budget Office (CBO) supports this view, arguing on economic incentive grounds that CDSOA encourages more firms to file or support petitions and discourages settling cases. CBO also argues that firms may resume production or increase their output due to CDSOA, which would result in inefficient use of resources and would be harmful to the U.S. economy and consumers. Our examination of the actual number of filings shows that there is no clear trend of increased AD/CV petition filings since CDSOA. Figure 6 shows that since the passage of CDSOA in 2000, the number of petitions spiked in 2001 and then sharply declined over the next three years. Moreover, this fits the historical pattern of AD/CV petition filings, which also shows no clear upward trend. The number of AD/CV petitions filed each year has fluctuated widely, ranging from a maximum of 120 in 1985 to a minimum of 16 in 1995. Economists have found evidence that the number of antidumping filings is closely linked to macroeconomic conditions and real exchange rates. Our analysis of company responses to our case study questions similarly reveals mixed evidence but no trend. In general, companies told us CDSOA had little impact on their decisions about whether to file AD/CV relief petitions. Most companies that responded to our questions said that filing and winning new cases was too expensive, and the receipt of CDSOA payments was too speculative, for CDSOA to be a major factor in their filing decisions. For example, producers accounting for a sizeable share of U.S. softwood lumber production freely chose not to support the case, despite being aware of the prospect of sizeable CDSOA disbursements. However, bearings companies that had not supported earlier cases subsequently supported a later case on China brought after CDSOA's passage.
In addition to the number of filings, our interviews and responses from companies in the seven industries we examined revealed a few allegations that CDSOA resulted in orders that cover imports of more products for longer periods—that is, through wider-than-necessary product scopes of AD/CV duty orders and longer-than-warranted retention of existing orders. However, other examples contradicted these allegations, and we could not independently verify them. One steel user, for example, complained that CDSOA disbursements were a factor in the denial of its request for narrowing the scope of an order and claimed the result has been to put certain U.S. fastener makers at a disadvantage. In contrast, one steel company noted that the domestic industry has no incentive to overly broaden the scope of an AD/CV relief petition because doing so could undermine its ability to prove injury and to obtain an order in the first place. Bearings recipient companies similarly responded that CDSOA has not affected the scope or duration of AD/CV duty orders and said regular "sunset" reviews should ensure the government terminates unwarranted orders. Bearings non-recipients, on the other hand, drew a connection between the main CDSOA beneficiary within the industry and its support for continuance of orders. In the candle industry, companies universally reported that they are united in supporting retention of the existing order, but divided over efforts by some candle firms to expand its scope. After finding the CDSOA inconsistent with WTO agreements and after the United States' failure to bring the act into compliance with the agreements, the WTO in 2004 gave 8 of the 11 members that complained about CDSOA authorization to suspend concessions or other WTO obligations owed to the United States. Canada, the European Union (EU), Mexico, and Japan have consequently applied additional tariffs to imports from the United States, and others are authorized to follow. In 2003, the WTO found the CDSOA inconsistent with U.S. obligations under WTO agreements and asked the United States to bring the act into conformity with those agreements. Eleven members had brought complaints about the CDSOA to the WTO and prevailed in their claims that the CDSOA is inconsistent with WTO agreements. The WTO found that CDSOA was not consistent with U.S. WTO obligations because it was not one of the specific actions against dumping and subsidization permitted under the applicable WTO agreements. Following the ruling, the United States indicated its intention to comply. The WTO gave the United States until December 27, 2003, to bring the CDSOA into conformity with the organization's pertinent agreements. However, all efforts to repeal the law have thus far been unsuccessful. Meanwhile, the United States is also pursuing negotiations at the WTO to address the right of WTO members to distribute AD/CV duties. The President proposed repealing CDSOA in his fiscal year 2004, 2005, and 2006 budget submissions. Senate Bill 1299 was introduced in Congress in 2003 to amend the CDSOA, and House Bill 3933 was introduced in 2004 to repeal the CDSOA. Neither of these efforts succeeded during that Congress, and the bills thus expired. In a March 10, 2005, status report to the WTO, the United States reaffirmed its commitment to bringing the CDSOA into conformity with WTO agreements. The United States also reported that House Bill 1121 had been introduced on March 3, 2005, to repeal CDSOA and that it had been referred to the Committee on Ways and Means.
Also in 2005, Senator Grassley introduced Amendment 1680 to the Departments of Commerce and Justice, Science and Related Agencies Appropriations bill to prohibit any further CDSOA distributions until the USTR determines that such distributions are not inconsistent with U.S. WTO obligations. However, as of the date of publication of this report, Congress has not passed House Bill 1121 and the Senate Committee on Appropriations has not adopted Amendment 1680. Since late 2001, the United States has been engaged in WTO negotiations at the Doha Round, which may include changes to the WTO agreements under which CDSOA was challenged. Following a congressional mandate to the USTR and Commerce that negotiations shall be conducted within the WTO to recognize the right of its members to distribute monies collected from antidumping and countervailing duties, the United States submitted a paper to the WTO Rules Negotiating Group stating that "the right of WTO Members to distribute monies collected from antidumping and countervailing duties" should be an issue to be discussed by the negotiating group. USTR officials told us that, to date, the U.S. proposal has not attracted support from any other WTO member. In January 2004, 8 of the 11 complainants—Brazil, Canada, Chile, the EU, India, Japan, Korea, and Mexico—sought and secured authorization to retaliate against the United States. As a result of binding arbitration regarding the level of authorized retaliation, each of the eight members received authorization to impose, each year, additional import duties on U.S. exports covering a total value of trade of up to 72 percent of the CDSOA disbursements made for the preceding year relating to AD/CV duties collected on that member's products. The total suspension authorized for 2005 could be up to $134 million, based on the fiscal year 2004 CDSOA disbursements. Specifically, for fiscal year 2004 disbursements, the WTO arbitrators authorized the imposition of additional duties covering a total value of trade not exceeding $0.3 million for Brazil, $11.2 million for Canada, $0.6 million for Chile, $27.8 million for the EU, $1.4 million for India, $52.1 million for Japan, $20.0 million for Korea, and $20.9 million for Mexico. On May 1, 2005, Canada and the European Communities began imposing additional duties on various U.S. exports. In particular, Canada has imposed a 15 percent tariff on live swine, cigarettes, oysters, and certain specialty fish (including live ornamental fish and certain frozen fish), and the EU has imposed a 15 percent tariff on various paper products, various types of trousers and shorts, sweet corn, metal frames, and crane lorries. On August 18, 2005, Mexico began imposing additional duties on U.S. exports such as chewing gum, wines, and milk-based products. On September 1, 2005, Japan began imposing additional duties on U.S. exports such as steel products and bearings. The remaining four members authorized to retaliate (Brazil, Chile, India, and Korea) say they might suspend concessions. The three members that did not request authorization to retaliate—Australia, Indonesia, and Thailand—have agreed to extend the deadline for requesting authorization indefinitely. As agreed, the countries will give the United States advance notice before seeking authorization to retaliate. In return, the countries retain the ability to request authorization to retaliate at any point in the future, and the United States agreed not to seek to block those requests. See figure 7 for a timeline of events related to the WTO decision on CDSOA.
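As an arithmetic cross-check of the arbitration figures above, the member-specific caps for 2005 sum to roughly the $134 million total cited, and under the 72 percent formula each cap implies the approximate fiscal year 2004 disbursements attributable to duties on that member's products (Japan is shown as an illustration; the implied figure is a derivation from the formula, not a number reported here):

```latex
\[
0.3 + 11.2 + 0.6 + 27.8 + 1.4 + 52.1 + 20.0 + 20.9 = 134.3 \approx \$134\ \text{million}
\]
\[
\text{cap}_{m} = 0.72 \times D_{m}
\quad\Rightarrow\quad
D_{\text{Japan}} \approx \frac{\$52.1\ \text{million}}{0.72} \approx \$72\ \text{million},
\]
```

where the symbol D_m stands for the fiscal year 2004 CDSOA disbursements relating to AD/CV duties collected on member m's products.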
Congress’ stated purposes in enacting CDSOA were to strengthen the remedial nature of U.S. trade laws, restore conditions of fair trade, and assist domestic producers. Our review suggests that the implementation of CDSOA is achieving some objectives more effectively than others. One reason is that, as a result of some of the key features of CDSOA, the law in practice operates differently from trade remedies. For instance, while trade remedies such as AD/CV duties generally provide relief to all producers in a particular market, the eligibility requirements of CDSOA limit relief to only a subset of domestic producers—only those that petitioned for relief or that publicly supported the petition by sending a letter to the ITC or filling out an ITC questionnaire while the agency was conducting its original investigation, and that remain in operation. Our analysis of CDSOA disbursement data and company views on the effects of CDSOA indicate that CDSOA has provided significant financial benefits to certain U.S. producers but little or no benefit to others. As a result, CDSOA has, in some cases, created advantages for those U.S. producers that are eligible and receive the bulk of disbursements over those U.S. producers that receive little relief or are ineligible, by choice or circumstance. Moreover, because the WTO found that CDSOA did not comply with WTO agreements, the EU, Canada, Mexico, and Japan recently retaliated against U.S. exports, and this imposes costs on a number of U.S. companies exporting to those markets.

In implementing CDSOA, CBP faces problems processing CDSOA claims and payments, verifying these claims, and collecting AD/CV duties. The CDSOA program’s time frame for processing payments is already too tight to perform desired quality controls. The dramatic growth in the program’s workload—an estimated 10-fold increase in the number of claims in fiscal year 2005 and the potential disbursement of billions of dollars from softwood lumber duties—heightens program risks. CBP’s labor-intensive process for claims could be streamlined through steps such as regularly obtaining from the ITC electronic updates of the list of potentially eligible companies and having companies file CDSOA claims using a standard form and submit them electronically. CBP’s recent comprehensive company claim verification effort also indicates that the agency needs to put additional guidance in place for filing claims. In addition, CBP lacks plans for managing and improving its CDSOA program’s processes, staff, and technology. For instance, it needs a human capital plan for enhancing its staff in the face of dramatic growth in its workload for processing both CDSOA claims and payments. Accountability for the accuracy of the claims is virtually non-existent, and CBP has no plans to verify claims systematically or on a routine basis. Finally, CDSOA has helped highlight CBP’s collection problems. Despite reports to Congress on its efforts to address these problems, CBP faced a doubling in the AD/CV collections shortfall in fiscal year 2004, to $260 million. This shortfall not only reduces the amount available for disbursement under CDSOA, but also undermines the effectiveness of the trade remedies generally.

Given the results of our review, as Congress carries out its CDSOA oversight functions and considers related legislative proposals, it should consider whether CDSOA is achieving the goals of strengthening the remedial nature of U.S. trade laws, restoring conditions of fair trade, and assisting domestic producers.
If Congress decides to retain and modify CDSOA, it should also consider extending CBP’s 60-day deadline for completing the disbursement of CDSOA funds. Meeting this deadline has been a problem in the past, and it may be even more difficult in the future given that the program is experiencing a dramatic growth in its workload. For instance, extending the deadline by another 30 days would give the program’s staff additional time to process payments and to pursue additional internal control activities.

To the extent that Congress chooses to continue implementing CDSOA, we recommend that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection to enhance the processing of CDSOA claims and payments, the verification of these claims, and the collection of AD/CV duties. Specifically, we recommend the following:

To improve the processing of CDSOA claims, CBP should implement labor-saving steps such as working with the ITC to formalize and standardize exchanges of electronic updates of the list of eligible producers, and requiring that company claims follow a standard form and be submitted electronically. This would also reduce data entry-related errors.

To further improve the processing of claims, CBP should provide additional guidance for preparing CDSOA certifications or claims.

To enhance the processing of claims and payments in the face of a growing workload, CBP should develop and implement plans for managing and improving its CDSOA program processes, staff, and technology. For instance, a human capital plan would help ensure that the CDSOA program has staff in place with the appropriate competencies, skills, and abilities.

To enhance accountability for claims, CBP should implement a plan for systematically verifying CDSOA claims. This plan should aim to ensure that companies receiving CDSOA disbursements are accountable for the claims they make. CBP should also consider asking companies to justify their claims by providing additional information, such as an explanation of the basis for the claim, supporting financial information, and an independent assessment of the claim’s validity and accuracy.

To better address antidumping and countervailing duty collection problems, CBP should report to Congress on what factors have contributed to the collection problems, the status and impact of efforts to date to address these problems, and how CBP, in conjunction with other agencies, proposes to improve the collection of antidumping and countervailing duties.

We provided a draft of this report to the U.S. International Trade Commission, Customs and Border Protection, and the Office of the U.S. Trade Representative. We obtained written comments from CBP (see app. IV). CBP concurred with our recommendations. We also received technical comments on this draft from our liaisons at CBP, the ITC, and USTR, which we have incorporated where appropriate.

We are sending copies of this report to interested congressional committees, the U.S. International Trade Commission, Customs and Border Protection, and the Office of the U.S. Trade Representative. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V.

At the request of the Chairman of the House Subcommittee on Trade, Committee on Ways and Means, as well as several House Members, we examined the implementation and effects of the Continued Dumping and Subsidy Offset Act (CDSOA) of 2000. Specifically, we assessed (1) what key legal requirements guide and have affected agency implementation of CDSOA; (2) what problems, if any, U.S. agencies have faced in implementing CDSOA; and (3) which U.S. companies and industries have received payments under CDSOA and what effects these payments have had for recipient and non-recipient companies; and described (4) the status of the World Trade Organization (WTO) decisions on CDSOA.

To determine the key legal requirements that guide and have affected agency implementation of CDSOA, we obtained and reviewed legislation and regulations establishing the requirements and procedures for the International Trade Commission (ITC) to determine company eligibility to receive CDSOA funds and for the Department of Homeland Security’s Customs and Border Protection (CBP) to implement CDSOA. We discussed these requirements and their relationship to agency implementation with the ITC and CBP officials who carry out the agencies’ respective roles. We also reviewed judicial opinions and other documents associated with certain legal cases that have been brought to challenge key requirements of CDSOA, and incorporated the viewpoints expressed by some companies that we contacted in addressing our third objective, which illustrated the impacts of certain requirements.

To assess the problems, if any, U.S. agencies have faced in implementing CDSOA, we first determined the agency roles and responsibilities that CDSOA established. We then obtained and analyzed ITC and CBP documents outlining their procedures for carrying out their CDSOA responsibilities and discussed with agency officials the actions the agencies have taken to implement CDSOA. We reviewed evaluations of CDSOA operations at both the Department of the Treasury (Treasury) and the ITC conducted by those agencies’ Inspectors General (IG). We also obtained from CBP a statement of work that had been developed for improving CBP’s management of the CDSOA program. We discussed agency implementation of CDSOA with officials from the Departments of Commerce and Agriculture, as well as certain industry representatives, affected companies, and law firms that handle CDSOA-related actions for their clients. We also reviewed GAO documents on human capital and disbursements for additional criteria to assess the agencies’ implementation of CDSOA.

Our work focused on certain problems at CBP: To assess CBP’s claims and payments processing procedures, we conducted field work at CBP’s Revenue Division in Indianapolis, where we met with officials and staff of the CDSOA Team. After they gave us a comprehensive briefing on their CDSOA operations, we observed these operations, reviewed documentation of their procedures, and discussed challenges they face in implementing the law. We discussed changes in the CDSOA Team’s workload over time with these officials and obtained data on their workload and staff resources. We discussed the team’s procedures for counting and recording eligible and actual claimants and claims, which included information they obtain from the ITC on eligible producers and internal controls the team applies to ensure accuracy in receiving and processing claims.
We determined that their data were sufficiently reliable for the purpose of analyzing the changing relationship between the team’s workload and staff resources.

To assess CBP’s approach to verifying claims, we discussed the approach and the extent of claim verification since the program’s inception with CBP officials, and we reviewed CBP procedures for verifying company claims that were developed in 2004. We also reviewed documentation of a comprehensive verification of one company’s CDSOA claims that was conducted using these new procedures. Because this verification raised issues about the quality and consistency of CBP’s guidance regarding claims submission, we examined the fiscal year 2004 claim files for 32 top CDSOA recipients to ascertain the prevalence of these issues and also obtained the viewpoints of certain CDSOA recipients on CBP’s claims guidance.

To describe CBP’s efforts to collect the antidumping (AD) and countervailing (CV) duties that fund CDSOA, we obtained and reviewed data on CBP’s annual CDSOA disbursements and AD/CV duty liquidations and collections. To assess the reliability of the data on unliquidated AD/CV duties, we compared them to data used by Treasury’s IG in its 2003 report and performed basic reasonableness checks. We determined the data were sufficiently reliable to support the finding that there had been a substantial increase in unliquidated AD/CV duties since 2002. We also reviewed CBP reports to Congress in 2004 and 2005 that reported on AD/CV duty collection issues and problems, as well as the section of the 2003 Treasury IG report that addressed CBP’s efforts related to liquidating and collecting AD/CV duties. Finally, we incorporated the viewpoints of certain companies and industry groups about the status of uncollected duties and CBP’s efforts to collect them.

To determine which U.S. companies and industries have received payments under CDSOA and what effects these payments have had for recipient and non-recipient companies, we obtained and analyzed CBP’s annual disbursement data for fiscal years 2001 to 2004 and collected information from top CDSOA recipients and from recipients and non-recipients in seven industries. Specifically, we identified 770 companies that had received disbursements at some point during fiscal years 2001 through 2004 and combined the multiple disbursements that companies may have received to calculate the total amount of disbursements made to each company during this period. Some companies received disbursements under different names or were acquired by, merged with, or were otherwise affiliated with other companies on the list during this period. We did not make adjustments to the number of companies, but rather retained the company distinctions in the data as CBP provided them. We then identified 39 companies that had received the top 80 percent of the disbursements made during fiscal years 2001 through 2004, and we reported information about these disbursements. Using these data, we also identified the top 24 product groups that received 95 percent of disbursements during fiscal years 2001 through 2004, and we reported information about these disbursements.
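The kind of aggregation described above can be sketched in a few lines of Python; the disbursement records in the sketch are hypothetical placeholders, since CBP’s actual data files are not reproduced here.

    from collections import defaultdict

    # Hypothetical records of (company name, fiscal year, disbursement in dollars);
    # these stand in for CBP's actual disbursement data.
    records = [
        ("Company A", 2001, 5_000_000), ("Company A", 2003, 7_500_000),
        ("Company B", 2002, 2_000_000), ("Company C", 2004, 250_000),
    ]

    # Combine the multiple disbursements each company received during fiscal
    # years 2001 through 2004 into a single total per company.
    totals = defaultdict(float)
    for company, _year, amount in records:
        totals[company] += amount

    # Rank companies by total receipts and find the smallest group that together
    # accounts for 80 percent of all disbursements (the same logic, applied to
    # the full data, yields the 39 companies reported in table 2).
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
    grand_total = sum(totals.values())

    cumulative, top_group = 0.0, []
    for company, amount in ranked:
        top_group.append(company)
        cumulative += amount
        if cumulative / grand_total >= 0.80:
            break

    print(top_group)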
We assessed the reliability of CBP’s CDSOA disbursement data, the related Harmonized Tariff Schedule data, and the Census Bureau’s data matching the Harmonized Tariff Schedule to the North American Industry Classification System by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report.

To further determine the effects of CDSOA payments on recipients and non-recipients, we primarily relied on the views provided by top CDSOA recipient companies and by certain recipients and non-recipients in 7 of the top 24 industries (bearings, steel, candles, pasta, DRAMs, crawfish, and softwood lumber) to which CDSOA payments have been made. We selected these industries based on a range of criteria, including: the industries of the leading recipients of CDSOA funds; industries with the most AD/CV duty orders; industries receiving press coverage related to CDSOA; and industries considered by certain experts to have unique or noteworthy situations. In selecting these industries, we also considered including different types of industries and industries with differing numbers of CDSOA recipients. We consulted with experts at the ITC, the Departments of Commerce and Agriculture, and relevant trade associations to help define the industries and identify leading non-recipient companies within them. In addition, we obtained industry background from ITC investigative reports and other official industry sources.

To obtain these companies’ views on CDSOA, we developed and sent out a questionnaire to top CDSOA recipient companies, and a set of structured questions to selected recipient and non-recipient companies in the seven case study industries. We developed and pretested the questionnaire between February and April 2005. Our structured questions were based on the items in our questionnaires. We sent surveys to 32 of the top 39 recipient companies we had identified. Twenty-four of these companies provided written responses to our questions. Their views are not necessarily representative of all CDSOA recipients.

We selected non-probability samples of CDSOA recipients and non-recipients that are U.S. producers for each of our seven case study industries. We selected recipient companies based primarily on the amount of CDSOA funds they had received between fiscal years 2001 and 2004. However, in certain industries with small numbers of recipients, including bearings, DRAMs, and candles, we sent structured questions to all recipient companies. We selected non-recipient companies based on industry experts’ views of the importance of the companies and recipient company views of their major non-recipient competitors. We also considered available lists of companies by industry, but found that these lists had limitations in terms of coverage and could not be used to draw probability samples. Overall, we selected 69 recipient and 82 non-recipient companies in the seven industries. In total, we received 61 written responses from recipient companies and 31 written responses from non-recipient companies. Appendix III provides details on how many companies we contacted and received information from for each industry. All recipient companies in the bearings, DRAMs, and candles industries provided responses, and these responses can be generalized.
For recipient companies in the other four industries, and for non-recipient companies in all the industries, the responses we received cannot be generalized because of the non-probability samples we used and/or the number of responses we received. Thus, in these cases, the views we report are not necessarily representative of their respective groups. However, we supplemented the information we received with telephone interviews to verify and, in some cases, expand upon the information that some companies provided. We also compared the overall responses in each industry with industry experts’ views and the information contained in available studies, such as ITC reports, and found the information we gathered to be broadly consistent with these sources.

Finally, within this objective, we also conducted an analysis of trends in the filing of AD/CV relief petitions. We collected data on the number, type, and status of AD/CV duty orders from the ITC and Commerce. We verified this information directly with the Federal Register notices, which are the official sources for AD and CV orders. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we reviewed literature on the determinants of AD petition filings. We applied regression analysis to study the effects of macroeconomic conditions, real exchange rates, and CDSOA itself on the number of petition filings. We also asked the companies we surveyed to discuss CDSOA’s impact on AD/CV filings and interviewed industry representatives to gain an understanding of what affects their decision to file or support AD/CV petitions, and whether CDSOA was a significant factor in their decision.

To determine the status of the WTO decisions on CDSOA, we analyzed official U.S., foreign government, and WTO documents. We also interviewed officials from the Office of the U.S. Trade Representative and the Department of State. We conducted our work in Washington, D.C., and Indianapolis, Indiana, from September 2004 to September 2005 in accordance with generally accepted government auditing standards.

This appendix provides information on the CDSOA payments received by the top recipient companies and the views of these companies on CDSOA’s effects. Table 2 lists the top 39 companies that received 80 percent of the total CDSOA payments during fiscal years 2001-2004. This table also presents each company’s percentage of the total payments and the cumulative percentages. Each company’s industry is also listed. We sent surveys to the companies that received 80 percent of the CDSOA payments from 2001 through 2004, asking for their views on CDSOA’s effects. We asked these companies to assess CDSOA’s effects on a number of different dimensions, including prices, employment, and ability to compete. We asked the companies to rate CDSOA’s effect on each particular company dimension on a scale ranging from 1 (very positive) to 5 (very negative). The top recipients reported that CDSOA had the most positive impact in the areas of net income and employment. In its written comments, one company stated that CDSOA payments have allowed it to make substantial investments in its plant and its workers, including providing supplemental health care benefits. The top recipients reported that CDSOA had less of an effect in areas such as prices, net sales, and market share.
Several companies commented, for example, that disbursements have had little or no effect on prices for their CDSOA products, since such prices are ultimately determined by market forces. The ratio of CDSOA payments to company net sales ranged from less than 1 percent to over 30 percent. However, this ratio was less than 3 percent for all but five companies. In table 3 we present summary information on these companies’ responses. Table 4 shows that 17 of the 24 companies reported that CDSOA had increased their ability to compete in the U.S. market.

This appendix provides information on the CDSOA payments received by recipient companies in seven industries: bearings, steel, candles, DRAMs, pasta, crawfish, and softwood lumber. It also discusses the views of recipient and non-recipient companies in these industries on CDSOA’s effects. Figure 8 shows the share of CDSOA disbursements received by U.S. companies in the seven industries and in the remaining industries.

Bearings are used in virtually all mechanical devices to reduce friction in moving parts. Types of bearings include ball bearings, tapered roller bearings, and spherical plain bearings. The market for bearings is global and dominated by only a few multinational companies. Within the U.S. market, the degree of concentration among different segments of the industry varies; the Census Bureau listed 19 producers of tapered roller bearings and 65 producers of ball bearings in 2003. The Timken Company is the largest U.S. bearings company, but several foreign-owned companies have also had a long-standing presence in this country as bearings producers and are Timken’s main competitors in the U.S. market. One foreign-owned producer, for example, has operated U.S. production facilities for over 80 years, while two others have produced in this country for over 25 years. These companies have not been eligible to receive CDSOA disbursements because they did not support the original cases.

In 1975, the ITC determined that tapered roller bearings from Japan were harming the domestic industry, and a dumping finding was published the following year. The Department of Commerce subsequently published antidumping orders on tapered roller bearings against Japan, China, Hungary, and Romania in 1987. Commerce then issued antidumping orders for ball bearings, cylindrical roller bearings, and spherical plain bearings from a number of other countries in 1989. Currently, there are eight bearings orders in effect against seven countries. Import penetration of the U.S. market has grown from 5 percent of consumption in 1969 to approximately 25 percent in 2003. When Commerce levied ball bearing dumping duties against Japan, Singapore, and Thailand in 1989, an opportunity arose for China. All of the world’s major bearing companies, including Timken, now have manufacturing facilities in China.

Timken and Torrington are the two largest CDSOA recipient companies. Together, they received over 80 percent of all disbursements to the bearings industry and one-third of disbursements to all companies in CDSOA’s first four years. Table 5 shows CDSOA recipients in the bearings industry from fiscal years 2001 through 2004. We obtained the views of three bearings recipient companies. These companies commented that CDSOA has had positive effects, although they varied in their assessments of the extent of the benefit. Bearings recipients reported that CDSOA’s greatest impact has been in the areas of net income, employment, and ability to compete.
These companies also commented that CDSOA has had less of an effect on prices, sales, and profits. One company stated that the disbursements helped it to replace equipment and become more competitive, enabling it to recover to the position it had held prior to being injured by dumping. Another recipient commented that while the CDSOA disbursements were helpful, they were distributed years after the initial injury and did not fully compensate the company for lost profits due to unfair trade. The bearings recipient companies vary greatly in their overall size. These companies are also significantly different in terms of the amount they have received through CDSOA, overall and as a percentage of their sales. For the recipient companies in our case study, in fiscal year 2004, CDSOA disbursements as a percentage of company sales ranged from just over 1 percent to 21 percent, with the larger recipients generally at the low end of this scale.

We obtained the views of two non-recipients, one of which reported negative effects, while the other said it is too early to tell the extent of the harm that CDSOA has caused. One company commented that CDSOA is harmful because the antidumping duties it pays are transferred directly to a competitor. The company further stated that the money it is paying in duties limits its ability to invest in its U.S. operations. The other non-recipient company emphasized the size of the CDSOA disbursements in the bearings industry, but commented that it is still too early to know the injurious effect these disbursements will have on non-recipient producers. The leading non-recipient producers have not been eligible to receive CDSOA payments because they did not support the original cases. Table 6 provides bearings recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects. Table 7 provides these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. The bearings recipient companies varied in their responses to this question. One company responded that it has used the disbursements to rebuild production equipment, maintain employment levels, and add more technical personnel for pursuing bearings customers. A second company commented that it does not earmark funds for a specific project; thus the funds have been spent on debt reduction. The third company did not specify how it used the funds, reiterating that the disbursements were based on previous qualified expenditures and emphasizing that its investments in U.S. bearings production have exceeded the money it received through CDSOA.

No clear trend emerged from these companies’ production and employment data over the 4 years that CDSOA has been in effect. One recipient’s net sales increased from 2001 to 2004, for example, while another’s declined. Similarly for employment, one recipient’s number of workers decreased over the 4 years, while another’s remained about the same. The responses from the non-recipients also did not show a clear trend for production or employment. For two of the three companies, employment declined, while all three companies’ net sales increased to varying degrees. Most of the bearings companies that we contacted indicated that they had both domestic and overseas production operations.
Of the three recipient companies, only one reported that it imports CDSOA products, but its imports make up a small share of its overall sales.

To obtain bearings companies’ views on CDSOA’s effects, we sent out a set of structured questions to certain CDSOA recipients and certain non-recipients in the bearings industry. CDSOA payments are made in this industry under multiple AD orders that were issued in different years. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the four years that disbursements have been made and the amount of disbursements they have received. Using this information, we developed a list of seven recipients and ranked them by their total CDSOA receipts. We obtained additional information from company representatives and CBP, resulting in our combining certain recipients and treating them as three distinct companies. For example, CBP sometimes listed in its annual reports on CDSOA, as separate distributions, payments to entities that were divisions or subsidiaries of other companies that also received CDSOA distributions. We surveyed the three companies, and all of them provided completed surveys.

The universe of bearings non-recipients is larger than the universe of recipients. We sought to obtain views from about as many non-recipients as recipients. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we obtained information about non-recipient bearings companies by (1) identifying members of the American Bearings Manufacturing Association, (2) asking recipient companies to identify their competitors, and (3) conducting our own research. We surveyed three non-recipient companies, of which two provided completed surveys. These two non-recipients are multinational companies that are among the leading global producers of bearings and have had a long-standing history of production in the United States. The views of the non-recipients that responded to our questions may not be representative of all non-recipients.

For this case study, we defined the scope of the steel industry to include companies that produce steel by melting raw materials. The two main types of producers of raw steel are integrated mills and minimills. Integrated producers use older blast furnaces to convert iron ore into steel. They mainly produce “flat” products, such as plate and hot-rolled steel, which are used in transportation equipment, construction, and heavy machinery. The minimills are a scrap-based industry, producing steel from recycled metal products, such as crushed cars or torn-down buildings. They use newer electric-arc furnaces and account for almost all of the industry’s “long” production, including wire rod and rebar. The top three domestic steel producers—Mittal, U.S. Steel, and Nucor—together account for about half of overall domestic steel production, which is approximately 100 million tons a year. A third, much smaller sector of the industry is the specialty, or stainless, sector. These producers also use electric-arc furnaces and represent about 2 percent of the overall industry output and about 10 percent of value. The steel industry is by far the largest user of AD/CV duty orders, with over 125 iron and steel mill orders in place as of June 2005. Several industrywide trends occurring at the same time as CDSOA disbursements are relevant.
Between 1997 and 2003, 40 steel companies declared bankruptcy, with some of them ceasing operations altogether. CDSOA recipients were not immune from this general trend; several of them have declared bankruptcy, and various firm consolidations have also occurred. The Asian financial crisis was an important factor in increasing steel imports to this country, as Asian demand for steel dropped and foreign steel companies increasingly looked to the United States as a market for their products. The surge in imports led to the filing of relief petitions on hot-rolled steel against Russia, Japan, and Brazil beginning in 1998. Companies subsequently filed relief petitions against 11 other countries. In 2002, the President also took action under section 201 of the Trade Act of 1974, which allows him to implement temporary relief when an industry has been seriously injured by surging imports. Under this authority, the President announced a series of safeguard tariffs of up to 30 percent on a range of steel products. These tariffs, which were imposed in addition to the AD/CV duties, remained in place from March 2002 until late 2003. Much of the industry returned to profitability in 2004, when prices rose. Table 8 depicts the top 10 CDSOA recipients for steel in fiscal years 2001 through 2004.

Recipient steel companies varied in their assessments of the payments’ effects, but generally agreed that they had a positive impact in the areas of net income and investment in plant, property, and equipment. For example, several recipients said disbursements enabled them to make investments needed to survive the steel crisis and be competitive in the future. The companies also generally stated that CDSOA disbursements have had little or no effect on prices, net sales, and market share. Some steel recipients also commented that CDSOA has not been a complete solution to the problems they faced due to unfairly traded imports. One recipient commented, for example, that while CDSOA payments could be presumed to have had a tangible benefit for the industry, they have not come close to erasing the years of financial injury brought on by unfairly traded steel products. Some steel companies acknowledged that the CDSOA disbursements have not been significant in relation to their size or capital expenditure needs. For each of the 13 steel companies in our case study, the CDSOA disbursements they received amounted to less than 1 percent of their net sales in fiscal year 2004. Table 9 provides steel recipients’ responses to our questionnaire on CDSOA’s effects. Table 10 provides these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. The steel recipient companies generally did not provide specific replies to this question. General comments by these companies included that they used the CDSOA payments to make capital investments, reduce debt, and assist in the acquisition of steel-making assets. Sales, profit, and income figures generally improved markedly for the steel companies between 2003 and 2004, as the overall industry enjoyed a strong rebound from the previous years. In some cases, companies went from showing net losses to showing net income between these 2 years. Some companies also expanded greatly across all categories as they grew by acquiring the assets of other companies.
Overall, some companies gained employees, while other companies lost them. None of the recipient steel companies responding to our questionnaire reported that they are involved in overseas production or importation of CDSOA products.

To obtain steel companies’ views on CDSOA’s effects, we sent out a set of structured questions to certain steel CDSOA recipients and non-recipients. CDSOA payments are made in this industry under multiple steel and steel-related AD and CV orders that were issued over several years. For this case study, we defined the scope of the steel industry to include only companies that produce steel by melting raw materials. Our scope excludes companies that primarily make steel-related products (such as pipe or tubing) from purchased raw steel. As discussed below, we were not able to obtain information from steel non-recipients on CDSOA’s effects.

To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments, according to our definition of the industry, in each of the 4 years that disbursements have been made and the amount of disbursements they have received. We obtained information from representatives of the ITC, Commerce, and industry associations to determine precisely which companies fit under our definition of the steel industry. Using this information, we developed a list of 69 recipients and ranked them by their total CDSOA receipts. Because of time and resource constraints, we decided to survey the top 15 steel recipient companies that had received 90 percent of the distributions made under the orders included in our scope. Two of these companies had ceased operations. We surveyed the remaining 13 companies and received completed surveys from all of them. The 13 respondents accounted for about 72 percent of the CDSOA payments to this industry; their views may not be representative of all recipients, particularly those that received relatively small CDSOA receipts.

The universe of steel non-recipients is larger than the universe of recipients. We sought to obtain views from about as many non-recipients as recipients. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Besides the ITC, we spoke with several steel industry associations (American Iron and Steel Institute, Steel Manufacturers Association, and the Specialty Steel Industry of North America) to identify leading steel non-recipients. We also asked recipient companies to identify their competitors. Based on these meetings and our own research, we surveyed 12 leading non-recipient steel companies, from which we received 1 completed survey. However, this survey did not include comments or views on CDSOA’s effects. As a result, we are not able to present the views of steel non-recipient companies on CDSOA’s effects.

Petroleum wax candles are produced in several forms, including columns or pillars, wax-filled containers, tapers or dinner candles, votives, and novelty candles. They are sold to consumers through retail outlets, with the largest share sold through mass merchandisers (such as Wal-Mart or Target), followed by department stores, discount retailers, card and gift shops, and door-to-door sales through membership groups. The majority of petroleum wax candles are produced and imported for national markets.
The number of domestic producers has grown from over 100 when the ITC performed its original investigation in 1986 to over 400 at the time of its second 5-year review in 2005. Only 10 domestic candle producers are eligible for CDSOA payments. Table 11 shows these companies’ CDSOA disbursements and claims. According to the ITC, these recipients, in addition to approximately 35 other candle producers, make up 70 percent of U.S. candle production.

In 1985, the National Candle Association (NCA) filed a petition alleging that the U.S. candle industry was materially injured by dumped imports of petroleum wax candles from China. The ITC determined injury in 1986, and Commerce issued an antidumping duty order of 54 percent on all Chinese producers and exporters. The ITC conducted a 5-year expedited review in 1999, and the duty doubled from 54 percent to 108 percent after another expedited review in 2004. U.S. producers’ share of the market by quantity (pounds) went from 43 percent in calendar year 1999 to 53 percent in calendar year 2004. Imports from China, which some perceive as lower-end candles, accounted for 20 percent in 1999, rising to 27 percent in 2004. U.S. producers and Chinese suppliers have both gained market share in recent years. U.S. producers’ share of candle dollar value was 66 percent in 1999, rising to 70 percent in 2004, while China’s share rose from 10 percent in 1999 to 14 percent in 2004. The ITC is presently conducting a full 5-year “sunset” review of this order, and recently presented its findings to Commerce. Also, Commerce is considering whether the scope of the order should be changed, inquiring whether mixed wax candles composed of petroleum wax and varying amounts of either palm or vegetable wax alter the product so that it is not subject to the current order. Table 11 depicts CDSOA recipients for candles in fiscal years 2001 through 2004.

Recipients report that CDSOA distributions have had positive effects on their net income; on their property, plant, and equipment; and on research and development. One of the larger recipients of CDSOA distributions claims that these payments have lessened the need to consider sourcing its candle products from abroad. However, the company reported that because of the effects of dumped Chinese candles, it continues to lay off workers, though fewer than it might have absent the CDSOA funds. Other recipients claim to have developed new, better, and safer candles by reinvesting CDSOA disbursements in research and development. Fiscal year 2004 CDSOA disbursements as a percentage of company sales ranged from 0.4 percent to 34.7 percent for the 10 recipient candle companies, with most companies’ shares at the higher end of this range.

Non-recipients report that CDSOA distributions to their competitors have had negative effects on their ability to compete in the market, on their gross profits, and on net income. They also reported very negative effects on industry competition. One non-recipient company has closed two of four domestic manufacturing facilities, eliminated or reduced shifts, and released workers. Another non-recipient company claims that its CDSOA-recipient competitors were able to reduce selling prices. While the company matched competitors’ lower prices, it made no profit. Because of this, it has recently exited this segment of the candle business and released workers accordingly. Some non-recipients also expressed the view that their ineligibility for CDSOA disbursements is unfair.
One non-recipient company joined the NCA as a leader of the organization a few years after the issuance of the order, but stated that it has no institutional memory of receiving an ITC questionnaire during the agency’s original investigation in 1986. This company said it has supported the order, as well as NCA’s efforts to defend the order, since joining the NCA. Another non-recipient is ineligible by virtue of having been acquired by a firm that opposed the original investigation, and it was unsuccessful in its legal challenge of this ineligibility. Table 12 shows candle recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects. Table 13 depicts these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Several recipients claim that they have used CDSOA funds to invest in new and better equipment, and in research and development. One recipient company reports that it has been able to offer employees consistent and comprehensive benefits packages due to CDSOA funds. For smaller candle companies—both recipients and non-recipient respondents alike—net sales have stagnated, as has employment of production and related workers. Some of the larger non-recipient respondents appear to have experienced some growth in these categories, while some of the larger recipients seem to have experienced some decline or stagnation in net sales and some growth or stagnation in production and employment. Most candle companies are strictly domestic producers; however, one non-recipient stated that it would start to import some of its candle products from Asia in order to keep its costs down.

To obtain the views of candle companies on CDSOA’s effects, we sent out a set of structured questions to candle CDSOA recipients and certain non-recipient companies within the industry. CDSOA payments are made under one AD order that was issued in 1986. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made. Using this information, we developed a list of 11 recipients and ranked them by their total CDSOA receipts. One of these companies now receives CDSOA payments under the name of its parent company, leaving 10 distinct companies. We sent surveys to all recipient companies, and all of them provided completed surveys.

The universe of candle non-recipients is larger than the universe of recipients. We sought to obtain views from about as many non-recipients as recipients. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we (1) obtained a list of members of the NCA from its website; (2) corroborated this list with information from a recent ITC publication; and (3) obtained information about certain non-NCA members based on our own research. Because of time and resource constraints, most of the non-recipient candle companies we contacted are members of the NCA. Surveys were sent to non-recipient candle companies for which an e-mail address could be obtained either from the NCA list or from the company directly. We surveyed 26 non-recipient candle makers, of which 8 provided completed surveys.
Respondents included two relatively large candle companies whose net candle sales were similar in magnitude to those of one of the largest candle CDSOA recipients, and several smaller candle companies whose net sales were similar to or slightly larger than those of several of the smaller CDSOA candle recipients. The views of these respondents may not be representative of all non-recipients.

The bulk of pasta production in the United States is dry pasta, with production of frozen or refrigerated pasta constituting a smaller portion of the U.S. industry. After several decades of mergers and acquisitions, and the 2001 sale of one major producer’s production facilities and brand names to two of its competitors, the industry’s current structure reflects a high degree of concentration among a few large producers. The four largest U.S. producers as of 2001, based on ITC data, were American Italian Pasta Company, New World Pasta, Dakota Growers Pasta Company, and Barilla America, Inc. (a U.S. subsidiary of an Italian pasta company that was set up in 1998 after antidumping and countervailing duty orders on Italian dry pasta imports were issued). An industry expert estimated that these four companies currently account for about 80 percent of dry pasta production in the United States, with the remainder supplied by smaller or specialty companies. Three of the four are eligible for CDSOA disbursements, but Barilla America, Inc., whose share of U.S. production is growing and which said it imports only a small percentage of the pasta it sells here, is not.

Overall demand for dry pasta in the United States has been declining since the late 1990s, a trend that has been exacerbated, according to dry pasta companies and industry experts, by diets that emphasize low-carbohydrate intake. Further, the industry has been experiencing decreased sales, excess capacity, and plant closures. Among the more significant indicators of the downturn, New World Pasta—a leading CDSOA recipient—filed for Chapter 11 bankruptcy protection in 2004. According to the ITC, about three-fourths of U.S. consumption of dry pasta in 2000 was supplied by domestic producers, with the remainder supplied by imported products. At that time, the largest sources of imported pasta were Italy, Canada, Korea, and Mexico.

Several U.S. producers petitioned for relief from rapidly growing imports in 1995. In 1996, Commerce issued antidumping and countervailing duty orders on certain pasta imports from Italy and Turkey. Initial AD duties ranged from 0 to about 47 percent on Italian pasta and about 61 to 63 percent on Turkish pasta, while initial CV duties ranged from about 0 to 11 percent on Italian pasta and about 4 to 16 percent on Turkish pasta. Since Commerce issued the orders, dry pasta imports from Italy have declined, and Turkey is no longer a leading supplier of pasta to the United States. The ITC completed a sunset review in 2001 that extended the orders until 2006. The top seven CDSOA recipients have received about 99 percent of the payments made to the industry, with American Italian Pasta Company and New World Pasta/Hershey Foods receiving 70 percent of total payments. Table 14 shows total payments made to all dry pasta CDSOA recipients in fiscal years 2001 through 2004.

The four pasta recipients that responded to our survey viewed the CDSOA program as having mostly positive effects on their companies. The two largest recipients did not respond to our survey, and we did not contact the three smallest recipients.
All respondents cited the most positive company effects in the areas of profit; income; and investment in property, plant, and equipment; and most cited positive effects on net sales and ability to compete. Some recipient companies noted that the program has enhanced their ability to increase production through plant expansions and upgrades; improved their cash flow, allowing them more operating flexibility; reduced manufacturing costs; and enhanced some companies’ competitive position. Funds have also helped some companies develop new products. CDSOA disbursements to the pasta industry have been small compared to each company’s net sales. For example, fiscal year 2004 CDSOA payments to the pasta companies that responded to our survey represented about 1 percent or less of each company’s 2004 net sales.

Among the six pasta non-recipients that responded to our survey, views about the effect of CDSOA funds were mixed. A few said the funds had impacted their companies negatively in certain areas or created an unfair competitive environment in the industry, while others thought effects were minimal or could not judge the program’s effects. About half of the non-recipients thought the program has had little or no effect on their companies in the areas of employment, prices, sales, investment, or market share. Some non-recipients thought the program had negatively impacted their companies’ profits, income, and ability to compete. Some non-recipients said that the program has probably helped recipients cut prices, and that this has created an unfair advantage in the industry for recipients. One non-recipient stated that it has had to transfer substantial sums of money to its competitors because of CDSOA, and that these funds would otherwise likely have been used for product development, capital investment, and expansion at its U.S. facility. Table 15 provides pasta recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects. Table 16 provides these companies’ responses to our question on CDSOA’s effects on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Recipients used CDSOA funds for a variety of purposes. For example, some said they used the funds to purchase new equipment or upgrade existing equipment; reduce manufacturing costs and improve cash flow; increase production capacity; and invest in research and product development. This, in turn, led to increased production and employment among some companies. One company that did not respond to our survey disclosed in its 2003 annual report that it used a significant portion of the funds to increase investment in brand-building activities and to strengthen the company’s organization. One recipient noted that CDSOA funds have been helpful because margins in the industry are very thin and competition is strong. Because CDSOA improved one company’s bottom line, that company was able to obtain more attractive financing rates.

Our information about the effect of CDSOA on net sales and employment in this industry is limited because the two largest companies did not respond to our survey. Although press coverage of the industry has noted generally declining net sales among U.S. dry pasta companies in recent years, the companies that responded to our questions reported general increases in net sales during 2001 through 2004.
Specifically, two companies reported increased sales in the 2001 through 2004 time frame, and two companies reported fluctuating sales that were higher at the end of the period than at the beginning. Among recipient respondents, two companies’ employment levels have generally increased and two companies’ employment levels have generally decreased since the implementation of CDSOA. Among non-recipient respondents, net sales and employment showed mixed trends. Three companies reported increased sales, one company reported fluctuating sales that were higher at the end of 2004, and two companies reported decreased net sales. Three companies reported generally increased employment levels and three reported general decreases. All of the recipient pasta companies that responded to our survey produce their product only in the United States. However, the top CDSOA recipients that did not respond to our survey produce pasta domestically and in other countries. Four of the non-recipients produce exclusively in the United States, and two produce both domestically and overseas.

To obtain pasta companies’ views on CDSOA’s effects, we sent out a set of structured questions to certain pasta CDSOA recipients and non-recipients. CDSOA payments are made in this industry under two AD and two CV orders that were issued simultaneously. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made and the amount of disbursements they have received. Using this information, we developed a list of 11 recipients and ranked them by their total CDSOA receipts. CBP provided additional information that indicated there were actually 10 distinct companies. Because of time and resource constraints, we decided to survey the top seven companies that had received 99 percent of the total payments made under these orders, from which we received four completed surveys. The two pasta companies that are top CDSOA recipients did not respond to our survey. Our information about CDSOA effects for recipients is limited to the four pasta companies that responded, which together accounted for about 27 percent of CDSOA payments to this industry.

The universe of dry pasta non-recipients is larger than the universe of recipients. We sought to obtain views from about as many non-recipients as recipients, but we had difficulty identifying non-recipient dry pasta companies. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we obtained company names and contact information from (1) the website of the National Pasta Association, which presently carries out only limited activities on behalf of the industry; (2) an association management company that handles administrative matters for the National Pasta Association; (3) a directory of pasta companies published on http://www.bakingbusiness.com, a division of Milling and Baking News, which is a business news organization that the ITC had identified as closely following the pasta industry; and (4) other pasta companies. Many of the companies we identified through these sources were not makers of dry pasta as defined in the orders, but were instead makers of egg noodles, fresh or refrigerated pasta, couscous, and boxed or frozen foods that use pasta, or were flour mills or other companies linked to the production of dry pasta.
We surveyed eight non-recipient dry pasta manufacturers, from which we received six completed surveys. The respondents include the fourth-largest dry pasta manufacturer in the United States, several smaller pasta companies that produce durum wheat pasta, one company that produces wheat-free pasta, and one company that produces exclusively organic pasta. The views of these respondents may not be representative of all non-recipients.

Dynamic random access memory (DRAM) semiconductors are considered commodity products and compete largely on the basis of price; DRAMs of similar density, access speed, and variety are generally interchangeable regardless of the country of fabrication. Today, four companies produce DRAMs in the United States: Micron Technology is a U.S. company, Infineon Technologies is a spin-off of the German company Siemens, and Samsung Electronics and Hynix Semiconductor are Korean companies. All of these companies now have production facilities in the United States as well as abroad, but the latter three have entered the U.S. industry within the past decade. The DRAM industry is cyclical in nature, with demand driven by investments in computers and other end products. Fabrication facilities are costly and require complete replacement approximately every 10 years. Due to high fixed costs, chip manufacturers cannot afford to scale down production; they must constantly produce chips and invest or go out of business.

One countervailing duty order is currently in effect, for DRAMs produced by Hynix only. This duty order came into effect in 2003, and its duty rate is currently 44 percent. Micron Technology received the bulk of distributions in this industry because it was the sole recipient of duties from two antidumping orders dating from the 1990s on DRAMs and other kinds of chips. Payments were made to Micron on DRAMs of 1 megabit and above under one AD order issued in 1993 and revoked in 2000, as well as under an AD order on SRAMs (static random access memory chips) issued in 1998 and revoked in 2002. The vast majority of CDSOA disbursements to the industry (approximately $33 million) in fiscal years 2001 through 2004 were related to these orders. Infineon did not incorporate in the United States until 2000 and, therefore, did not participate in the earlier investigations. Both Infineon and Micron are eligible and received disbursements under the current order, but Hynix and Samsung are not eligible because they opposed the petition. Because DRAMs are a technologically dynamic product, it is expected that Commerce will revoke these orders when the subject products become obsolete. New products, in turn, may be the subject of new petitions and orders, creating new potential CDSOA recipients. Table 17 depicts CDSOA recipients for DRAMs in fiscal years 2001-2004.

The two recipients of CDSOA disbursements reported mixed effects. One recipient reported that, although at the time it was operating at a net loss, CDSOA distributions improved its profitability, investment, employment, and research and development. The company noted that the payments would be of greater help if they were made soon after other countries began their unfair trade practices. Another recipient reported that disbursements were immaterial to its operations. Fiscal year 2004 CDSOA disbursements were equal to less than 1 percent of both companies’ sales. Table 18 presents DRAM recipients’ responses to our questionnaire on CDSOA’s effects.
Table 19 shows companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. One recipient uses CDSOA distributions to fund U.S. operations and to invest in new U.S. production equipment. The other recipient also uses distributions in operations. Historically, the DRAM market has been subject to periods of “boom and bust.” Both CDSOA recipients reported some net losses and have experienced slight declines in production and related workers during the past 4 fiscal years. One company has DRAM production facilities in three U.S. states as well as Japan, Italy, and Singapore. The other indicated that it has both domestic and foreign production facilities; it also noted that DRAMs manufactured in the United States can be sold abroad, and DRAMs manufactured abroad can in turn be sold in the United States. To obtain the views of DRAM-producing companies on CDSOA’s effects, we sent a set of structured questions to the two CDSOA recipients. Current CDSOA payments on DRAMs are made on a CV order issued in 2003. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made. CBP identified two companies. We surveyed both recipient companies, and both provided completed surveys. To identify non-recipients, we consulted the recipient companies to identify their competitors, and we obtained information on domestic producers from the ITC’s final determination on DRAM and DRAM Modules from Korea. There are two U.S. subsidiaries of Korean companies that are considered domestic producers who opposed the petition for the current order. We attempted to contact these companies but were unsuccessful in our efforts. We did not attempt to contact a fifth company that is also considered a domestic producer; this company does not list the major DRAM producers as competitors, and has no fabrication facilities. ITC listed other domestic producers for the purposes of its investigation, but these companies have since ceased DRAM production or have ceased to exist. Crawfish are freshwater crustaceans that resemble lobsters but are considerably smaller. U.S. commercial production of crawfish is concentrated within a relatively small area of southern Louisiana, where crawfish are harvested in the wild by fishermen and farmed in ponds. Crawfish may be sold whole and live, whole and boiled, or as fresh or frozen tail meat. Whole crawfish and fresh tail meat are consumed primarily in Louisiana and neighboring states, where there is generally a preference for local products in season. Tail meat is also sold more broadly throughout the United States. U.S. producers supply whole crawfish and fresh and frozen tail meat, whereas imports, mainly from China, are primarily frozen tail meat. U.S. businesses that process whole crawfish into tail meat are primarily small, family-owned concerns. Inexpensive imports and poor harvests have driven many domestic crawfish processors out of business in recent years. It is estimated that there were over 100 processors in Louisiana in the 1980s and early 1990s, but that number has dropped by more than half. In 1996, the Crawfish Processors Alliance, an industry association, and the Louisiana Department of Agriculture and Fisheries filed a petition alleging that U.S. 
processors of crawfish tail meat were being injured by dumped imports of crawfish tail meat from China. Significant imports of tail meat began in the mid-1990s, and ITC estimates that imports’ share of consumption grew from just over 60 percent in 1997 to about 87 percent in 2002. In 1997, Commerce issued an antidumping order on crawfish tail meat and imposed antidumping margins that ranged from about 92 to about 202 percent. Table 20 depicts the top 10 CDSOA recipients for crawfish in fiscal years 2001-2004. CDSOA recipient respondents in the crawfish tail meat processing industry stated that the program has generally had positive effects for the industry and their companies. Several recipient respondents credited CDSOA with saving the domestic crawfish processing industry. Because of the program, they said, businesses remained open, employees kept their jobs, and crawfish fishermen continued to fish. The areas in which positive effects were most often cited were income; profits; investment in property, plants, and equipment; employment; and ability to compete. The program was generally seen as having little or no effect on prices, research and development, and market share. Many recipients stated that the program had encouraged them to purchase and process more crawfish and freeze more tail meat for sale in the off-season, leading to increased employment among some processors and higher sales volumes for crawfish farmers and fishermen. Many respondents noted the poor collection rate and enforcement of the AD order for crawfish and viewed the CDSOA program as providing their only effective relief from dumped imports. (CBP disbursed about $9.8 million to crawfish processors in fiscal year 2003 but reported that the uncollected duties related to crawfish in that year were about $85.4 million. In fiscal year 2004, CBP disbursed about $8.2 million to the industry, but uncollected duties rose to about $170 million. Nearly two-thirds of all uncollected duties in fiscal year 2004 were related to the crawfish order.) Recipients complained that widespread non-payment of duties means Chinese crawfish continues to enter the U.S. market unabated. In its 2003 review to evaluate continuation of the AD order, ITC found that Chinese tail meat undersold (was sold at a lower price) domestic tail meat to the same degree with the AD order in place as it had before the order was issued, suggesting that the order has not affected the price of imported tail meat. Although CDSOA disbursements in this industry have been small compared to certain other industries, these payments have been significant for some recipients when compared to net sales. For the 16 recipients that responded to our survey, fiscal year 2004 CDSOA disbursements as a percentage of 2004 net sales ranged from a low of about 4 percent for one company to a high of about 350 percent for another. Among the other respondents, four companies’ fiscal year 2004 disbursement was about 15 to 18 percent of their net sales that year, five companies’ disbursement was about 27 to 33 percent of their net sales, and four companies’ disbursement was between 52 and 96 percent of their net sales. One company did not report any net sales information to us. Non-recipient crawfish processors that responded to our survey said that the CDSOA program has helped recipient companies, but has harmed non-recipient companies by creating conditions of unfair competition among domestic processors. 
Most non-recipients cited negative effects for their companies in terms of ability to compete, net sales, profits, income, investment, and employment, which are generally the areas where recipients saw positive effects. Several non-recipients stated that they were unable to compete with the CDSOA recipients. For example, several non-recipients said that recipient companies were offering tail meat for sale at prices that were below the cost of production and were able to do so because their CDSOA funds would compensate them for any losses. In such conditions, some non-recipients said they cannot operate profitably and some decided to stop producing tail meat in recent years. Table 21 provides crawfish recipients and non-recipients’ responses to our questionnaire on CDSOA’s effects. Table 22 provides these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Recipient companies reported a wide range of uses for the funds. For example, most of the companies that reported this information said they purchased or upgraded equipment, buying new or larger delivery trucks, boilers, ice machines, freezers, coolers, and vacuum-pack machines. Several companies bought more crawfish to peel and hired more employees, thereby increasing their production of tail meat. Several companies said that they made investments and repairs to their plants, such as installing or expanding docks for receiving shipments of whole crawfish. Several also paid off long-standing company and personal debts. For example, the head of one small family-run company said he paid off mortgages on the plant and his residence, bought new equipment, and made needed repairs without incurring new financing costs. One company said that it started a pension plan for its employees. More than half of the recipient companies that we surveyed had growing net sales in the 2001 through 2004 time frame. Other companies’ net sales fluctuated, decreased, or were relatively stable. Several respondents said that one of the most significant outcomes of the CDSOA program was to encourage them to purchase and process more crawfish and freeze more tail meat for sale in the off-season, thereby improving their year-round cash flow. Most non-recipients that responded to our survey did not provide net sales information. More than half of the crawfish recipient respondents also reported growth in employment levels, and some of these increases were significant. One company quadrupled the number of production and related workers during the 2001 through 2004 period (from 28 to 111) and the number of such workers at three other companies doubled. Several stated CDSOA enabled them to hire more people. Three recipients reported net decreases in the number of production and related workers in this time period. Non-recipients also generally did not report employment information. Survey respondents said they process tail meat exclusively in the United States. We did not gather any information that disclosed whether, in the course of doing business, any of these processors also import or offer imported tail meat for sale. To obtain crawfish tail meat processing companies’ views on CDSOA’s effects, we sent out a set of structured questions to certain crawfish CDSOA recipients and non-recipients. 
CDSOA payments are made in this industry under one AD order. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the three years that disbursements have been made and the amount of disbursements they have received. Using this information, we developed a list of 35 recipients and ranked them by their total CDSOA receipts. CBP provided additional information that indicated that certain companies had received funds under different names in different years. Because of time and resource constraints, we decided to survey 20 of the top recipients that had received about 90 percent of the total payments made under this order. We received 16 completed surveys. These 16 companies accounted for about 73 percent of CDSOA payments to this industry; their views may not be representative of all recipients, particularly those that received relatively small CDSOA disbursements. The size of the universe of crawfish non-recipients is not known. We sought to obtain views from a comparable number of non-recipients as recipients, but we had difficulty identifying non-recipient crawfish companies. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we obtained contact information for current and former tail meat processors that are non-recipients from (1) a law firm that represents the Crawfish Processors Alliance, an entity that was a petitioner in this case; (2) the Louisiana Department of Agriculture and Fisheries, an entity that was a petitioner in this case; (3) the Louisiana Department of Health and Hospitals, which licenses and inspects processors; and (4) certain other tail meat processors. We lacked accurate contact information for several of these companies. We surveyed 17 current and former processors, from which we received 9 completed surveys. The views of these respondents may not be representative of all non-recipients. Softwood lumber generally comes from conifers or evergreen trees, including pine, spruce, cedar, fir, larch, Douglas fir, hemlock, cypress, redwood, and yew. Softwood is easy to saw and is used in structural building components. It is also found in other products such as mouldings, doors, windows, and furniture. Softwood is also harvested to produce chipboards and paper. U.S. softwood lumber producers are generally located in the southeast and northwest, with northwestern softwood lumber being comparable to Canadian softwood lumber. CDSOA disbursements to the softwood lumber industry went to 143 companies in fiscal years 2003 and 2004. According to one estimate, about half of the softwood lumber companies are eligible to receive these disbursements. Canada’s share of the U.S. lumber market rose from less than 3 billion board feet (BBF) and 7 percent of the market in the early 1950s to more than 18 BBF per year and 33 percent of the market in the late 1990s. In 2003, U.S. imports of softwoods were 49,708 thousand cubic meters, and the ratio of these imports to consumption was 37.4 percent. Since 1981, the United States and Canada have been involved in several softwood lumber disputes, leading to, among other things, a 15 percent Canadian tax on lumber exports in 1986; a countervailing duty of 6.51 percent on Canadian imports in 1992, which ended in 1994; and a 1996 Softwood Lumber Agreement restricting Canadian exports for five years, until 2001. The U.S. 
again imposed antidumping and countervailing duties on Canadian imports in 2002. From May 2002 to December 2004, most Canadian softwood lumber exported to the United States was subject to a combined antidumping and countervailing duty of 27 percent. In December 2004, this combined duty was reduced to 21 percent. These two duty orders funded about $5.4 million in CDSOA disbursements to U.S. softwood lumber companies in fiscal years 2003 and 2004. Leading U.S. softwood lumber producers are among the industry’s top CDSOA recipients. However, major U.S. producers are also among those ineligible to receive CDSOA disbursements. CBP has received over $3.7 billion in deposits to cover estimated duties from softwood lumber imports from Canada. Table 23 depicts the top 10 CDSOA softwood lumber recipients for fiscal years 2003-2004. Recipient and non-recipient companies generally noted that, because CDSOA disbursements had been so small in fiscal years 2003-2004, totaling about $5.4 million, they had had little or no effect on their companies. Although recipient companies vary greatly in their overall size, these companies do not vary significantly in terms of the amount they have received through CDSOA as a percentage of their sales in fiscal year 2004. Specifically, the ratio of CDSOA disbursements to company sales was less than 1 percent for the recipient companies in our study. However, some recipient and non-recipient companies emphasized that, if the United States ever were to liquidate and disburse the large amount of softwood lumber duties currently being held in deposit by Treasury, these disbursements would have major effects on both recipient and non-recipient companies. One recipient company noted that these disbursements would have positive effects on its company, while a non-recipient company emphasized negative effects. Because capital is a major factor in competitiveness, a non-recipient company stated that, if recipient companies were to invest large CDSOA disbursements in new mills, they would be able to dramatically increase their efficiency, output, and market share. Table 24 provides softwood lumber recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects. Recipient and non-recipient companies generally reported that the CDSOA disbursements had had no effect on their companies’ ability to compete in the U.S. market. Table 25 presents these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Overall, companies noted that they had used the payments for a variety of purposes, such as paying debt, past qualifying expenditures, general operating expenses, general corporate expenses, and capital investment. Others noted that the payments had been too small to track their use in any area. Overall, recipient and non-recipient companies we contacted vary significantly in size. Both groups show slight increases in net sales and employment over the 4 years that CDSOA has been in effect. Leading U.S. producers are among the CDSOA recipient and non-recipient companies. Most recipient companies we contacted produced CDSOA-related products domestically. Some non-recipient companies we contacted produced these products domestically. Others produced them both domestically and abroad. 
To obtain softwood lumber companies’ views on CDSOA’s effects, we sent out questionnaires to certain softwood lumber CDSOA recipients and non-recipients. CBP made CDSOA payments to recipients in this industry in fiscal years 2003 and 2004 under an AD order and a CV order, both issued in 2002. To identify CDSOA recipients, we obtained information from CBP about the companies that had received CDSOA payments in the 2 fiscal years and the amount of disbursements they had received. Using this information, we developed a list of 143 recipients and ranked them by their total CDSOA receipts in the 2 fiscal years. Because of time and resource constraints, we decided to survey the top 14 recipients that had received about 60 percent of the total softwood lumber payments. CBP provided contact information on these companies to us. From these 14 companies, we received 13 completed surveys. These 13 companies accounted for about 59 percent of all softwood lumber disbursements. Their views may not be representative of all recipients, particularly those that received relatively small CDSOA disbursements. Given that about half of the industry is eligible to receive CDSOA disbursements, we sought to obtain views from a comparable number of recipients and non-recipients. To identify non-recipient companies, we obtained information from public and private sources that are knowledgeable about the industry. Specifically, we obtained information on non-recipients from the ITC and softwood lumber companies. We surveyed 15 companies and we received six completed surveys from them. These respondents included a wide range of top non-recipients, including one of the largest companies in the industry. However, their views may not be representative of all non-recipients. Kim Frankena served as Assistant Director responsible for this report, and Juan Tapia-Videla was the Analyst-in-Charge. In addition to those named above, the following individuals made significant contributions to this report: Shirley Brothwell, Ming Chen, Martin de Alteris, Carmen Donohue, John Karikari, Casey Keplinger, Jeremy Latimer, and Grace Lui. The team benefited from the expert advice and assistance of Jamie McDonald, Jena Sinkfield, Tim Wedding, and Mark Speight.
Between fiscal years 2001 and 2004, the Continued Dumping and Subsidy Offset Act (CDSOA) provided over $1 billion funded from import duties to U.S. companies deemed injured by unfair trade. Some supporters state CDSOA helps U.S. companies compete in the face of continuing unfair trade. Some opponents believe CDSOA recipients receive a large, unjustified windfall from the U.S. treasury. Also, 11 World Trade Organization (WTO) members lodged a complaint over the law at the WTO. This report assesses (1) key legal requirements guiding and affecting agency implementation of CDSOA; (2) problems, if any, U.S. agencies have faced in implementing CDSOA; and (3) which companies have received CDSOA payments and their effects for recipients and non-recipients; and describes (4) the status of WTO decisions on CDSOA. Congress enacted CDSOA to strengthen relief to injured U.S. producers. The law's key eligibility requirements limit benefits to producers that filed a petition for relief or that publicly supported the petition during a government investigation to determine whether injury had occurred. This law differs from trade remedy laws, which generally provide relief to all producers in an industry. Another key CDSOA feature requires that Customs and Border Protection (CBP) disburse payments within 60 days after the beginning of a fiscal year, giving CBP limited time to process payments and perform desired quality controls. This time frame, combined with a dramatic growth in the program workload, presents implementation risks for CBP. CBP faces three key implementation problems. First, processing of company claims and CDSOA payments is problematic because CBP's procedures are labor intensive and do not include standardized forms or electronic filing. Second, most companies are not accountable for the claims they file because they do not have to support their claims and CBP does not systematically verify the claims. Third, CBP's problems in collecting duties that fund CDSOA have worsened. About half of the funds that should have been available for disbursement remained uncollected in fiscal year 2004. Most of the CDSOA payments went to a few companies with mixed effects. About half of these payments went to five companies. Top recipients we surveyed said that CDSOA had beneficial effects, but the degree varied. In four of seven industries we examined, recipients reported benefits, but some non-recipients noted CDSOA payments gave their competitors an unfair advantage. These views are not necessarily representative of the views of all recipients and non-recipients. Because the United States has not brought CDSOA into compliance with its WTO obligations, it faces additional tariffs on U.S. exports covering a trade value of up to $134 million based on 2004 CDSOA disbursements. Recently, Canada, the European Union, Mexico, and Japan imposed additional duties on various U.S. exports. Four other WTO members may follow suit.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Navy ordnance business area, which consists of the Naval Ordnance Center (NOC) headquarters and subordinate activities, such as Naval weapons stations, operates under the revolving fund concept as part of the Navy Working Capital Fund. It provides various services, including ammunition storage and distribution, ordnance engineering, and missile maintenance, to customers who consist primarily of Defense organizations, but also include foreign governments. Revolving fund activities rely on sales revenue rather than direct congressional appropriations to finance their operations and are expected to operate on a break-even basis over time—that is, to neither make a profit nor incur a loss, but to recover all costs. During fiscal year 1996, the Navy ordnance business area reported revenue of about $563 million and costs of about $600 million, for a net operating loss of about $37 million. In accordance with current Department of Defense (DOD) policy, this loss and the $175 million the business area lost during fiscal years 1994 and 1995 will be recouped by adding surcharges to subsequent years’ prices. As discussed in our March 1997 report, higher-than-expected overhead costs were the primary cause of the losses that the business area incurred during fiscal years 1994 through 1996. We also testified on this problem in May 1997, and recommended that the Secretary of the Navy develop a plan to streamline the Naval ordnance business area’s operations and reduce its overhead costs. The Navy has initiated a restructuring of the business area that, according to the Secretary of the Navy, is “akin to placing it in receivership.” The objective of our audit of the Navy ordnance business area was to assess the Navy’s efforts to reduce costs and streamline its operations. Our current audit of the restructuring of the Navy ordnance business area is a continuation of our work on the business area’s price increases and financial losses (GAO/AIMD/NSIAD-97-74, March 14, 1997). In that report we recommended that the Secretary of Defense direct the Secretary of the Navy to develop a plan to streamline the Navy ordnance business operations and reduce its infrastructure costs, including overhead. This plan should (1) concentrate on eliminating unnecessary infrastructure, including overhead, (2) identify specific actions that need to be accomplished, (3) include realistic assumptions about the savings that can be achieved, (4) establish milestones, and (5) clearly delineate responsibilities for performing the tasks in the plan. To evaluate the actions being taken or considered by the NOC to streamline its operations and reduce costs, we (1) used the work that we performed in analyzing the business area’s price increases and financial losses and (2) analyzed budget reports to identify planned actions and discussed the advantages and disadvantages of the planned actions with Navy, OSD, U.S. Transportation Command, and Joint Staff officials. 
In analyzing the actions, we determined (1) if specific steps and milestones were developed by the NOC to accomplish the actions, (2) whether the initiatives appeared reasonable and could result in improved operations, (3) what dollar savings were estimated to result from the implementation of the actions, (4) whether the actions went far enough in reducing costs and improving operations, and (5) what other actions not being considered by the NOC could result in further cost reductions or streamlined operations. We did not independently verify the financial information provided by the Navy ordnance business area. We performed our work at the Office of the DOD Comptroller and Joint Staff, Washington, D.C.; Offices of the Assistant Secretary of Navy (Financial Management and Comptroller), Naval Sea Systems Command, Naval Air Systems Command, and Headquarters, Defense Finance and Accounting Service, all located in Arlington, Virginia; Headquarters, U.S. Atlantic Fleet, Norfolk, Virginia; Naval Ordnance Center Headquarters, Indian Head, Maryland; Naval Ordnance Center Atlantic Division, Yorktown, Virginia; Naval Ordnance Center Pacific Division, Seal Beach, California; Naval Weapons Station, Yorktown, Virginia; Naval Weapons Station, Charleston, South Carolina; Naval Weapons Station, Earle, New Jersey; Naval Weapons Station, Seal Beach, California; Naval Weapons Station, Concord, California; Naval Weapons Station Detachment, Fallbrook, California; Naval Warfare Assessment Division, Corona, California; and U.S. Transportation Command, Scott Air Force Base, Illinois. Our work was performed from June 1996 through September 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report. The Under Secretary of Defense (Comptroller) provided us with written comments, which we incorporated where appropriate. These comments are reprinted in appendix I. The Navy has incorporated a goal to reduce annual costs by $151 million into its ordnance business area’s budget estimate and has identified the major actions that will be taken to achieve this goal. Our analysis of available data indicates that the planned actions should result in substantial cost reductions and more streamlined operations. However, we cannot fully evaluate the reasonableness of the cost reduction goal at this time because the Navy does not expect to finalize the cost reduction plan until October 1997. During the fiscal year 1998 budget review process, OSD officials worked with the Navy to formulate a restructuring of the Navy ordnance business area. According to the budget estimate the Navy submitted to the Congress in February 1997, this restructuring will allow the ordnance business area to achieve substantial cost and personnel reductions without adversely affecting ordnance activities’ ability to satisfy their customers’ peacetime and contingency requirements. Specifically, the budget estimate indicated that between fiscal years 1996 and 1999, the business area’s civilian and military fiscal year end strengths will decline by 18 percent and 23 percent, respectively, and its annual costs will decline by $151 million, or 25 percent. The budget also indicated that the business area will increase its fiscal year 1998 prices in order to recover $224 million of prior year losses and achieve a zero accumulated operating result by the end of fiscal year 1998. 
The Navy’s fiscal year 1998 budget submission also indicated that the planned restructuring of the business area (1) is based on an assessment of whether current missions should be retained in the business area, outsourced to the private sector, or transferred to other organizations and (2) will make fundamental changes in how the business area is organized and conducts its business. Our assessment of the individual actions—most of which are expected to be initiated by October 1997 and completed during fiscal year 1998—shows that the Navy is planning to reduce costs by eliminating or consolidating redundant operations and reducing the number of positions in the business area. These actions, which are listed below, should help to streamline the Navy ordnance operations and reduce costs. Properly sizing the business area’s workforce to accomplish the projected workload by eliminating about 800 positions, or about 18 percent of the total, before the end of October 1997. Enhancing the business area’s ability to respond to unanticipated workload changes by increasing the percentage of temporary workers in the work force from 8 percent to 20 percent. Enhancing the business area’s ability to identify redundant ordnance engineering capability and to streamline its information resource functions by consolidating management responsibility for these areas by October 1, 1997. Reducing overall operating costs by significantly cutting back on operations at the Charleston and Concord Weapons Stations, beginning in October 1997. Eliminating redundant capability and reducing costs by consolidating (1) some weapons station functions, such as safety and workload planning, at fewer locations, (2) inventory management functions at the Inventory Management and Systems Division, and (3) maintenance work on the Standard Missile at the Seal Beach Naval Weapons Station. Reducing overhead contract costs, such as utilities and real property maintenance during fiscal year 1998. Enhancing business area managers’ ability to focus on their core ordnance missions of explosive safety, ordnance distribution, and inventory management by transferring east coast base support missions to the Atlantic Fleet on October 1, 1997. The Navy’s planned restructuring of its ordnance business area will reduce overhead costs and is an important first step toward the elimination of the redundant capability both within the business area and between the business area and other organizations. However, as discussed in the following sections, our analysis indicates that there are opportunities for additional cost reductions by (1) developing and implementing a detailed plan to eliminate redundant ordnance engineering capability, (2) converting military guard positions to civilian status, and (3) implementing two actions that Navy ordnance officials are currently considering. Navy ordnance officials plan to consolidate management responsibility for the business area’s nine separate ordnance engineering activities under a single manager on October 1, 1997. This will allow this manager to have visibility over all of the business area’s engineering resources and should facilitate more effective management of these engineering resources. However, it will not result in any savings unless action is also taken to eliminate the redundant ordnance engineering capability that previous studies have identified both within the ordnance business area and between the business area and other Navy organizations. 
For example, a 1993 Navy study estimated that 435 work years, or $22 million, could be saved annually by reducing Navy-wide in-service ordnance engineering functions from 20 separate activities to 8 consolidated activities. However, Navy ordnance officials stated that these consolidations were never implemented. They also stated that although they did not know why the consolidations were not implemented, they believe it was because (1) the Navy’s ordnance engineering personnel are managed by the NOC and three different major research and development organizations and (2) the Navy did not require these four organizations to consolidate their ordnance in-service engineering functions. Since 1954, DOD Directive 1100.4 has required the military services to staff positions with civilian personnel unless the services deem a position military essential for reasons such as combat readiness or training. This is primarily because, as we have previously reported, on average, a civilian employee in a support position costs the government about $15,000 per year less than a military person of comparable pay grade. Our analysis showed that the percentage of military personnel in the NOC workforce is about six times greater than in other Navy Working Capital Fund activities, with most of these positions being military guards such as personnel who guard access to the weapons station at the main entrance. Further, Navy ordnance officials indicated that they know of no reason why the guard positions should not be converted to civilian status. In fact, these officials said that they would prefer to have civilian guards since they are cheaper than military guards, and they noted that all of their activities already have some civilian security positions. Consequently, the Navy can save about $6.8 million annually by converting the NOC’s guard positions to civilian status (based on the $15,000 per position savings estimate). NOC officials told us that they reviewed the need for all of their military positions, and indicated that they plan to eliminate some of these positions. However, they stated that they do not plan to convert any military guard positions to civilian status. A Navy Comptroller official told us that (1) all of the NOC’s guard functions will probably be transferred to the Atlantic and Pacific fleets as part of the ordnance business area restructuring and (2) the fleet commanders, not the NOC, should, therefore, decide whether the military guard positions should be converted to civilian status. Navy ordnance officials are currently considering two additional actions—further consolidating the business area’s missile maintenance work and charging individual customers for the storage of ammunition—that would result in additional cost reductions and a more efficient operation, if implemented. As discussed below, consolidating missile maintenance work would allow the business area to reduce the fixed overhead cost that is associated with this mission, and charging customers for ammunition storage services would give customers an incentive to either relocate or dispose of unneeded ammunition and, in turn, could result in lower storage costs. The Navy ordnance business area, which has had a substantial amount of excess missile maintenance repair capacity for several years, is being forced to spread fixed missile maintenance overhead costs over a declining workload base that is expected to account for only 3 percent of the business area’s total revenue in fiscal year 1998. 
This problem, which is caused by factors such as force structure downsizing, continues even though the business area recently achieved estimated annual savings of $2.3 million by consolidating all maintenance work on the Standard Missile at one location. The following table shows the substantial decline in work related to four specific types of missiles. NOC officials are currently evaluating several alternatives for consolidating missile maintenance work, including (1) consolidating all work on air-launched missiles at one Naval weapons station, (2) transferring all or part of the business area’s missile maintenance work to the Letterkenny Army Depot, Ogden Air Logistics Center, and/or a private contractor, and (3) accomplishing all or part of the work in Navy regional maintenance centers. According to DOD, the evaluation of these alternatives should be completed in the spring of 1998. Based on our discussions with Navy ordnance and maintenance officials, the NOC’s evaluations of maintenance consolidation alternatives should identify the total cost of the various alternatives, including onetime implementation costs and costs that are not included in depot maintenance sales prices (such as the cost of shipping items from coastal locations to inland depots and/or contractor plants), and should assess each alternative’s potential impact on readiness. The Navy ordnance business area incurs costs to store ammunition for customers that are not required to pay for this storage service. Instead, this storage cost is added to the price charged to load ammunition on and off Naval ships and commercial vessels. As shown in the following figure, the business area’s inventory records indicate that 51,231 tons, or about 43 percent, of ammunition stored at the weapons stations was not needed as of May 1, 1997, because (1) there is no requirement for it or (2) the quantity on hand exceeds the required level. If the business area charged customers for ammunition storage, the costs of the storage service would (1) be charged to the customers that benefit from this service and (2) provide a financial incentive for customers to either relocate or dispose of unneeded ammunition. This, in turn, could allow the business area to reduce the number of locations where ammunition is stored and thereby reduce operating costs. This approach has been adopted by the Defense Logistics Agency (DLA), which also performs receipt, storage, and issue functions, and the agency stated that instituting such user charges has helped to reduce infrastructure costs by allowing it to eliminate unneeded storage space. In addition, we recently recommended such an approach in our report, Defense Ammunition: Significant Problems Left Unattended Will Get Worse (GAO/NSIAD-96-129, June 21, 1996). Navy ordnance officials told us that they are currently considering charging customers for the storage of ammunition and are taking steps to do so. These officials informed us that they (1) have discussed DLA’s experience in charging a storage cost with DLA officials, (2) have discussed this matter with the torpedo program manager and sent a letter addressing the cost to move the torpedoes off the weapons stations, (3) are drafting similar letters to the other ordnance program managers, and (4) are in the process of determining ammunition storage costs for use in developing storage fees. Most aspects of the Navy’s planned restructuring of its ordnance business area appear to be cost-effective. 
However, DOD budget documents indicate that the Navy’s fiscal year 1998 budget submission for its ordnance business area did not adequately consider the impact that planned personnel reductions would have on the business area’s ability to support non-Navy customers during mobilization. These documents also indicate that the Navy was proposing to reduce the operating status of some weapons stations, including Concord. However, OSD officials were concerned with the Navy’s proposal because these weapons stations (1) would handle a majority of all DOD-wide, Army, Air Force, and U.S. Transportation Command explosive cargo in the event of a major contingency; (2) have 10 times the explosive cargo capacity of the other ports considered; (3) are having their facilities expanded by the Army to accomplish additional U.S. Transportation Command work; and (4) have specialized explosive storage areas that must be retained to support current inventories of Navy missiles. OSD officials concluded that no alternative to these ports exists and that DOD must, therefore, keep these ports operational. The Deputy Secretary of Defense agreed with this assessment and, in December 1996, directed the Navy not to place any port in a functional caretaker status or reduce its ordnance handling capability until a detailed plan is (1) coordinated within OSD, the Joint Staff, and the other Military Departments and (2) approved by the Secretary of Defense. According to U.S. Transportation Command and Navy ordnance officials, a May 1997 DOD-wide paper mobilization exercise validated the OSD officials’ concerns about Concord Naval Weapons Station performing its mobilization mission. Specifically, the exercise demonstrated that, among other things, (1) the Concord Naval Weapons Station is one of three ports that are essential to DOD for getting ordnance items to its warfighters during mobilization and (2) if Concord is not sufficiently staffed or equipped, there could be a delay in getting ordnance to the warfighter during mobilization. According to Navy ordnance, OSD, the Joint Staff, and U.S. Transportation Command officials, although there is widespread agreement that Concord is needed by all of the military services to meet ammunition out-loading requirements during mobilization, there is no agreement on how to finance the personnel that will be needed in order to accomplish this mission. The Army and Air Force do not believe they should subsidize the operations of a Navy base. At the same time, Navy officials do not believe they should finance the entire DOD mobilization requirement at Concord because (1) most of their facilities in the San Francisco Bay area have been closed and Concord is, therefore, no longer needed by the Navy during peacetime, (2) the Army and Air Force need Concord more than the Navy does, and (3) Concord does not receive enough ship loading and unloading work during peacetime to keep the current work force fully employed. Accordingly, the Navy plans to retain some personnel at Concord, but has shifted all of its peacetime ship loading and unloading operations out of Concord and plans to gradually transfer ammunition currently stored at Concord to other locations. 
Navy, OSD, and Joint Staff officials informed us that several actions are needed to ensure that Concord has sufficient, qualified personnel to load ammunition onto ships: (1) revalidate the ammunition out-loading mobilization requirements for Concord, (2) determine the minimum number of full-time permanent personnel that Concord needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission (the core workforce), (3) ensure that Concord’s core workforce is sufficiently trained to accomplish its mobilization mission, and (4) determine a method, either through a direct appropriation or the Working Capital Funds, to finance Concord’s mobilization requirements. To the Navy’s credit, it has acted to reduce its ordnance business area’s annual cost by $151 million and has incorporated this cost reduction goal into the business area’s budget estimate. Our analysis of available data indicates that, in general, the planned actions should result in substantial cost reductions and more streamlined Navy ordnance operations. The Navy could reduce its costs further and prevent a possible degradation of military readiness by taking the additional actions recommended in this report. Further, the Navy still needs to ensure that a final restructuring plan is completed so that it can tie together all of its planned actions and establish specific accountability, schedules, and milestones as needed to gauge progress. In order for the Concord Weapons Station to accomplish its mobilization mission, we recommend that the Secretary of Defense revalidate the amount of ammunition Concord Weapons Station needs to load onto ships during mobilization, direct the Secretary of the Navy to determine the minimum number of personnel Concord Weapons Station needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission, and ensure that Concord’s core workforce is sufficiently trained to accomplish its mobilization mission. We recommend that the Secretary of the Navy incorporate into the NOC’s detailed cost reduction plan (1) specific actions that need to be accomplished, (2) realistic assumptions about the savings that can be achieved, (3) milestones, and (4) clearly delineated responsibilities for performing the tasks in the plan; evaluate the cost-effectiveness of (1) consolidating all or most of the business area’s missile maintenance workload at one location and/or (2) transferring all or some of this work to public depots or the private sector; develop and implement policies and procedures for charging customers for ammunition storage services; evaluate the appropriateness of converting military guard positions to civilian status; direct the NOC Commander to determine if it would be cost-beneficial to convert non-guard military positions to civilian status; and eliminate the excess ordnance engineering capability that previous studies have identified both within the NOC and between the NOC and other Navy organizations. In its written comments on this report, which identifies the actions the Navy ordnance business area is taking to reduce costs and streamline its operations, DOD agreed fully with five of our eight recommendations. It partially concurred with the remaining three recommendations, as discussed below. 
In our draft report, we recommended that the Secretary of Defense direct the Secretary of the Navy to (1) determine the minimum number of personnel Concord Weapons Station needs during peacetime in order to ensure that it can quickly and effectively expand its operation to accomplish its mobilization mission and (2) ensure that this core workforce is sufficiently trained to accomplish its mobilization mission. In partially concurring with this recommendation, DOD agreed that both of these tasks should be accomplished and that the Navy should be responsible for identifying the peacetime manning requirement. However, it indicated that this core workforce cannot be adequately trained for its mobilization mission unless it is given the appropriate amount and type of work during peacetime. DOD further stated it will take steps during the fiscal year 1999 budget process to ensure that adequate and funded workload is provided to Concord. We agree with DOD’s comment and revised our final report to recommend that DOD act to ensure that the core workforce is sufficiently trained. Concerning our recommendation to charge customers for ammunition storage services, the Navy agreed that action should be taken to (1) store only necessary ammunition at its weapons stations and (2) transfer excess ammunition to inland storage sites or disposal. The Navy believes that this can be accomplished without imposing a separate fee for storing ammunition. However, Navy records show that 51,231 tons, or about 43 percent, of ammunition stored at weapons stations was not needed as of May 1997. As stated in this report, because of the persistent nature of this problem, we continue to believe that charging customers for ammunition storage will provide the financial incentive for customers to relocate or dispose of unneeded ammunition. Finally, concerning our recommendation to convert military guard positions to civilian positions, the Navy stated that it is in the process of transferring the Navy ordnance east coast security positions to the Atlantic Fleet and that it plans to transfer the west coast security positions to the Pacific Fleet. It believes that the two Fleet Commanders need time to evaluate the appropriateness of converting the military guard positions to civilian positions. We agree with DOD’s comment that this decision should be made by the Fleet Commanders and have revised our recommendation accordingly. As part of this evaluation, the Navy needs to consider the cost of the guard positions since a civilian employee in a support position costs the government about $15,000 per year less than a military person of comparable pay grade. We are sending copies of this report to the Ranking Minority Member of your Subcommittee; the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Senate Committee on Appropriations, Subcommittee on Defense; the House Committee on Appropriations, Subcommittee on National Security; the Senate and House Committees on the Budget; the Secretary of Defense; and the Secretary of the Navy. Copies will also be made available to others upon request. If you have any questions about this report, please call Greg Pugnetti at (202) 512-6240. Other major contributors to this report are listed in appendix II. Karl J. Gustafson, Evaluator-In-Charge; Eddie W. Uyekawa, Senior Evaluator. 
Pursuant to a congressional request, GAO reviewed financial and management issues related to the ordnance business area of the Navy Working Capital Fund, focusing on: (1) the Navy's proposed and ongoing actions to reduce the business area's costs; and (2) additional cost reduction opportunities. GAO noted that: (1) the Navy is in the process of developing the cost reduction plan GAO recommended in its March 1997 report and has proposed and begun implementing a number of actions to reduce its ordnance business area's annual operating costs by $151 million, or 25 percent, between fiscal year 1996 and 1999; (2) this is a significant step in the right direction and should result in substantial cost reductions and more streamlined operations; (3) GAO's review of the business area's operations and discussions with the Office of the Secretary of Defense (OSD) and Navy ordnance officials indicate that the Navy has both an opportunity and the authority to further reduce Navy ordnance costs; (4) specifically: (a) redundant ordnance engineering capability exists within the business area and other Navy organizations; (b) military personnel are performing work that could be performed by less expensive civilian employees; (c) redundant missile maintenance capability exists; and (d) no financial incentive exists for customers to store only needed ammunition (the business area's inventory records show that 43 percent of the ammunition stored was unneeded as of May 1, 1997) since they do not directly pay for storage costs; (5) while most of the planned cost reduction actions appear to be appropriate, it remains to be seen whether the business area will reduce costs by $151 million; (6) in addition, GAO's review of available data indicates that one of the cost reduction actions--the planned personnel reductions--may adversely affect the Concord Naval Weapons Station's ability to load ships during mobilization, thus creating potential readiness problems; and (7) these personnel reductions are likely to have little impact on the Navy, but could have a significant impact on the Army and Air Force, which would rely heavily on Concord during a major contingency operation.
You are an expert at summarizing long articles. Proceed to summarize the following text: Federal responsibilities for assisting states in preparing for emergencies include developing national strategies, policies, and guidelines and providing funding to assist states in developing their emergency preparedness plans and programs. A critical element of emergency preparedness is preparing health care systems for medical surge in a mass casualty event, and consideration of hospital capacity, alternate care sites, electronic medical volunteer registries, and altered standards of care is key to this task. DHS is responsible for developing national strategies, policies, and guidelines related to emergency preparedness. Additionally, DHS administers the Homeland Security Grant Program, which currently consists of four programs—the State Homeland Security Program, Urban Areas Security Initiative, Metropolitan Medical Response System, and Citizens Corps Program. While these programs generally award funds to states and municipalities for the prevention and detection of terrorist acts, some funds can be spent on medical response, including medical surge activities. HHS has the principal responsibility for helping states to prepare for medical surge. In December 2006, PAHPA established ASPR within HHS in order to enhance coordination of public health and medical surge. The act reauthorized and gave ASPR authority over the Hospital Preparedness Program, which provides funds annually to 62 entities—the 50 states, 4 municipalities, 5 U.S. territories, and 3 Freely Associated States of the Pacific—through cooperative agreements in order to strengthen their emergency readiness capabilities. Also, beginning in fiscal year 2009, HHS will require that states provide a 5 percent match to the amount of the federal cooperative agreement funding, through either state funds or in-kind contributions, such as office space or computer support for the program. In 2010 and subsequent years, the matching requirement will increase to 10 percent. As part of the 2006 Hospital Preparedness Program, ASPR required all cooperative agreement recipients to submit midyear progress reports that include data on 15 sentinel indicators, 13 of which are related to medical surge. For example, one of the sentinel indicators is the number of hospitals that have the capacity to maintain at least one patient with a suspected highly infectious disease in a negative pressure isolation room. PAHPA also gave ASPR authority for the Emergency System for Advance Registration of Volunteer Health Professionals (ESAR-VHP). ESAR-VHP supports state-based electronic databases designed to register health care personnel who volunteer to provide medical care in an emergency for the purpose of verifying their credentials. In order to continue to receive Hospital Preparedness Program funds, states must participate in ESAR-VHP by fiscal year 2009. Under PAHPA, HHS is required to link state electronic medical volunteer registries into a national registry. DOD and VA do not have a federal responsibility in assisting states in planning and preparing for medical surge in a mass casualty event. However, since their hospitals are accredited by the Joint Commission, they are required to participate in at least one annual emergency preparedness exercise with their local community. In addition, because they are part of the local community, they would play a role in planning for and responding to local mass casualty events. 
According to Homeland Security Presidential Directive 21 (HSPD-21) Public Health and Medical Preparedness, issued in October 2007, mass casualty health care is a critical element of public health and medical preparedness. HSPD-21 is one of a series of executive orders released since September 11, 2001, establishing a national strategy to help protect the nation in the event of terrorist attacks or other catastrophic health events. It states that mass casualty health care capability needs to be different from “day-to-day” public health and medical operations, which “cannot meet the needs created by a catastrophic health event.” It also states that the nation must develop a disaster medical capability that, among other things, is rapid, flexible, sustainable, integrated, and coordinated, and delivers appropriate treatment in the most ethical manner with available capabilities. The four key components we identified follow: Hospital capacity: Following a mass casualty event, hospitals may need the ability to adequately care for a large number of additional patients. Strategies to increase hospital capacity include deferring elective procedures, applying more stringent triage for admitting patients, discharging patients early with follow-up by home health care personnel, and adding additional beds and equipment in areas of the hospital that are not normally used for inpatient care, such as outpatient examining rooms. Alternate care sites: A mass casualty event could overwhelm hospitals’ capacity and require the establishment of alternate sites to provide health care services. Alternate care sites deliver medical care outside hospital settings for patients who would normally be treated as inpatients, and triage patients in order to sort those who need critical attention and immediate transport to the hospital from those with less serious injuries. In addition, alternate care sites manage unique considerations that might arise in the context of mass casualty events, including the delivery of chronic care; the distribution of vaccines; or the quarantine, grouping, or sequestration of patients potentially infected with an easily transmissible infectious disease. The development of alternate care sites involves several issues, including the level and scope of medical care to be delivered, the physical infrastructure required, staffing requirements for the delivery of such care, the medical equipment and supplies needed, and the management systems required to integrate such facilities with the overall delivery of health care. Additionally, there are two types of alternate care sites—fixed and mobile. Fixed facilities are nonmedical buildings that, because of their size or proximity to a hospital, can be adapted to provide medical care. Mobile medical facilities are either specialized units with surgical and intensive care capabilities that are based on tractor-trailer platforms or fully equipped hospitals stored in container systems that can be set up quickly. Electronic medical volunteer registries: In a time of emergency, it can be difficult for state and hospital officials who are organizing a response to use medical volunteers unless they have been preregistered to determine who is qualified to provide medical assistance. For example, immediately after the attacks on September 11, 2001, thousands of people spontaneously arrived in New York City to volunteer their assistance— many of whom volunteered to provide medical assistance to the victims of the attacks. 
However, authorities were unable to distinguish medically qualified from unqualified volunteers. Generally, an electronic medical volunteer registry would (1) preregister health care volunteers, (2) apply emergency credentialing standards to these registered volunteers, and (3) allow for the verification of the identity, credentials, and qualifications of registered volunteers in an emergency. Altered standards of care: In a mass casualty event, routine resource shortages would be significantly magnified and hospitals would have limited access to many needed resources, such as health care providers, equipment and supplies, and pharmaceuticals. As a result, it could be necessary to alter standards of medical care in a manner that is different from normal day-to-day circumstances and appropriate to the situation. For example, because of an influx of a large number of patients in a mass casualty event, adequate staffing of health care providers would be hindered by the current shortages of health care providers. Workforce shortages could result in hospitals changing their established standards of care, such as nurse-to-patient care ratios. The federal government has provided funding, guidance, and other assistance to help states prepare their regional and local health care systems for medical surge in a mass casualty event. From fiscal years 2002 through 2007, the federal government awarded the states about $2.2 billion through ASPR’s Hospital Preparedness Program to support activities to meet their preparedness priorities and goals, including medical surge. Further, the federal government developed, or contracted with experts to develop, guidance that was provided for states to use when preparing for medical surge. In addition, the federal government provided other assistance, such as conferences for states. From fiscal years 2002 through 2007, HHS awarded states about $2.2 billion through ASPR’s Hospital Preparedness Program to support activities to strengthen their hospital emergency preparedness capabilities, including medical surge goals and priorities. (See app. III for Hospital Preparedness Program cooperative agreement funding by state.) ASPR’s 2007 Hospital Preparedness Program guidance specifically authorized states to use funds on activities such as the development of a fully operational electronic medical volunteer registry in accordance with ESAR-VHP guidance and the establishment of alternate care sites. We cannot report state-specific funding for four key components—hospital capacity, alternate care sites, electronic medical volunteer registries, and altered standards of care—because state expenditure reports did not disaggregate the dollar amount spent on specific activities related to these components. During fiscal years 2003 through 2007, DHS’s Homeland Security Grant Program also awarded the states funds that were used for a broad variety of emergency preparedness activities and may have included medical surge activities. However, most of these DHS grant funds were not targeted to medical surge activities, and states do not report the dollar amounts spent on these activities. The federal government developed, or contracted with experts to develop, guidance for states to use in preparing for medical surge. DHS developed overarching guidance, including the National Preparedness Guidelines and the Target Capabilities List. 
The National Preparedness Guidelines describes the tasks needed to prepare for a medical surge response to a mass casualty event, such as a bioterrorist event or natural disaster, and establishes readiness priorities, targets, and metrics to align the efforts of federal, state, local, tribal, private-sector, and nongovernmental entities. The Target Capabilities List provides guidance on building and maintaining capabilities, such as medical surge, that support the National Preparedness Guidelines. The medical surge capability includes activities and critical tasks needed to rapidly and appropriately care for the injured and ill from mass casualty events and to ensure that continuity of care is maintained for non-incident-related injuries or illnesses. In addition, ASPR provided states with specific guidance related to preparing for medical surge in a mass casualty event, including annual guidance for its Hospital Preparedness Program cooperative agreements, guidance for developing ESAR-VHP-compliant electronic medical volunteer registries, and guidance to develop a hospital bed tracking system. The Hospital Preparedness Program cooperative agreement guidance included activities to assist states in following DHS’s guidelines and meeting its targets. ASPR’s ESAR-VHP guidelines provide states with common definitions, standards, and protocols, which can aid in forming a national network to facilitate the deployment of medical volunteers for any emergency among states. For example, ESAR-VHP registration guidelines categorize medical volunteers by profession, ranging from physicians to mental health counselors. ESAR-VHP guidelines also include four different levels of credentialing based on verification of each volunteer’s qualifications. ASPR provided guidance to states for the Hospital Available Beds for Emergencies and Disasters (HAvBED) system, which is an inpatient bed tracking system designed to allow emergency response entities to know where and what type of additional hospital beds are available, in order to know which hospitals still have capacity to receive patients. HAvBED reports the number of beds vacant/available at the aggregate state level to HHS. To enhance consistency among state- reported data, HAvBED provides standard definitions of beds and data elements each system must incorporate when reporting bed availability during a mass casualty event. Additionally, HHS worked through AHRQ and contracted with nonfederal entities to develop publications for states to use when preparing for medical surge. For example, AHRQ published the document Mass Medical Care with Scarce Resources: A Community Planning Guide to provide states with information that would help them in their efforts to prepare for medical surge, such as specific circumstances they may face in a mass casualty event. This publication notes that the state may be faced with allocating medical resources during a mass casualty event, such as determining which patients will have access to mechanical ventilation. The publication recommends that the states develop decision-making guidelines on how to allocate these medical resources. The RAND Corporation developed the publication Learning from Experience: The Public Health Response to West Nile Virus, SARS, Monkeypox, and Hepatitis A Outbreaks in the United States, which provides states with information on challenges that they may face in a disease outbreak or bioterrorist attack. 
AHRQ also published Reopening Shuttered Hospitals to Expand Surge Capacity, which contains an action checklist that can be used by states and local entities to identify organizations that have an interest or responsibility in preparing for medical surge, and to determine what resources each could provide. (See app. III for a list of federal guidance.) To support states’ efforts to prepare for medical surge, the federal government also provided other assistance such as conferences and electronic bulletin boards for states to use in preparing for medical surge. States were required to attend annual conferences for Hospital Preparedness Program cooperative agreement recipients, where ASPR provided forums for discussion of medical surge issues. (See app. III for a list of federal conferences.) Additionally, ASPR’s Web site contained links to related published documents, and states were given access to an ASPR-operated electronic bulletin board to communicate with other states on medical surge issues related to the Hospital Preparedness Program. Furthermore, ASPR project officers and CDC subject matter experts were available to provide assistance to states on issues related to medical surge. For example, CDC’s Division of Healthcare Quality Promotion developed cross-sector workshops for local communities to bring their emergency management, medical, and public health officials together to focus on emergency planning issues, such as developing alternate care sites. Many states have made efforts related to three of the key components for preparing for medical surge, that is, increasing hospital capacity, planning for alternate care sites, and developing electronic medical volunteer registries, but fewer have implemented the fourth, planning for altered standards of care. More than half of the 50 states were meeting or close to meeting the criteria for the five medical-surge-related sentinel indicators for hospital capacity. In our 20-state review, we found that all were developing bed reporting systems and almost all of the states with DOD and VA hospitals were engaging in various levels of coordination with those hospitals in an effort to expand their hospital capacity. Of the 20 states, 18 reported that they were in the process of selecting alternate care sites that used either fixed or mobile medical facilities. Additionally, 15 of the 20 states had begun registering volunteers in electronic medical volunteer registries. However, only 7 of the 20 states had adopted or were drafting altered standards of care for specific medical interventions to be used in response to a mass casualty event. More than half of the states met or were close to meeting the criteria for the five surge-related sentinel indicators for hospital capacity that we reviewed from the Hospital Preparedness Program 2006 midyear progress reports, the most recent available data at the time of our analysis. (See table 1 for the five sentinel indicators and the associated criteria.) Twenty-four of the states reported that all of their hospitals were participating in the state’s program funded by the ASPR Hospital Preparedness Program, with another 14 states reporting that 90 percent or more of their hospitals were participating. Forty-three of the 50 states have increased their hospital capacity by ensuring that at least one health care facility in each defined region could support initial evaluation and treatment of at least 10 patients at a time (adult and pediatric) in negative pressure isolation within 3 hours of an event.
Regarding individual hospitals’ isolation capabilities, 32 of the 50 states met the requirement that all hospitals in the state that participate in the Hospital Preparedness Program be able to maintain at least one suspected highly infectious disease case in negative pressure isolation; another 10 states had that capability in 90 to 99 percent of their participating hospitals. Thirty-seven of the 50 states reported meeting the criterion that within 24 hours of a mass casualty event, their hospitals would be able to add enough beds to provide triage treatment and stabilization for another 500 patients per million population; another 4 states reported that their hospitals could add enough beds for 400 to 499 patients per million population. Finally, 20 states reported that all their participating hospitals had access to pharmaceutical caches that were sufficient to cover hospital personnel (medical and ancillary), hospital-based emergency first responders, and family members associated with their facilities for a 72-hour period; another 6 states reported that from 90 to 99 percent of their participating hospitals had sufficient pharmaceutical caches. (See app. IV for further information.) In our further review of 20 states, all 20 states reported that they had developed or were developing bed reporting systems to track their hospital capacity—the first of four key components related to preparing for medical surge. Eighteen of the 20 states reported that they had systems in place that could report the number of available hospital beds within the state. All 18 of these states reported that their systems met ASPR HAvBED standards. For example, in early 2005 one state completed development of a statewide Web-based bed tracking system designed to track the emergency status of all health care facilities. The system has the capacity to present information by individual facility as well as by county. The 2 states that reported that they did not have a system that could meet HAvBED requirements said that they would meet the requirements by August 8, 2008. Our review also found that of the 10 states with DOD hospitals, 9 reported coordinating with DOD hospitals to plan for emergency preparedness and increase hospital capacity. For example, in one state DOD hospital officials served on state-level emergency preparedness committees and participated in training and exercises. The remaining state said it could not report whether the DOD hospitals participated in such activities because these activities were coordinated at the local level. Eight of the 10 states also reported that DOD hospitals in their state would accept civilian patients in the event of a mass casualty event if resources were available. The 2 remaining states did not know whether their DOD hospitals would accept civilian patients, although one of these states said that there had been discussions about this possibility between the state and DOD. Of the 19 states that have VA hospitals, all reported that at least some of the VA hospitals took part in the states’ hospital preparedness programs or were included in planning and exercises for medical surge. For example, VA hospitals in one state were participating in state, regional, and local planning for emergency preparedness along with other hospitals in an effort to increase surge capacity and come closer to the state’s goal of 500 beds for every 1 million population, a VA official said.
In another state, a VA hospital was planning with state emergency preparedness officials and DOD hospitals to prepare for any mass casualty event that could occur during a major public event taking place in the state later that year. VA officials stated that individual hospitals cannot precommit resources— specific numbers of beds and assets—for planning purposes, but can accept nonveteran patients and provide personnel, equipment, and supplies on a case-by-case basis during a mass casualty event. Twelve of the 19 states reported that VA hospitals would accept or were likely to accept nonveteran patients in the event of a medical surge if space were available and veterans’ needs had been met. Four of the 19 states reported that their VA hospitals would not accept nonveteran patients in the event of a medical surge, 2 states reported that they did not know if the VA hospitals would accept nonveteran patients, and 1 state reported that some of its VA hospitals would take nonveteran patients and others would not. In planning to increase hospital capacity, most of the 20 states we surveyed reported that they used federal guidance and technical assistance. Eleven states reported that they used ASPR’s Hospital Preparedness Program cooperative agreement guidance, and 9 states used ASPR’s Medical Surge Capacity and Capabilities Handbook. Three states also reported that they used CDC’s Public Health Emergency Preparedness Program cooperative agreement guidance. In addition, 2 states reported that they consulted with ASPR project officers when planning for hospital capacity. Eighteen of the 20 states reported that they were in the process of selecting alternate care sites, and the 2 remaining states reported that they were in the early planning stages in determining how to select sites. Of the 18 states, 10 reported that they had also developed plans for equipping and staffing some of the sites. For example, one state had developed standards and guidance for counties to use when implementing fixed alternate care sites and had stockpiled supplies and equipment for these sites. The counties were responsible for identifying and operating these sites. According to state officials, while most counties were still identifying fixed sites, some counties had established memorandums of understanding with various facilities, including churches, schools, military facilities, and shopping malls. In addition, the state purchased three state-run mobile medical facilities, each with 200 beds, which were stored in the northern, central, and southern parts of the state. Another state, which expects significant transportation difficulties during a natural disaster, had acquired six mobile medical tent facilities of either 20 or 50 beds that were stored at hospital facilities across the state. This state also planned to identify fixed facility alternate care sites, which would provide medical services to people who could not take care of themselves at home but did not need to be in a hospital. Each of these fixed sites was expected to serve 1,000 casualties. One of the 2 states that were in the early planning stages was helping local communities formalize site selection agreements, and the second state had drafted guidance for alternate care sites that was expected to be released early in 2008. Most states reported using AHRQ guidance when planning for alternate care sites. 
For example, 18 states reported that they used AHRQ’s guidance, such as Rocky Mountain Regional Care Model for Bioterrorist Events, Alternate Care Site Selection Tool, and Reopening Shuttered Hospitals to Expand Surge Capacity. A few states used other federal guidance, such as DHS’s National Incident Management System and National Disaster Management System guidance, when planning alternate care sites. Five states also reported that they used DOD guidance when planning alternate care sites, including DOD’s Modular Emergency Medical System. Fifteen of the 20 states reported that they had begun registering medical volunteers and identifying their medical professions in an electronic registry, and the remaining 5 states were developing their electronic registries and had not registered any volunteers. For 2006, ESAR-VHP guidance identified seven categories of health care professionals ranging from physicians to mental health counselors that should be included in the states’ registries. Of the 15 states that reported that they had begun registering volunteers, 3 states had registered volunteers in more than eight categories, 3 states had registered volunteers in five to seven categories, and the remaining 9 states had registered volunteers in four or fewer categories, often concentrating on nurses. Officials from 4 of the 5 remaining states that had not begun registering volunteers reported that they anticipated registering volunteers by the spring or summer of 2008. An official from the other state reported that state officials did not know when they would begin to register volunteers. Of the 15 states that reported they were registering volunteers, 12 reported they had begun to verify the volunteers’ medical qualifications, though few had conducted the verification to assign volunteers to the highest level, Level 1. If a volunteer is assigned to Level 4, it means that the state has not verified any medical qualifications, such as licenses or certifications in medical subspecialties. Three of the 15 states had registered volunteers solely at Level 4. Seven of the 12 states had credentialed some volunteers no higher than Level 3, meaning they had verified the licenses of some of the volunteers. For example, one state had verified the credentials and assigned all of its 1,498 registered volunteers at Level 3. Another 3 of the 12 states had assigned volunteers to no higher than Level 2, meaning these states had conducted additional verification of medical qualifications, such as degrees. For example, one state had assigned its registered volunteer nurses at Level 2. The remaining 2 states had assigned a small number of volunteers at Level 1. For example, one state had assigned 2 of 955 volunteers at Level 1. At Level 1, all of a volunteer’s medical qualifications, which identify their skills and capabilities, have been verified and the volunteer is ready to provide care in any setting, including a hospital. Nineteen of the 20 states reported that they used ASPR’s ESAR-VHP Interim Technical and Policy Guidelines, Standards, and Definitions when developing registries. Eight of the 20 states also reported that they used information obtained from the annual ESAR-VHP conferences to help develop their volunteer medical registry systems. In our 20-state review of efforts related to the fourth key component, we found that 7 states had adopted or were drafting altered standards of care for specific medical issues. Three of the 7 states had adopted some altered standards of care guidelines. 
For example, one state had prepared a standard of care for the allocation of ventilators in an avian influenza pandemic, which one state official reported would also be applicable during other types of emergencies. Another state issued guidelines in February 2008 for allocating scarce medical resources in a mass casualty event that call for suspending or relaxing state laws covering medical care and for explicit rationing of health care to save the most lives, and require that the same allocation guidelines be used across the state. For example, during a mass casualty event in this state, hospitals could ignore their nurse-patient ratios and nurses could be assigned to jobs outside their specific area of expertise. In addition, nonlicensed individuals, or retired health care providers whose licenses had lapsed, could be recruited to provide emergency care. For example, a nonmedical hospital employee who had experience as a military medic could get an emergency credential to stitch up wounds or start intravenous lines. According to an official, the state had not completed all of the guidelines for allocation of scarce resources that it planned to develop. The state recently convened a panel of ethicists and providers to address which specific categories of patients would receive scarce resources, such as vaccines and ventilators, when shortages existed. Of the 13 states that had not adopted or drafted altered standards of care, 11 states were beginning discussions with state stakeholders, such as medical professionals and lawyers, related to altered standards of care, and 2 states had not addressed the issue. One state reported that its state health department planned to establish an ethics advisory board to begin discussion on altered standards of care guidelines. Another state had developed a “white paper” discussing the need for an altered standards of care initiative and planned to fund a symposium to discuss this initiative. Six of the seven states that had adopted or were drafting altered standards of care guidelines reported using AHRQ documents, such as Altered Standards of Care in Mass Casualty Events and Mass Medical Care with Scarce Resources: A Community Planning Guide. Officials from one state reported that they had also used CDC documents and the federal government’s pandemic influenza Web site when planning for altered standards of care. While the Hospital Preparedness Program has been operating since 2002, state officials in the 20 states we surveyed reported that they faced continuing challenges in preparing for medical surge in a mass casualty event. Even though many states have made efforts to increase hospital capacity, provide care at alternate care sites, identify and use medical volunteers, and develop appropriate altered standards of care, they expressed concerns related to all four of these key components of medical surge. State officials also noted concerns related to programmatic and regulatory issues involved in preparing for medical surge in a mass casualty event. State officials raised several concerns related to their ability to increase hospital capacity, including maintaining adequate staffing levels during mass casualty events, a problem that was more acute in rural communities. While 19 of 20 states we surveyed reported that they could increase numbers of hospital beds in a mass casualty event, some state officials were concerned about staffing these beds because of current shortages in medical professionals, including nurses and physicians. 
Some state officials reported that their states faced problems in increasing hospital capacity because many of their rural areas had no hospital or small numbers of medical providers. For example, officials from a largely rural state reported that in many of the state’s medically underserved areas hospitals currently have vacant beds because they cannot hire medical professionals to staff them. In addition, these officials reported that because their hospitals did not provide pediatric intensive care or burn care services and instead transferred these patients to neighboring states, the state might not be able to provide these services during a mass casualty event. State officials also reported that as time passed and no mass casualty events occurred, increasing hospital capacity for a mass casualty event seemed to be a waning priority for hospital chief executive officers. State officials reported that it was difficult to continue to engage private-sector hospital chief executive officers in emergency preparedness activities at a time when these hospitals were facing day-to-day financial problems. For example, officials from one state reported that hospitals in the state were consolidating and closing, and officials from another state reported that fewer hospitals were applying for ASPR Hospital Preparedness Program funds. Officials from two other states reported that progress in preparing emergency plans had slowed, especially for the smaller rural facilities, because the Hospital Preparedness Program allows states to use these funds to hire staff to assist with emergency planning but prohibits hospitals from doing so. According to officials from one of these states, hospital staff have had limited time to spend on emergency planning activities because they must first attend to the operational needs of the hospital. Some state officials reported that it was difficult to identify appropriate fixed facilities for alternate care sites. Officials from two states reported that some small, rural communities had few facilities that would be large enough to house an alternate care site. Officials from some states also reported that some of the facilities that could be used as alternate care sites had already been allocated for other emergency uses, such as emergency shelters. State officials also reported concerns about reimbursement for medical services provided at alternate care sites, which are not accredited health care facilities. During the response to Hurricane Katrina, the Secretary of HHS waived a number of statutory and regulatory requirements related to medical care, and this waiver allowed for reimbursement of medical care provided in alternate care sites. However, officials from several states said that hospitals would prefer to know ahead of time under what circumstances they would receive reimbursement from the Centers for Medicare & Medicaid Services (CMS) for medical care provided in alternate care sites during a mass casualty event. State officials said that having such information would make planning and exercising easier and more realistic. CMS officials told us it would be very difficult to provide specific guidance that would apply to all medical surge events and that the agency preferred to issue guidance on a case-by-case basis following visits to alternate care sites by CMS or Joint Commission officials during the emergency. 
For example, after Hurricane Katrina, CMS officials visited alternate care sites and the Secretary of HHS relaxed reimbursement requirements for medical care provided in a hospital parking lot, the convention center, and a department store. State officials also told us they were unclear how certain federal laws and regulations that relate to medical care—specifically, the privacy rule issued by HHS under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Emergency Medical Treatment and Labor Act (EMTALA)—would apply in a mass casualty event, especially if the care were provided in an alternate care site and not a hospital. EMTALA requires hospital emergency rooms at Medicare-participating hospitals to screen and treat for emergency medical conditions all individuals who seek treatment. The HIPAA privacy rule prohibits the unauthorized disclosure of individually identifiable health information by health care providers and certain other entities. The Social Security Act authorizes the Secretary of HHS to waive EMTALA and certain requirements under the HIPAA privacy rule during national emergencies, such as a mass casualty event. Federal guidance published in 2006 describes circumstances where provisions related to emergency treatment and privacy protections were temporarily suspended. AHRQ’s publication Providing Mass Medical Care with Scarce Resources: A Community Planning Guide states that requiring hospitals to adhere to EMTALA requirements during a mass casualty event could be unworkable because of the large number of casualties. It notes that during Hurricane Katrina, HHS temporarily suspended the application of EMTALA in affected regions. This allowed hospitals to provide individuals’ medical screening examination at, or transfer them to, alternate care sites, such as a convention center and department store. During Hurricane Katrina, HHS also temporarily relaxed the sanctions and penalties arising from noncompliance with certain provisions of the HIPAA privacy rule, including the requirements to obtain a patient’s agreement to speak with family members or friends. HHS provided details of these waivers on its Hurricane Katrina Web site. Some states reported that medical volunteers might be reluctant to join a state electronic medical volunteer registry if it is used to create a national medical volunteer registry. PAHPA requires ASPR to use the state-based registries to create a national database. According to state officials, some volunteers do not want to be part of a national database because they are concerned that they might be required to provide services outside their own state. Officials from one state reported that since PAHPA was enacted, recruiting of medical volunteers was more difficult and that the federal government should clarify whether national deployment is a possibility. ASPR officials said that they would not deploy medical volunteers nationally without working through the states. Finally, some states expressed concerns about coordination among programs that recruit medical volunteers for emergency response. Officials from one state reported that federal volunteer registration requirements for the Medical Reserve Corps (MRC) and the ESAR-VHP programs had not been coordinated, resulting in duplication of effort for volunteers. For example, the volunteers registered in the MRC units in that state also were expected by the state to register in the state electronic medical volunteer registry. 
Officials from a second state reported that a volunteer for one program that recruits medical volunteers is often a potential volunteer for another such program, which could result in volunteers being double-counted. For example, an emergency medical technician registered in the electronic medical volunteer registry may also volunteer for an MRC unit, a Disaster Medical Assistance Team (DMAT), and the American Red Cross. This may cause staffing problems in the event of an emergency when more than one volunteer program is activated. Some state officials reported that they had not begun work on altered standards of care guidelines, or had not completed drafting guidelines, because of the difficulty of addressing the medical, ethical, and legal issues involved. For example, HHS estimates that in a severe influenza pandemic almost 10 million people would require hospitalization, which would exceed the current capacity of U.S. hospitals and necessitate difficult choices regarding rationing of resources. HHS also estimates that almost 1.5 million of these people would require care in an intensive care unit and about 740,000 people would require mechanical ventilation. Even with additional stockpiles of ventilators, there would likely not be a sufficient supply to meet the need. Since some patients could not be put on a ventilator, and others would be removed from the ventilator, standards of care would have to be altered and providers would need to determine which patients would receive them. In addition, some state officials reported that medical volunteers are concerned about liability issues in a mass casualty event. Specifically, state officials reported that hospitals and medical providers might be reluctant to provide care during a mass casualty event, when resources would be scarce and not all patients would be able to receive care consistent with established standards. According to these officials, these providers could be subject to liability if decisions they made about altering standards of care resulted in negative outcomes. For example, allowing staff to work outside the scope of their practice, such as allowing nurses to diagnose and write medical orders, could place these individuals at risk of liability. While some states reported using AHRQ’s Mass Medical Care with Scarce Resources: A Community Planning Guide to assist them as they developed altered standards of care guidelines, some states also reported that they needed additional assistance. States said that to develop altered standards of care guidelines they must conduct activities such as collecting and reviewing published guidance and convening experts to discuss how to address the medical, ethical, and legal issues that could arise during a mass casualty event. Four states reported that, when developing their own guidelines on the allocation of ventilators, they were using guidance from another state. This state estimated that a severe influenza pandemic would require nearly nine times the state’s current capacity for intensive care beds and almost three times its current ventilator capacity, which would require the state to address the rationing of ventilators. In March 2006 the state convened a workgroup to consider clinical and ethical issues in the allocation of mechanical ventilators in an influenza pandemic. The state issued guidelines on the rationing of ventilators that include both a process and an evaluation tool to determine which patients should receive mechanical ventilation. 
The guidelines note that the application of this process and evaluation tool could result in withdrawing a ventilator from one patient to give it to another who is more likely to survive—a scenario that does not explicitly exist under established standards of care. Additionally, some states suggested that the federal government could help their efforts in several ways, such as by convening medical, public health, and legal experts to address the complex issues associated with allocating scarce resources during a mass casualty event, or by developing demonstration projects to reveal best practices employed by the various states. Recently, the Task Force for Mass Critical Care, consisting of medical experts from both the public and the private sectors, provided guidelines for allocating scarce critical care resources in a mass casualty event that have the potential to assist states in drafting their own guidelines. The task force’s guidelines, which were published in a medical journal in May 2008, provide a process for triaging patients that includes three components—inclusion criteria, exclusion criteria, and prioritization of care. The exclusion criteria include patients with a high risk of death, little likelihood of long-term survival, and a corresponding low likelihood of benefit from critical care resources. When patients meet the exclusion criteria, critical care resources may be reallocated to patients more likely to survive. Many state officials raised concerns about other federal programmatic and regulatory challenges, such as program funding cycles, decreased federal funding for hospital emergency preparedness, and new requirements for state matching funds. State officials reported that ASPR’s Hospital Preparedness Program’s single-year funding cycles had made planning and operating state emergency preparedness programs challenging, in part because it is difficult to plan and implement program activities in a single year. One state official suggested that using a 3-year funding cycle for the Hospital Preparedness Program would allow for long-term planning with more realistic work plans. It would also allow for more time for program development and less time spent on program administration. ASPR officials said that they were aware of the concern and were considering a transition to a multiyear funding cycle beginning in 2009. Another concern expressed by some state officials was that federal funding for ASPR’s Hospital Preparedness Program had decreased while program requirements had increased, making it difficult for states to plan for maintenance of emergency preparedness systems, meet new requirements, and replace expired supplies. Hospital Preparedness Program funds decreased about 18 percent from fiscal year 2004 to fiscal year 2007. Finally, many state officials were concerned about the new requirement for matching funds. Beginning in fiscal year 2009, states that want to receive ASPR’s Hospital Preparedness Program funds will have to match 5 percent of the federal funds with either state funds or in-kind contributions. Though states have begun planning for medical surge in a mass casualty event, only 3 of the 20 states in our review have developed and adopted guidelines for using altered standards of care. HHS has provided broad guidance that establishes a framework and principles for states to use when developing their specific guidelines for altered standards of care. 
However, because of the difficulty in addressing the related medical, ethical, and legal issues, many states are only beginning to develop such guidelines for use when there are not enough resources, such as ventilators, to care for all affected patients. In a mass casualty event, such guidelines would be a critical resource for medical providers who may have to make repeated life-or-death decisions about which patients get or lose access to these resources—decisions that are not typically made in routine circumstances. Additionally, these guidelines could help address medical providers’ concerns about ethics and liability that may ensue when negative outcomes are associated with their decisions. In its role of assisting states’ efforts to plan for medical surge, HHS has not collected altered standards of care guidelines that some states and medical experts have developed and made them available to other states. Once a mass casualty event occurs, difficult choices will have to be made, and the more fully the issues raised by such choices are discussed prior to making them, the greater the potential for the choices to be ethically sound and generally accepted. To further assist states in determining how they will allocate scarce medical resources in a mass casualty event, we recommend that the Secretary of HHS ensure that the department serve as a clearinghouse for sharing among the states altered standards of care guidelines that have been developed by individual states or medical experts. We requested comments on a draft of this report from HHS, DHS, DOD, and VA. These agencies’ comments are reprinted in appendixes V, VI, VII, and VIII, respectively. In commenting on this draft, HHS said our report was a fair representation of the progress that has been made to improve medical surge capacity. HHS was silent regarding our recommendation that the department serve as a clearinghouse for sharing among the states altered standards of care guidelines developed by individual states or medical experts. HHS provided technical comments, which we incorporated where appropriate. In commenting on this draft, DHS concurred with our findings and raised two issues. With regard to the phrase “altered standards of care,” DHS said that the definition of standard of care implies that the standard does not change but “rather it is the type, or level, of care that is altered,” and that this distinction highlights the need to prepare the public “for a different look to health care” in a mass casualty incident. We agree that efforts to inform the public would be beneficial because of the need for enhanced public awareness about how medical care might be delivered in an emergency, but our report focused on addressing states’ concerns about the medical, ethical, and legal issues involved in drafting altered standards of care guidelines. DHS also characterized our recommendation as calling for “passive guidance” and suggested that HHS may need to explore the possibility of producing guidance to direct states’ discussion on rationing of scarce resources. However, we believe a clearinghouse role is more appropriate for HHS than a directive role because the delivery of medical care is a state, local, and private function. DOD concurred with our findings and conclusions. VA concurred with our findings and said that inconsistencies from state to state regarding VA medical centers’ stance toward treating nonveterans in an emergency stem from the centers’ varying capabilities to provide emergency medical treatment. 
VA said, for example, that not all medical centers provide emergency services or have the same level of emergency supplies. Nevertheless, VA confirmed its authority to provide care in emergency situations and specifically acknowledged that it is authorized to provide emergency care to nonveterans on a humanitarian basis. Finally, VA also highlighted its federal role in responding to disasters under Emergency Support Function #8, the Robert T. Stafford Disaster Relief and Emergency Assistance Act, and the National Response Framework, which was beyond the scope of our report. As arranged with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of HHS and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report or need additional information, please contact me at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report were Karen Doran, Assistant Director; Jeffrey Mayhew; Roseanne Price; Lois Shoemaker; and Cherie’ Starck. The sentinel indicators reported under the Hospital Preparedness Program include the following:
4. Number of beds statewide, above the current daily staffed bed capacity, that awardee is capable of surging beyond within 24 hours.
5. Number of participating hospitals statewide that have access to pharmaceutical caches sufficient to cover hospital personnel (medical and ancillary), hospital-based emergency first responders and family members associated with their facilities for a 72-hour period.
6. Number of participating hospitals statewide that have the capacity to maintain at least one suspected highly infectious disease case in negative pressure isolation.
7. Number of awardees’ defined regions that have regional facilities to support the initial evaluation and treatment of at least 10 adult and pediatric patients at a time in negative pressure isolation within 3 hours post-event.
8. Number of ambulatory and nonambulatory persons that can be decontaminated within a 3-hour period, statewide.
9. Number of health care personnel, statewide, trained through competency-based programs.
10. Number of hospital lab personnel, statewide, trained in the protocols for referral of clinical samples and associated information.
11. Functional state-based ESAR-VHP system in place that allows qualified, competent volunteer health care professionals to register for work in hospitals or other facilities during an emergency situation.
12. Number of volunteer health professionals by discipline and credentialing level currently registered in the state-based ESAR-VHP system.
13. Number of drills conducted during the fiscal year 2006 budget period that included hospital personnel, equipment, or facilities.
14. Number of tabletop exercises conducted during the fiscal year 2006 budget period that included hospital personnel, equipment, or facilities.
15. Number of functional exercises conducted during the fiscal year 2006 budget period that included hospital personnel, equipment, or facilities.
The five sentinel indicators that were analyzed in this report for hospital capacity are 2, 4, 5, 6, and 7.
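To make the criteria behind the five hospital-capacity indicators concrete, the following is a minimal sketch, in Python, of how a state's self-reported figures might be checked against the thresholds described in this report (full hospital participation; 500 additional beds per million population within 24 hours; 72-hour pharmaceutical caches at every participating hospital; negative pressure isolation for at least one suspected case at every participating hospital; and a regional facility able to handle at least 10 patients in isolation within 3 hours). All field names and the example values are hypothetical and are not drawn from any state's actual midyear progress report.

```python
# Minimal sketch: checking hypothetical state-reported figures against the five
# hospital-capacity sentinel indicator criteria described in this report
# (indicators 2, 4, 5, 6, and 7). Field names and values are illustrative only.

def meets_hospital_capacity_criteria(report: dict) -> dict:
    """Return a pass/fail result for each of the five indicators."""
    surge_beds_per_million = (
        report["surge_beds_within_24_hours"] / report["population"] * 1_000_000
    )
    return {
        # Indicator 2: all hospitals participate in the state's program.
        "participation": report["participating_hospitals"] == report["total_hospitals"],
        # Indicator 4: beds for at least 500 additional patients per million
        # population can be added within 24 hours.
        "surge_beds": surge_beds_per_million >= 500,
        # Indicator 5: every participating hospital has a 72-hour pharmaceutical cache.
        "pharmaceutical_caches": report["hospitals_with_72hr_cache"] == report["participating_hospitals"],
        # Indicator 6: every participating hospital can maintain at least one
        # suspected highly infectious disease case in negative pressure isolation.
        "isolation_per_hospital": report["hospitals_with_negative_pressure"] == report["participating_hospitals"],
        # Indicator 7: every defined region has a facility able to evaluate and
        # treat at least 10 patients in negative pressure isolation within 3 hours.
        "regional_isolation": report["regions_with_10_bed_isolation"] == report["total_regions"],
    }

# Hypothetical example for a state of 6 million people; 3,000 surge beds would
# be the 500-per-million threshold for indicator 4.
example_report = {
    "population": 6_000_000,
    "total_hospitals": 120,
    "participating_hospitals": 120,
    "surge_beds_within_24_hours": 3_100,
    "hospitals_with_72hr_cache": 110,
    "hospitals_with_negative_pressure": 120,
    "regions_with_10_bed_isolation": 7,
    "total_regions": 7,
}

if __name__ == "__main__":
    for indicator, met in meets_hospital_capacity_criteria(example_report).items():
        print(f"{indicator}: {'meets criterion' if met else 'does not meet criterion'}")
```

In this hypothetical case the state would meet four of the five criteria and fall short on pharmaceutical caches, mirroring the kind of partial attainment the midyear progress reports describe.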
To determine what assistance the federal government has provided to help states prepare their regional and local health care systems for medical surge in a mass casualty event, particularly related to four key components—hospital capacity, alternate care sites, electronic medical volunteer registries, and altered standards of care—we reviewed and analyzed national strategic planning documents and identified links among federal policy documents on emergency preparedness. We also reviewed and analyzed studies and reports related to medical surge capacity issued by the Congressional Research Service, the Department of Health and Human Services’ (HHS) Office of Inspector General, the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), the Office of the Assistant Secretary for Preparedness and Response (ASPR), the Joint Commission, and other experts. In addition, we obtained and reviewed documents from ASPR to determine the amount of funds awarded to states through its Hospital Preparedness Program’s cooperative agreements. We did not review funding documents from the Department of Homeland Security’s (DHS) Homeland Security Grant Program because the agency does not track the dollar amount spent on medical surge activities. We interviewed officials from ASPR, CDC, and DHS to identify and document criteria and guidance given to state and local entities to plan for medical surge and to learn how federal funds were awarded and utilized. To determine what states have done to prepare for medical surge in a mass casualty event, particularly related to four key components, we obtained and analyzed the 2006 and 2007 ASPR Hospital Preparedness Program cooperative agreement applications and 2006 midyear progress reports (the most current available information—generally effective through March 2007—at the time of our data collection) for the 50 states. We also reviewed the 15 sentinel indicators for the Hospital Preparedness Program. We analyzed the 5 medical-surge-related sentinel indicators for which criteria to evaluate performance were identified and which were reported by the states in their 2006 midyear progress reports. Although ASPR’s 2006 guidance for these reports does not provide specific criteria with which to evaluate performance on these indicators, we identified criteria to analyze the data provided for 5 of them from either ASPR’s previous program guidance or DHS’s Target Capabilities List, which includes requirements related to preparing for medical surge. All 5 of the medical-surge-related sentinel indicators we analyzed were related to one of the four key components—hospital capacity. See appendix I for a list of the 15 sentinel indicators. In addition, we obtained and reviewed 20 states’ emergency preparedness planning documents relating to medical surge and interviewed state officials from these states regarding their activities related to hospital capacity, alternate care sites, electronic medical volunteer registries, and altered standards of care. We also interviewed these state officials to determine what federal guidance or tools they used and to identify the Department of Defense (DOD) and the Department of Veterans Affairs (VA) hospitals’ participation in state planning. Finally, we obtained and reviewed DOD and VA policies and interviewed officials to further understand their policies regarding participation with state and local entities in emergency preparedness planning and responding to mass casualty events. 
To determine what concerns states identified as they prepared for medical surge in a mass casualty event, we interviewed emergency preparedness officials from the 20 states and focused our questions on their efforts related to four key components of medical surge we identified. We also asked what further assistance states might need from the federal government to help prepare their health care systems for medical surge. We did not validate the sentinel indicator data the 50 states reported to ASPR; however, if data for specific indicators were missing or obviously incorrect (e.g., a percentage was greater than 100 percent), we contacted state officials for clarification. We did not examine the accuracy of other self-reported information contained in the midyear progress reports or Hospital Preparedness Program applications from the 20 states we reviewed. During interviews with officials from the 20 states, we discussed the completeness of information provided in their progress reports and applications about four key components related to preparing for medical surge. For each interview, we used a question set that contained open-ended questions. The state emergency preparedness officials we interviewed provided varying levels of detail to answer our questions. Thus our information from these interviews is illustrative and is intended to provide a general description of what the 20 states have done to prepare for medical surge in a mass casualty event and is not generalizable to all 50 states. We conducted our work from May 2007 through May 2008 in accordance with generally accepted government auditing standards. Tables 2, 3, and 4 provide information on ASPR’s Hospital Preparedness Program funding and on guidance and other assistance for states to use in preparing for medical surge. Figures 1 through 5 provide data for the five surge-related sentinel indicators for hospital capacity from ASPR’s Hospital Preparedness Program 2006 midyear progress reports.
Potential terrorist attacks and the possibility of naturally occurring disease outbreaks have raised concerns about the "surge capacity" of the nation's health care systems to respond to mass casualty events. GAO identified four key components of preparing for medical surge: (1) increasing hospital capacity, (2) identifying alternate care sites, (3) registering medical volunteers, and (4) planning for altering established standards of care. The Department of Health and Human Services (HHS) is the primary agency for hospital preparedness, including medical surge. GAO was asked to examine (1) what assistance the federal government has provided to help states prepare for medical surge, (2) what states have done to prepare for medical surge, and (3) concerns states have identified related to medical surge. GAO reviewed documents from the 50 states and federal agencies. GAO also interviewed officials from a judgmental sample of 20 states and from federal agencies, as well as emergency preparedness experts. Following a mass casualty event that could involve thousands, or even tens of thousands, of injured or ill victims, health care systems would need the ability to "surge," that is, to adequately care for a large number of patients or patients with unusual medical needs. The federal government has provided funding, guidance, and other assistance to help states prepare for medical surge in a mass casualty event. From fiscal years 2002 to 2007, the federal government awarded the states about $2.2 billion through the Office of the Assistant Secretary for Preparedness and Response's Hospital Preparedness Program to support activities to meet their preparedness priorities and goals, including medical surge. Further, the federal government provided guidance for states to use when preparing for medical surge, including Reopening Shuttered Hospitals to Expand Surge Capacity, which contains a checklist that states can use to identify entities that could provide more resources during a medical surge. Based on a review of state emergency preparedness documents and interviews with 20 state emergency preparedness officials, GAO found that many states had made efforts related to three of the key components of medical surge, but fewer have implemented the fourth. More than half of the 50 states had met or were close to meeting the criteria for the five medical-surge-related sentinel indicators for hospital capacity reported in the Hospital Preparedness Program's 2006 midyear progress reports. For example, 37 states reported that they could add 500 beds per million population within 24 hours of a mass casualty event. In a 20-state review, GAO found that all 20 were developing bed reporting systems and most were coordinating with military and veterans hospitals to expand hospital capacity, 18 were selecting various facilities for alternate care sites, 15 had begun electronic registering of medical volunteers, and fewer of the states--7 of the 20--were planning for altered standards of medical care to be used in response to a mass casualty event. State officials in GAO's 20-state review reported that they faced challenges relating to all four key components in preparing for medical surge. For example, some states reported concerns related to maintaining adequate staffing levels to increase hospital capacity, and some reported concerns about reimbursement for medical services provided at alternate care sites. 
According to some state officials, volunteers were concerned that if state registries became part of a national database they might be required to provide services outside their own state. Some states reported that they had not begun work on or completed altered standards of care guidelines due to the difficulty of addressing the medical, ethical, and legal issues involved in making life-or-death decisions about which patients would get access to scarce resources. While most of the states that had adopted or were drafting altered standards of care guidelines reported using federal guidance as they developed these guidelines, some states also reported that they needed additional assistance.
You are an expert at summarizing long articles. Proceed to summarize the following text: The DD(X) destroyer is a multimission surface ship designed to provide advanced land attack capability in support of forces ashore and contribute to U.S. military dominance in littoral operations. Among its planned features is the ability to engage land targets from long ranges using its 155-millimeter guns and Tomahawk land attack cruise missiles. The ship will also feature reduced radar, acoustic, and heat signatures to increase survivability in the littorals. In November 2001, the Navy restructured the program to focus on developing and maturing a number of transformational technologies. These technologies will provide a baseline to support development of a range of future surface ships such as the future cruiser and the Littoral Combat Ship. The DD(X) program is managing risk by designing, developing, and testing 10 engineering development models for the program’s critical technologies. Each of the 10 engineering development models represents an experimental subsystem of DD(X) and may incorporate more than one transformational technology. The key events in the DD(X) schedule are shown in figure 1. The program completed its system-level preliminary design review in March 2004 and is currently in system design. The next major event occurs in March 2005, when the Navy will seek authority to commit research, development, test and evaluation funds for detailed design and construction of the lead ship. The program’s system-level critical design review will be held late in fiscal year 2005 after the lead ship authorization and will assess design maturity. The current contract for design and development of DD(X) ends in September 2005. Further design and development activities, including detailed design and construction, will take place under a new contract to be awarded in March 2005. The Conference Report to the fiscal year 2005 Defense Appropriations Act states that the funds appropriated for DD(X) in the act are limited to design and advanced procurement requirements for the first two ships. The Conference Report further directs that no funds are available for the procurement of materials dependent upon delivery of key DD(X) technologies unless those technologies have undergone testing. The Conference Report also states that the Navy should complete land-based testing of the advanced gun system and integrated power system prior to the completion of the critical design review. The Navy is developing 12 technologies for DD(X) using 10 engineering development models. Engineering development models seek to demonstrate key DD(X) subsystems and may involve more than one critical technology (see table 1). The engineering development models are the most significant aspect of the program’s risk reduction strategy. To demonstrate technologies, each DD(X) development model follows a structured approach for design, development, and testing. Initially, requirements for each of the development models are defined and recorded in a common database. The risk of not meeting these requirements is assessed and strategies are formulated to reduce these risks. Once designs are formulated, components are tested to build knowledge about a subsystem’s viability. In testing, the performance of engineering development models is confirmed. It is these tests that provide confidence in a technology’s ability to operate as intended. Once the technology is demonstrated, the subsystem can be integrated into the ship’s system design.
Our reviews of commercial and Department of Defense acquisition programs have identified a number of specific practices that ensure that high levels of knowledge are achieved at key junctures in development and used to make investment decisions. The most important practice is achieving a high level of technology maturity at the start of system development. A technology reaches full maturity when its performance is successfully demonstrated in its intended environment. Maturing a technology to this level before incorporating it into system design and development can reduce risk by creating confidence that a technology will work as expected and allows the developer to focus on integrating mature technologies into the ship design. This improves the ability to establish realistic cost, schedule, and performance objectives as well as the ability to meet them. Including the technologies in system development before reaching maturity raises the risk of discovering problems late that can increase the cost and time needed to complete design and fabrication. The Navy’s use of engineering development models to mature DD(X) technologies represents a disciplined process for generating the information needed for development, and corresponds with portions of the best practices approach. In using engineering development models, the Navy seeks to achieve high levels of technology maturity by first defining the requirements and risks of a developmental technology and then executing a series of tests to reduce these risks and prove the utility of a technology in its intended environment. The progress of technology maturity is recorded and communicated clearly through the use of established metrics, affording the program manager and others readily available information for use in decision making. The program’s schedule, however, does not allow most engineering development models to generate sufficient knowledge before key decisions are made. None of the DD(X) technologies included in the 10 engineering development models were mature at the start of system design and none are expected to be mature at the March 2005 decision to authorize detailed design and construction of the lead ship. Under the current schedule, 7 of the 10 subsystems will not be demonstrated until the end of the program’s critical design review in August 2005 or beyond. The decision to authorize award of the contract for detail design and construction of the lead ship will thus be made before the technologies are proven and the design is stable. By the end of the critical design review, only three subsystems are expected to have completed testing: the autonomic fire suppression system, the hull form, and the infrared mockups. The integrated power system, peripheral vertical launch system, and total ship computing environment complete testing just after the critical design review. The remaining four subsystems complete testing well after critical design review or are not tested as fully integrated systems until after installation on the first ship. The Navy is aware of the risks presented by its schedule but stated that exit criteria have been established for milestone decisions which ensure requirements will be met. Program officials further stated that, according to the Department of Defense acquisition policy, technologies for ships do not have to be mature until shipboard installation.
Our reviews of commercial best practices identified a second critical practice that increases a program’s chances of success: achieving design stability by the system-level critical design review. For a stable design, subsystems are integrated into a product that meets the requirements of the user. Design stability requires detailed knowledge of the form, fit, and function of all technologies as well as the integration of individual, fully matured subsystems. Stability of design allows for testing to prove system reliability and leads into production planning. Most of the testing of the engineering development models will take place in the months immediately before and after critical design review and beyond. Even if the models proceed with complete success, they will not be done in time to achieve design stability at the critical design review. If problems are found in testing, as has been the case with other programs, they could result in changes in the design, delays in product delivery, and increases in product cost. Detailed knowledge about subsystems and their component technologies is necessary for developing the system design. If this information is not available and assumptions about operating characteristics have to be made, redesign may be necessary when reliable information becomes available. This can increase the schedule and the costs of system design. Unstable system design could also affect construction. Higher construction costs are likely to be incurred if work is done inefficiently or if changes result in rework. One example of the consequences of technology and design immaturity already apparent in the DD(X) program is the development of the dual band radar and its impact on the integrated deckhouse. The dual band radar consists of two separate radar technologies and will not complete testing until fiscal year 2008. Due to this lengthy period of testing, the dual band radar may not be installed until the first ship is afloat. Contractors have stated that this schedule has led to the need for increased funding. Because the dual band radar will not be fully tested by critical design review, program officials have had to make some assumptions about where in the deckhouse it will be placed. If the weight of the radar increases or if other technical factors cause it to be relocated, a redesign effort may be needed to assure that requirements are met. As the deckhouse forms a significant portion of the DD(X), redesign could have an impact on the ship as a whole. Other shipbuilding programs have developed strategies that call for maturing critical technologies while still providing decision makers with relatively high levels of knowledge at key decision points. For example, the CVN-21 future aircraft carrier program has a risk reduction strategy that defines a timeline for making decisions about a technology’s maturity. The majority of these decisions are made early in the system design phase prior to the system critical design review. This should allow the system design to proceed in integrating technologies with the assurance that they will work in their intended environment. Lead ship authorization occurs after critical design review so that the maturity of the design can be demonstrated before a decision is made. The DD(X) program entered its system design phase without the majority of its technologies completing their design or component testing stage.
These activities include events like design reviews for the integrated power system and damage testing on components of the peripheral vertical launch system. The only development model beyond these initial stages is the hull form, which has completed its initial tests and simulations and is now entering a second design and test phase. Testing subsystems to demonstrate whether they will work in their intended environment is scheduled to begin for most development models in fiscal year 2005 and will continue, in some cases, beyond fiscal year 2006, as shown in table 2. Four of the 10 engineering development models—the total ship computing environment, the peripheral vertical launch system, the hull form, and the infrared mockups—are progressing as planned toward demonstrating complete subsystems. However, challenges have arisen with other development models. The impact of some of these challenges has been mitigated with minimal change to the program, but others remain unresolved or have resulted in rescheduling and cost growth. Only two of the engineering development models, the hull form and the integrated power system, have fallback technologies that could be used if current technologies do not meet requirements. All other engineering development models could necessitate system-level redesign if they fail to mature technologies to meet requirements. We have already noted the challenges with the dual band radar and its impact on the integrated deckhouse. Other challenges are highlighted below. Program officials agreed with our assessment of DD(X) program risks, but believe these risks can be mitigated through use of fallback technologies and design budgeting. Design budgeting refers to the practice of building in extra margins, such as weight and space, to accommodate growth as the design matures. Appendix I provides details on the status of all 10 engineering development models. The integrated power system is currently exceeding its weight allowances by a significant amount and has used up its entire additional design margin for weight. This means that any further increases in weight could affect other systems or result in an unplanned and unbudgeted weight reduction program. The power system has also experienced some software compliance issues with the total ship computing environment. Program officials have defined the software issue and are working toward a solution. In addition, the testing schedule for the power system has been altered due to changes to the dual band radar. Program officials had planned to test the two subsystems together in at-sea tests on a surrogate vessel. When the delays in testing for dual band radar occurred, at-sea tests for the power system were cancelled. To compensate for the loss of knowledge that was to be gained by this testing, the program office plans for increased fidelity in land-based testing. Plans for the integrated power system do include the use of a fallback technology. Use of the technology would require some trade-offs in performance, weight, and noise requirements. In its comments on this report, the Department of Defense stated that the Navy has allocated additional margin from the total ship design to account for weight growth in the integrated power system. While this adjustment in overall ship margin does not directly impact the overall ship design, it may leave less space for future growth in other systems.
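To make the weight-margin discussion above concrete, the minimal sketch below shows how the "design budgeting" bookkeeping described by program officials might work: growth is charged against a reserved margin, and a shortfall at the subsystem level must be covered from the total-ship margin. The subsystem names follow the report, but every weight figure here is a hypothetical illustration, not program data.

```python
# A minimal sketch of the "design budgeting" practice described above: weight growth is
# charged against a reserved margin, and a subsystem shortfall must be covered from the
# total-ship margin. Subsystem names follow the report; all weight figures are hypothetical.

def remaining_margin(margin, charges):
    """Margin left after charging each growth item against it (negative means exhausted)."""
    return margin - sum(charges.values())

subsystem_margin_left = remaining_margin(
    margin=20.0,  # hypothetical tons of margin reserved for the integrated power system
    charges={"integrated power system growth": 28.0},  # hypothetical growth beyond its allowance
)

if subsystem_margin_left < 0:
    # The shortfall is covered from the total-ship margin, leaving less room for growth
    # in other systems -- the trade-off the report describes.
    ship_margin_left = remaining_margin(
        margin=50.0,  # hypothetical total-ship weight margin
        charges={"power system shortfall": -subsystem_margin_left},
    )
    print(subsystem_margin_left, ship_margin_left)  # -8.0 42.0
```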
While the autonomic fire suppression system exceeded expectations in early tests, sustaining significant damage and still controlling the fire, some challenges have arisen that delayed later testing. Like the power system, the fire suppression system experienced compatibility issues with the total ship computing environment. These issues have been recognized and the program office has identified solutions to resolve them. These software compatibility issues caused a delay in the system tests that pushed their completion beyond the system-level critical design review. The current testing plans for the integrated undersea warfare system include testing of the dual frequency sonar array for internal interference, the ability of the high frequency portion of this array to detect mines, and the software necessary to integrate all functions and reduce the sailor’s workload. Though these tests may prove the functionality of components and technologies within the undersea warfare system, they do not demonstrate the system as a whole. As a result, when the current series of testing concludes in May 2005, the undersea warfare system will not have demonstrated operations in the intended environment. While development of the advanced gun system is proceeding as planned and has even overcome early challenges in design and development, the current plans do not include fully demonstrating the maturity of the subsystem. Land-based testing of the gun system, including the automated mount and magazine, is planned for the summer of 2005 and flight tests for the munition are set to complete in September 2005. However, the two technologies will not be tested together until after ship installation. Program officials cited lack of adequate test facilities as the reason for the separate tests. In commenting on a draft of this report, DOD stated that it is appropriate to undertake a reasonable amount of risk in the DD(X) lead ship, given the long production lead time in shipbuilding. It noted that the DD(X) risk mitigation approach represents the management of finite resources to achieve innovation and to implement a cost-effective test plan designed to address those risks. DOD stated that the DD(X) schedule supports readiness of all the engineering development models in time for ship installation, which, for shipbuilding programs, is the most relevant point of reference for technology maturity, as DOD policy indicates technologies for ships do not have to be mature until that time. DOD concluded that the DD(X) engineering development models are on track to support a milestone B decision in March 2005 to authorize the lead ship and to achieve maturity prior to installation. DOD pointed out that it had selected exit criteria for that decision to provide for assessments of critical technologies and that results of all required tests will be available for the decision. DOD made specific comments on individual engineering development models, which we address elsewhere in the report. As noted in the draft report, we believe the approach the Navy has taken to demonstrate DD(X) technologies through the engineering development models is both structured and disciplined. However, the short amount of time between lead ship authorization and ship launch (5 years and 3 months), together with the fact that virtually every major subsystem on the ship depends on a new technology or novel use of existing technologies, frames a challenge that involves significant risk.
While tests on some key subsystems are scheduled to be conducted by the milestone B decision, these tests are to demonstrate the functionality of components but not the subsystems. Thus, the full demonstration of technology maturity and the resolution of unknowns will continue beyond the milestone decision. Our past work on best practices has shown that technological unknowns discovered later in product development lead to cost increases and schedule delays. Two key factors that can mitigate the effect of such risks—time in the schedule to address problems and the availability of backup technologies—are largely unavailable for the DD(X) program. While DOD policy allows for technologies to mature up to the point of ship installation, it does not necessarily follow that this is a best practice. In fact, DD(X) will proceed from the start of development to initial capability in about the same time as other non-shipbuilding systems for which DOD does call for demonstration of technology maturity before development start. We plan to provide copies of this report to the Senate Armed Services Committee; the Senate Committee on Appropriations, Subcommittee on Defense; and the House Committee on Appropriations, Subcommittee on Defense. We also will provide copies to the Director, Office of Management and Budget; the Secretary of Defense; and the Secretary of the Navy. We will make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or Karen Zuckerstein, Assistant Director, at (202) 512-6785. Major contributors to this report are J. Kristopher Keener, Angela D. Thomas, and Karen Sloan. The following are GAO’s comments on the Department of Defense’s letter dated August 20, 2004. 1. Change to the ship schedule incorporated into the body of the report. 2. The period from February to July includes only testing of the permanent magnet motor, one component of the integrated power system. The date in the report was changed to July 2005 to reflect the beginning of full system testing of the integrated power system. 3. This is not a GAO conclusion. The statement is based on statements provided by the Navy as well as industry contractors. 4. Our discussion of the technology and design maturity of the dual band radar and the integrated deckhouse deals with the impact of the Navy’s decision to change radar frequency, not the reason for the decision.
The DD(X) destroyer--a surface ship intended to expand the Navy's littoral warfare capabilities--depends on the development of a number of new technologies to meet its requirements. The Navy intends to authorize detailed design and construction of the first ship in March 2005. GAO's past work has shown that developing advanced systems that rely heavily on new technologies requires a disciplined, knowledge-based approach to ensure cost, schedule, and performance targets are met. Best practices show, for example, that a program should not be launched before critical technologies are sufficiently matured--that is, the technology has been demonstrated in its intended environment--and that a design should be stabilized by the critical design review. Given the complexity of the DD(X) system and the number of new technologies involved, GAO was asked to describe the Navy's acquisition strategy for DD(X) and how it relates to best practices, and how efforts to mature critical technologies are proceeding. To reduce program risk, the Navy plans to build and test 10 developmental subsystems, or engineering development models, that comprise DD(X)'s critical technologies. While using these models represents a structured and disciplined approach, the program's schedule does not provide for the engineering development models to generate sufficient knowledge before key decisions are made. None of the technologies in the 10 engineering development models was proven to be mature when system design began, as best practices recommend. Moreover, the Navy does not plan to demonstrate DD(X) technology maturity and design stability until after the decision to authorize construction of the lead ship, creating risk that cost, schedule, and performance objectives will not be met. With many of the tests to demonstrate technology maturity occurring around the time of critical design review in late fiscal year 2005, there is the risk that additional time and money will be needed to address issues discovered in testing. Some of the technologies are progressing according to the Navy's plans, while others have experienced challenges. Four of the 10 engineering development models--the total ship computing environment, the peripheral vertical launch system, the hull form, and the infrared mockups--are progressing as planned toward demonstrating complete subsystems. However, four other models--the integrated power system, the autonomic fire suppression system, the dual band radar, and the integrated deckhouse--have encountered some problems. At this point, the most serious appear to be the schedule delay in the dual band radar resulting from the Navy's decision to change one radar type and the additional weight of the integrated power system. The two remaining engineering development models--the integrated undersea warfare system and the advanced gun system--are progressing as planned, but will not culminate in the demonstration of complete subsystems before being installed on the first ship. While the Navy has fallback technologies for the hull form and the integrated power system, it does not have such plans for the other eight engineering development models.
You are an expert at summarizing long articles. Proceed to summarize the following text: While numerous military aircraft provide refueling services, the bulk of U.S. refueling capability lies in the Air Force fleet of 59 KC-10 and 543 KC-135 aircraft. These are large, long-range aircraft that have counterparts in the commercial airlines, but which have been modified to turn them into tankers. The KC-10 is based on the DC-10 aircraft, and the KC-135 is similar to the Boeing-707 airliner. Because of their large numbers, the KC-135s are the mainstay of the refueling fleet, and successfully carrying out the refueling mission depends on the continued performance of the KC-135s. Thus, recapitalizing this fleet of KC-135s will be crucial to maintaining aerial refueling capability, and it will be a very expensive undertaking. There are two basic versions of the KC-135 aircraft, designated the KC-135E and KC-135R. The R model aircraft have been re-fitted with modern engines and other upgrades that give them an advantage over the E models. The E-model aircraft on average are about 2 years older than the R models, and the R models provide more than 20 percent greater refueling capacity per aircraft. The E models are located in the Air National Guard and Air Force Reserve. Active forces have only R models. Over half the KC-135 fleet is located in the reserve components. The rest of the DOD refueling fleet consists of Air Force HC- and MC-130 aircraft used by special operations forces, Marine Corps KC-130 aircraft, and Navy F-18 and S-3 aircraft. However, the bulk of refueling for Marine and Navy aircraft comes from the Air Force KC-10s and KC-135s. These aircraft are capable of refueling Air Force and Navy/Marine aircraft, as well as some allied aircraft, although there are differences in the way the KC-10s and KC-135s are equipped to do this. The KC-10 aircraft are relatively young, averaging about 20 years in age. Consequently, much of the focus on modernization of the tanker fleet is centered on the KC-135s, which were built in the 1950s and 1960s, and now average about 43 years in age. While the KC-135 fleet averages more than 40 years in age, the aircraft have relatively low levels of flying hours. The Air Force projects that E and R models have lifetime flying hours limits of 36,000 and 39,000 hours, respectively. According to the Air Force, only a few KC-135s would reach these limits before 2040, but at that time some of the aircraft would be about 80 years old. Flying hours for the KC-135s averaged about 300 hours per year between 1995 and September 2001. Since then, utilization is averaging about 435 hours per year. According to Air Force data, the KC-135 fleet had a total operation and support cost in fiscal year 2001 of about $2.2 billion. The older E model aircraft averaged total costs of about $4.6 million per aircraft, while the R models averaged about $3.7 million per aircraft. Those costs include personnel, fuel, maintenance, modifications, and spare parts. The Air Force has a goal of an 85 percent mission capable rate. Mission capable rates measure the percent of time on average that an aircraft is available to perform its assigned mission. KC-135s in the active duty forces are generally meeting the 85 percent goal for mission capable rates. Data on the mission capable rates for the KC-135 fleet are shown in table 1. 
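The lifetime flying-hour limits cited above can be sanity-checked with simple arithmetic. The sketch below uses the 36,000-hour E-model limit and the roughly 435 flying hours per year reported since September 2001; the accumulated-hours figure is an illustrative assumption (about 43 years at the historical average of roughly 300 hours per year), not data on any specific airframe.

```python
# Rough arithmetic behind the lifetime flying-hour limits discussed above. The 36,000-hour
# limit and the ~435 hours-per-year utilization come from the text; the accumulated-hours
# figure is an illustrative assumption (roughly 43 years at the ~300-hour historical average).

def years_until_limit(lifetime_limit, accumulated_hours, annual_hours):
    """Years of flying remaining before an airframe reaches its lifetime flying-hour limit."""
    return (lifetime_limit - accumulated_hours) / annual_hours

print(years_until_limit(36_000, 13_000, 435))  # ~52.9 more years for a typical E model
print(36_000 / 435)                            # ~82.8 years even for an airframe with zero hours
```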
For comparison purposes, the KC-10 fleet is entirely in the active component, and the 59 KC-10s had an average mission capable rate during the same period of 81.2 percent. By most indications, the fleet has performed very well during the past few years of high operational tempo. Operations in Kosovo, Afghanistan, Iraq, and here in the United States in support of Operation Noble Eagle were demanding, but the current fleet was able to meet the mission requirements. Approximately 150 KC-135s were deployed to the combat theater for Operation Allied Force in Kosovo, about 60 for Operation Enduring Freedom in Afghanistan, and about 150 for Operation Iraqi Freedom. Additional aircraft provided “air bridge” support for movement of fighter and transport aircraft to the combat theater, for some long-range bomber operations from the United States, and, at the same time, to help maintain combat air patrols over major U.S. cities since September 11, 2001. Section 8159 of the Department of Defense Appropriations Act for fiscal year 2002, which authorized the Air Force to lease the KC-767A aircraft, also specified that the Air Force could not commence lease arrangements until 30 calendar days after submitting a report to the House and Senate Armed Services and Appropriations Committees (1) outlining implementation plans and (2) describing the terms and conditions of the lease and any expected savings. The Air Force has stated that it will not proceed with the lease until it receives approval of the New Start Notification from all of the committees. The Air Force also submitted the report of the proposed lease to the committees as required by section 8159. I will now summarize the key points that the Air Force made in this report to the committees: The Air Force pointed out that aerial refueling helps to support our nation’s ability to respond quickly to operational demands anywhere around the world. This is possible because aerial refueling permits other aircraft to fly farther, stay aloft longer, and carry more weapons, equipment, or supplies. The Air Force indicated that KC-135 aircraft are aging and becoming increasingly costly to operate due to corrosion, the need for major structural repair, and increasing rates of inspection to ensure air safety. Moreover, the report indicates that the Air Force believes it is incurring a significant risk by having 90 percent of its aerial refueling capability in a single, aging airframe. The Air Force considered maintaining the current fleet until about 2040 but concluded that the risk of a “fleet-grounding” event made continued operation of the fleet unacceptable, unless it began its recapitalization immediately. The Air Force considered replacing the KC-135 (E model) engines with new engines but rejected this changeover since it would not address aircraft corrosion and other age-related concerns. The Air Force eventually plans to replace all 543 KC-135 aircraft over the next 30 years and considered lease and purchase alternatives to acquire the first 100 aircraft. The Air Force added traditional procurement funding to the fiscal year 2004-2009 Future Years Defense Program in order that 100 tankers would be delivered between fiscal years 2009 and 2016. Conversely, the report states that under the lease option, all 100 aircraft could be delivered from fiscal years 2006 to 2011.
To match that delivery schedule under a purchase option, the Air Force stated that it would have to reprogram billions of dollars already committed to other uses. Office of Management and Budget Circular A-94 directs a comparison of the present value of lease versus purchase before executing a lease. In its report, the Air Force estimated that purchasing would be about $150 million less than leasing on a net present value basis. The Air Force plans to award a contract to a special purpose entity created to issue bonds needed to raise sufficient capital to purchase the new aircraft from Boeing and to lease them to the Air Force. The lease will be a three-party contract between the government, Boeing, and the special purpose entity. The entity is to issue bonds on the commercial market based on the strength of the lease and not the creditworthiness of Boeing. Office of Management and Budget Circular A-11 requires that an operating lease meet certain terms and conditions including a prohibition on paying for more than 90 percent of the fair market value of the asset over the life of the lease at the time that the lease is initiated. The report to Congress states that the Defense Department believes the proposed lease meets those criteria. If Boeing sells comparable aircraft during the term of the contract to another customer for a lower price than that agreed to by the Air Force, the government would receive an “equitable adjustment.” The report also states that Boeing has agreed to a return-on-sales cap of 15 percent and that an audit of its internal cost structure will be conducted in 2011, with any return on sales exceeding 15 percent reimbursed to the government. According to the report, if the government were to terminate the lease, it must do so for all of the delivered aircraft and may terminate any planned aircraft for which construction has not begun, must give 12-months advance notification prior to termination, return the aircraft, and pay an amount equal to 1 year’s lease payment for each aircraft terminated. If termination occurs before all aircraft have been delivered, the price for the remaining aircraft would be increased to include unamortized costs incurred by the contractor that would have been amortized over the terminated aircraft and a reasonable profit on those costs. The government will pay for and the contractor will obtain commercial insurance to cover aircraft loss and third party liability, as part of the lease agreement. Aircraft loss insurance is to be in the amount of $138.4 million per aircraft in calendar year 2002 dollars. Liability insurance will be in the amount of $1 billion per occurrence per aircraft. If any claim is not covered by insurance, the Air Force will indemnify the special purpose entity for any claims from third parties arising out of the use, operation, or maintenance of the aircraft under the contract. At the expiration of the lease, the Air Force will return the aircraft to the special purpose entity after removing, at government expense, any Air Force unique configurations. The contractor will warrant that each aircraft will be free from defects in materials and workmanship, and the warranty will be of 36 months duration and will commence after construction of the commercial Boeing 767 aircraft, but before they have been converted into aerial refueling aircraft. 
Upon delivery to the Air Force, each KC-767A aircraft will carry a 6-month design warranty, 12-month material and workmanship warranty on the tanker modification, and the remainder of the original warranty on the commercial components of the aircraft, estimated to be about 2 years. Because we have only had the Air Force report for a few days, we do not have any definitive analytical results. However, we do have a number of questions and observations about the report that we believe are important for the Congress to explore in reaching a decision on the Air Force proposal. 1. What is the full cost to acquire and field the KC-767A aircraft under the proposed lease (and assuming the exercising of an option to purchase at the conclusion of the lease)? While the report includes the cost of leasing, it does not include the costs of buying the tankers at the end of the lease. The report shows a present value of the lease payments of $11.4 billion and a present value of other costs, such as military construction and operation and support costs, of $5.8 billion. This totals $17.2 billion. If the option to purchase were exercised, the present value of those payments would be $2.7 billion. Adding these costs to the present value of the lease payments and other costs, this would total $19.9 billion in present value terms. The costs of the leasing plan have also been presented as $131 million per plane for the purchase price, with $7.4 million in financing costs per plane, both amounts in calendar year 2002 dollars. If the option to purchase were exercised, the price paid would be $35.1 million per plane in calendar year 2002 dollars. Adding all of these costs together, the cost of leasing plus buying the planes at the end of the lease would total $173.5 million per plane in calendar 2002 dollars or $17.4 billion for the 100 aircraft. (The arithmetic behind these totals is recapped in the sketch that follows this list of questions.) 2. How strong is the Air Force’s case for the urgency of this proposal? As far back as our 1996 report, we said that the Air Force needed to start planning to replace the KC-135 fleet, but until the past year and a half, the Air Force had not placed high priority on replacement in its procurement budget. While the KC-135 fleet is old and is increasingly costly to maintain due mainly to age-related corrosion, there has been no indication that mission capable rates are falling or that the aircraft cannot be operated safely. By having 90 percent of its refueling fleet in one aircraft type, the Air Force for some years now has been accepting the risk of fleetwide problems that could ground the entire fleet; it is really a question of how much risk the Air Force and the Congress are willing to accept, and for how long. 3. How will the special purpose entity work? Under the Air Force proposal, the 767 aircraft would be owned by a special purpose entity and leased to the Air Force. This is a new concept for the Air Force, and the workings of this entity have not been presented in detail. It is important for the Congress to understand how this concept will work and how the government’s interests are protected under such an arrangement. For example, what audit rights does the government have? Will financial records be available for public scrutiny? 4. What process did the Air Force follow to assure itself that it obtained a reasonable price? Because this aircraft is being acquired under the Federal Acquisition Regulation, the Air Force is required to assure itself through market analysis and other means that the price it is paying is reasonable and fair.
To assess this issue, we would need to know how much of the $131 million purchase price is attributable to the basic 767 commercial aircraft and how much represents the cost of modifications to convert it to a tanker. There is an ample market for commercial 767s, and the Air Force should have some basis for comparison to assess the reasonableness of that part of the price. The cost of the modifications is more difficult to assess, and the Air Force has not provided us the data to analyze this cost. It would be useful for the Congress to understand the process the Air Force followed. 5. Does the proposed lease comply with the OMB criteria for an operating lease? Office of Management and Budget Circular A-11 provides criteria that must be met for an operating lease. The Air Force report says that the proposal complies with the criteria, but the report points out that one of the criteria is troublesome for this lease. This criterion, in particular, provides that in order for an agreement to be considered an operating lease, the present value of the minimum lease payments over the life of the lease cannot exceed 90 percent of the fair market value of the asset at the inception of the lease. Depending on the fair market value used, the net present value of the lease payments in this case may exceed 90 percent of initial value. Specifically, if the fair market value is considered to include the cost of construction financing, then the lease payments would represent 89.9 percent. If the fair market value were taken as $131 million per aircraft, which is the price the special purpose entity will pay to Boeing, then the lease payments would represent 93 percent. We do not have a position at this time on which is the more valid approach, but we believe the Air Force was forthright in presenting both figures in its report. Congress will need to consider whether this is an important issue and which figure is most appropriate for this operating lease. 6. Did the Air Force comply with OMB guidelines for lease versus purchase analysis in its report? Circular A-94 specifies how lease versus purchase analysis should be conducted. Our preliminary analysis indicates that the Air Force followed the prescribed procedures, but we have not yet had time to validate the Air Force’s analysis or the reasonableness of the assumptions. The Air Force reported that under all assumptions and scenarios considered, leasing is more expensive than purchasing, but by only about $150 million under its chosen assumptions. In a footnote, however, the report points out that if the comparison were to a multiyear procurement, the difference in net present value would be $1.9 billion favoring purchase. 7. Why does the proposal provide for as much as a 15 percent profit on the aircraft? The Air Force report indicates that Boeing could make up to 15 percent profit on the 767 aircraft. However, since this aircraft is basically a commercial 767 with modifications to make it a military tanker, a question arises about why the 15 percent profit should apply to the full cost. One financial analysis published recently said that Boeing’s profit on commercial 767s is in the range of 6 percent. Did the Air Force consider having a lower profit margin on that portion of the cost, with the 15 percent profit applying to the military-specific portion? This could lower the cost by several million dollars per aircraft.
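The arithmetic behind question 1 and the Circular A-11 threshold in question 5 can be re-added directly from the figures cited above. The sketch below is purely illustrative: every number comes from the testimony, and the only assumption is the rounding noted in the comments.

```python
# Cross-check of the lease figures cited in the testimony (questions 1 and 5). All dollar
# amounts come from the text; this is an illustrative tally, not an official cost model.

# Question 1: per-aircraft cost of leasing and then buying (calendar year 2002 dollars, millions).
purchase_price = 131.0   # price the special purpose entity pays Boeing per aircraft
financing_cost = 7.4     # financing cost per aircraft under the lease
buyout_price = 35.1      # price to purchase each aircraft at the end of its lease

per_aircraft_total = purchase_price + financing_cost + buyout_price  # 173.5
fleet_total_billions = per_aircraft_total * 100 / 1000               # 17.35, rounded to $17.4 billion in the text

# Question 1, present value view (billions): lease payments + other costs + purchase option.
pv_total_billions = 11.4 + 5.8 + 2.7  # 19.9

# Question 5: the OMB Circular A-11 screen caps the present value of minimum lease payments
# at 90 percent of fair market value. With a $131 million fair market value, the cap is:
max_allowable_pv = 0.9 * purchase_price  # 117.9 per aircraft; the report says the actual
                                         # ratio is 93 percent on this basis (89.9 percent if
                                         # fair market value includes construction financing)

print(per_aircraft_total, fleet_total_billions, pv_total_billions, max_allowable_pv)
```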
In addition to the questions and observations presented above on the Air Force report to the Congress, we believe there are a number of additional considerations that Congress may want to explore, including the following: What is the status of the lease negotiations? The Air Force has informed us that the lease is still in draft and under negotiation. We believe it is important for the Congress to have all details of the lease finalized and available to assure that there are no provisions that might be disadvantageous to the government. Just last Friday, the Air Force let us read the draft lease in the Pentagon but has not provided us with a copy of it, so we have not had time to review it in detail. What other costs are associated with this lease agreement? In addition to the lease payments, the Air Force has proposed about $600 million in military construction, and it has negotiated with Boeing for training costs and maintenance costs related to the lease agreement that could total about $6.8 billion over the course of the lease. In addition, AF documents indicate that there are other costs for things like insurance premiums (estimated to be about $266 million) and government contracting costs. Given the cost of the maintenance agreement, how has the Air Force assured itself that it received a good price? The Air Force estimates that the maintenance agreement with Boeing will cost between $5 billion and $5.7 billion during the lease period. It has negotiated an agreement with Boeing as part of the lease negotiations, covering all maintenance except flight-line maintenance to be done by Air Force mechanics. This represents an average of about $50 million per aircraft, with each aircraft being leased for 6 years, or over $8 million per year. We do not know how the Air Force determined that this was a reasonable price or whether competition might have yielded a better value. A number of commercial airlines and maintenance contractors already maintain the basic 767 commercial aircraft. What happens when the lease expires? At the end of each 6-year lease, the aircraft are supposed to be returned to the owner, the special purpose entity, or be purchased by the Air Force for their residual value, estimated at about $44 million each in then-year dollars. If the aircraft were returned, the Air Force tanker fleet would be reduced, and the Air Force would have to find some way to replace the lost capability even though lease payments would have paid almost the full cost of the aircraft. In addition, the Air Force would have to pay an additional estimated $778 million if the entire 100 aircraft were returned; this provision is intended to cover the cost of removing military-specific items. For these reasons, returning the aircraft would probably make little sense, and the Congress would almost certainly be asked to fund the purchase of the aircraft at their residual value when the leases expire. How is termination liability being handled? If the lease is terminated prematurely, the Air Force must pay Boeing 1 year’s lease payment. Ordinarily, under budget scoring rules, the cost of the termination liability would have to be obligated when the lease is signed. Because this could amount to $1 billion to $2 billion for which the Air Force would have to have budget authority, this requirement was essentially waived by Section 8117 of the Fiscal Year 2003 Department of Defense Appropriation Act. 
This means that if the lease were terminated, the Air Force would have to find the money in its budget to pay the termination amount or come to Congress for the appropriation. If the purpose of the lease is to “kick-start” replacement of the KC-135 fleet—as the Air Force has stated—why are 100 aircraft necessary, as stipulated under this lease arrangement? The main advantage of the lease, as pointed out by the Air Force, is that it would provide aircraft earlier than purchasing the aircraft and without disrupting other budget priorities. It is not clear, however, why 100 aircraft is the right number to do this. Section 8159 authorized up to 100 aircraft to be leased for up to 10 years. The Air Force has negotiated a shorter lease period, but stayed with the full 100 aircraft to be acquired from fiscal years 2006 to 2011. The “kick-start” occurs in the early years, and by fiscal year 2008 the Air Force would have 40 new aircraft delivered. We do not know to what extent the Air Force (1) considered using the lease for some smaller number of aircraft and then (2) planned to use the intervening time to adjust its procurement budget to begin purchasing rather than leasing. Such an approach would provide a few years to conduct the Tanker Requirements Study and the analysis of alternatives that the Air Force has said it will begin soon. In the coming weeks, we will continue to look into these questions in anticipation of future hearings by the Senate Armed Services Committee and the Senate Commerce Committee. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or Members of the Committee may have. Contacts and Staff Acknowledgments For future questions about this statement, please contact me at (757) 552-8111 or Brian J. Lepore at (202) 512-4523. Individuals making key contributions to this statement include Kenneth W. Newell, Tim F. Stone, Joseph J. Faley, Steve Marrin, Kenneth Patton, Charles W. Perdue, and Susan K. Woodward. Military Aircraft: Information on Air Force Aerial Refueling Tankers. GAO-03-938T. Washington, D.C.: June 24, 2003. Air Force Aircraft: Preliminary Information on Air Force Tanker Leasing. GAO-02-724R. Washington, D.C.: May 15, 2002. U.S. Combat Air Power: Aging Refueling Aircraft Are Costly to Maintain and Operate. GAO/NSIAD-96-160. Washington, D.C.: Aug. 8, 1996.
This testimony discusses the Air Force's report on the planned lease of 100 Boeing 767 aircraft modified for aerial refueling. These aircraft would be known by a new designation, KC-767A. Section 8159 of the Department of Defense Appropriations Act for fiscal year 2002 authorizes the Air Force to lease up to 100 KC-767A aircraft. We received the report required by section 8159 when it was sent to the Congress on July 10. We subsequently received a briefing from the Air Force and some of the data needed to review the draft lease and lease versus purchase analysis. However, we were permitted to read the lease for the first time on July 18 but were not allowed to make a copy and so have not had time to fully review and analyze the terms of the draft lease. As a result, this testimony today will be based on very preliminary work. It will (1) describe the condition of the current aerial refueling fleet, (2) summarize the proposed lease as presented in the Air Force's recent report, (3) present our preliminary observations on the Air Force lease report, and (4) identify related issues that we believe deserve further scrutiny. The KC-10 aircraft are relatively young, averaging about 20 years in age. Consequently, much of the focus on modernization of the tanker fleet is centered on the KC-135s, which were built in the 1950s and 1960s, and now average about 43 years in age. While the KC-135 fleet averages more than 40 years in age, the aircraft have relatively low levels of flying hours. The Air Force projects that E and R models have lifetime flying hours limits of 36,000 and 39,000 hours, respectively. According to the Air Force, only a few KC-135s would reach these limits before 2040, but at that time some of the aircraft would be about 80 years old. Flying hours for the KC-135s averaged about 300 hours per year between 1995 and September 2001. Since then, utilization is averaging about 435 hours per year. The Air Force eventually plans to replace all 543 KC-135 aircraft over the next 30 years and considered lease and purchase alternatives to acquire the first 100 aircraft. Office of Management and Budget Circular A-94 directs a comparison of the present value of lease versus purchase before executing a lease. In its report, the Air Force estimated that purchasing would be about $150 million less than leasing on a net present value basis. The Air Force plans to award a contract to a special purpose entity created to issue bonds needed to raise sufficient capital to purchase the new aircraft from Boeing and to lease them to the Air Force. The lease will be a three-party contract between the government, Boeing, and the special purpose entity. Office of Management and Budget Circular A-11 requires that an operating lease meet certain terms and conditions including a prohibition on paying for more than 90 percent of the fair market value of the asset over the life of the lease at the time that the lease is initiated. According to the report, if the government were to terminate the lease, it must do so for all of the delivered aircraft and may terminate any planned aircraft for which construction has not begun, must give 12-months advance notification prior to termination, return the aircraft, and pay an amount equal to one year's lease payment for each aircraft terminated. 
If termination occurs before all aircraft have been delivered, the price for the remaining aircraft would be increased to include unamortized costs incurred by the contractor that would have been amortized over the terminated aircraft and a reasonable profit on those costs. At the expiration of the lease, the Air Force will return the aircraft to the special purpose entity after removing, at government expense, any Air Force unique configurations. The contractor will warrant that each aircraft will be free from defects in materials and workmanship, and the warranty will be of 36 months duration and will commence after construction of the commercial Boeing 767 aircraft, but before they have been converted into aerial refueling aircraft. Upon delivery to the Air Force, each KC-767A aircraft will carry a 6-month design warranty, 12-month material and workmanship warranty on the tanker modification, and the remainder of the original warranty on the commercial components of the aircraft, estimated to be about 2 years. Because we have only had the Air Force report for a few days, we do not have any definitive analytical results. However, we do have a number of questions and observations about the report that we believe are important for the Congress to explore in reaching a decision on the Air Force proposal.
You are an expert at summarizing long articles. Proceed to summarize the following text: moved from retrospective, cost-and-charge-based reimbursements to prospective systems and fee schedules designed to contain cost growth. The August 1997 passage of BBA dramatically changed the existing paradigm, setting Medicare on a course toward a more competitive and consumer-driven model. HCFA, the agency charged with administering the program, must accomplish this transition while continuing to oversee the processing of about 900 million claims annually. BBA contained over 350 separate Medicare and Medicaid mandates, the majority of which apply to the Medicare program. The Medicare mandates are of widely varying complexity. Some, such as the Medicare+Choice expansion of beneficiary health plan options and the implementation of PPSs for SNFs, home health agencies, and hospital outpatient services, are extraordinarily complex and have considerable budgetary and payment control implications. Others, such as updating the conversion factor for anesthesia payments, are relatively minor. Although most implementation deadlines are near term—over half had 1997 or 1998 deadlines—several are not scheduled to be implemented until 2002. Overall, BBA required HCFA to implement about 240 unique Medicare changes. Since August 1997, about three-quarters of the mandates with a July 1998 deadline have been implemented. HCFA’s recent publication of the Medicare+Choice and SNF PPS regulations are examples of the progress HCFA has made in implementing key mandates. The remaining 25 percent missed the BBA implementation deadline, including establishment of a quality-of-care medical review process for SNFs and a required study of an alternative payment system for certain hospitals. It is clear that HCFA will continue to miss implementation deadlines as it attempts to balance the resource demands generated by BBA provisions with other competing objectives. BBA-mandated changes. Finally, the need to modernize its multiple automated claims processing and other information systems, a task complicated by the Year-2000 computer challenges, is competing with other ongoing responsibilities. HCFA has proposed that the Department of Health and Human Services seek legislative relief by delaying implementation of certain BBA provisions—those requiring major computer system changes that also coincide with Year-2000 computer renovations. According to HCFA’s computer contractor, simultaneously pursuing both BBA implementation and Year-2000 system changes risks the failure of both activities and threatens HCFA’s highest priority—uninterrupted claims payments. The contractor advised HCFA to seek relief from competing requirements, which could allow the agency to focus instead on Year-2000 computer system renovations. The BBA provisions to be delayed by the computer renovations include updates to the October 1999 inpatient hospital PPS rate and the January 2000 physician fee schedule, hospital outpatient PPS limits on outpatient therapy services, and billing changes for SNFs. The appendix lists other BBA mandates that are being postponed. the new PPS rates, which cover both services previously billed by the SNF and by certain outside providers. Without this provision, it may be more difficult to adequately monitor whether bills for SNF residents are being submitted appropriately. 
BBA establishes a new Medicare+Choice program, which will significantly expand the health care options that can be marketed to Medicare beneficiaries beginning in the fall of 1998. In addition to traditional Medicare and HMOs, beneficiaries will be able to enroll in preferred provider organizations, provider-sponsored organizations, and private fee-for-service plans. Medical savings accounts will also be available to a limited number of beneficiaries under a demonstration program. The goal is a voluntary transformation of Medicare via the introduction of new plan options. Capitalizing on changes in the delivery of health care, these new options are intended to create a market in which different types of health plans compete to enroll and serve Medicare beneficiaries. Recognizing that consumer information is an essential component of a competitive market, BBA mandated a national information campaign with the objective of promoting informed plan choice. From the beneficiary’s viewpoint, information on available plans needs to be (1) accurate, (2) comparable, (3) comprehensible, and (4) readily accessible. Informed beneficiary choice will be critical since BBA phases out the beneficiary’s right to disenroll from a plan on a monthly basis and moves toward the private sector practice of annual reconsideration of plan choice. campaign” that includes comparative data on the available health plan choices. This publicity campaign will support what is to become an annual event each November—an open enrollment period in which beneficiaries may review the options and switch to a different health plan. As in the past, health plans will continue to provide beneficiaries with marketing information that includes a detailed description of covered services. In fact, HCFA comparative summaries will refer beneficiaries to health plans for more detailed information. HCFA is taking a cautious approach and testing the key components of its planned information campaign. This caution is probably warranted by the important role played by information in creating a more competitive Medicare market and by the agency’s inexperience in this type of endeavor. In March 1998, the agency introduced a database on the Internet called “Medicare Compare,” which includes summary information on health plans’ benefits and out-of-pocket costs. The toll-free telephone number will be piloted in five states—Arizona, Florida, Ohio, Oregon and Washington—and gradually phased in nationally during 1999. Because of some concerns about its readability, HCFA has also decided to pilot a new beneficiary handbook in the same five states instead of mailing it to all beneficiaries this year. The handbook, a reference tool with about 36 pages, will describe the Medicare program in detail, providing comparative information on both Medicare+Choice plans as well as the traditional fee-for-service option. For beneficiaries in all other states, HCFA will send out a five- to six-page educational pamphlet that explains the Medicare+Choice options but contains no comparative information. This schedule will allow HCFA to gather and incorporate feedback on the effectiveness of and beneficiary satisfaction with the different elements of the information campaign into its plans for the 1999 open enrollment period. comparative information on Medicare HMOs. Among other things, we recommended that HCFA produce plan comparison charts and require plans to use standard formats and terminology in benefit descriptions. 
In developing comparative information for Medicare Compare, HCFA attempted to use information submitted by health plans as part of the contracting process. Like beneficiaries, HCFA had difficulty reconciling information from different HMOs because it was not standardized across plans. HCFA’s Center for Beneficiary Services, the new unit responsible for providing information to Medicare enrollees, has been forced to recontact HMOs and clarify benefit descriptions. Recognizing that standardized contract information would reduce the administrative burden on both health plans and different HCFA offices that use the data, the agency has accelerated the schedule for requiring standard formats and language in contract benefit descriptions. Although originally targeted by 2001, the new timetable calls for contract standardization beginning with submissions due in the spring of 1999. If available on schedule, standardized contracts should facilitate the production of comparative information for the introduction of the annual open enrollment period in November 1999. that the use of nonformulary drugs may result in substantially higher out-of-pocket costs. Only five of eight Tampa plans mention mammograms in their benefit summaries—even though all plans covered mammograms. Most plans listed mammograms under the “preventive service” benefit category. One plan, however, included them under hospital outpatient services. Consistent presentation is important because beneficiaries may rely on plans’ benefit summaries when comparing coverage and out-of-pocket cost information. Federal employees and retirees can readily compare benefits among health plans in the Federal Employees Health Benefits Program because the Office of Personnel Management requires that plan brochures follow a common format and use standard terminology. It is encouraging that HCFA wants to accelerate a similar requirement for Medicare+Choice plans. In the fall of 1999, HCFA expects to require health plans to use standard formats and terminology to describe covered services in the summary-of-benefits portion of the marketing materials. Comparative data on quality and performance are a key component of the information campaign mandated by BBA and an essential underpinning of quality-based competition. Recognizing that the measurement and reporting of such comparative data is a “work in progress,” the act directed broad distribution of such information as it becomes available. Categories of information specifically mentioned by BBA include beneficiary health outcomes and satisfaction, the extent to which health plans comply with Medicare requirements, and plan disenrollment rates. While disenrollment rates could be prepared for publication in a matter of months, other types of quality-related information have accuracy or reliability problems or are still being developed. immature health plan information systems and ambiguities in the HEDIS measurement specifications. Though committed to making the HEDIS information available as quickly as possible, HCFA emphasized that its premature release would be unfair to both plans and beneficiaries. Finally, efforts have been under way for some time to develop measures that actually demonstrate the quality of the care delivered—often referred to as “outcome” measures. As noted, the current HEDIS measures look at how frequently a health plan delivers specific services, such as immunizations, not at outcomes. 
The development and dissemination of reliable health outcome measures is a much more complicated task and remains a longer-term goal. Before passage of BBA, HCFA had funded a survey to measure and report beneficiaries’ satisfaction with their HMOs. For example, Medicare enrollees were asked how easy it was to gain access to appropriate care and how well their physicians communicated with them about their health status and treatment options. HCFA plans to make the survey results available on its Medicare Compare Internet site this fall and to include the data in mailings to beneficiaries during the fall 1999 information campaign. We believe that the usefulness of HCFA’s initial satisfaction survey for identifying poor performing plans is limited because it surveyed only those individuals satisfied enough with their plan to remain enrolled for at least 12 months. HCFA is planning a survey of those who disenrolled, which could help distinguish among the potential causes of high disenrollment rates in some plans, such as quality and access issues or beneficiary dissatisfaction with the benefit package. Disenrollment rates vary widely: among Medicare HMOs in Houston, Texas, the highest disenrollment rate was nearly 56 percent, while the lowest was 8 percent. The large range in disenrollment rates among HMOs suggests that this single variable could be a powerful tool in alerting beneficiaries about potentially significant differences among plans and the need to seek additional information before making a plan choice. Questions have been raised by health plan representatives and others about the estimated cost of the information campaign. The campaign is to be financed primarily from user fees—that is, an assessment on participating health plans. We are conducting a review of HCFA’s information campaign plans at your request and that of the Senate Committee on Finance. Our work began recently, and since then HCFA has modified its plans significantly, affecting the estimated costs of different components. While we cannot yet make an overall assessment, it is clear that the operation of the toll-free number is the most expensive component and, because of a lack of prior experience, is the most difficult cost to estimate. The cost of the toll-free number comprises 44 percent of the total information campaign budget. HCFA projects fiscal year 1998 costs of $50.2 million to support set up as well as operations during fiscal year 1999. All but $4 million will come from user fees collected from existing Medicare HMOs. For fiscal year 2000, operations costs are projected to grow to $68 million. Given these costs, it is important that the toll-free number meet beneficiaries’ reasonable needs or expectations. However, until HCFA actually gains experience with the toll-free number, it has no firm basis to judge either the duration of the calls or the type of information beneficiaries will find useful. The phased implementation of the toll-free numbers should give HCFA a better idea of what beneficiaries want and may necessitate adjustments to current plans. Ultimately, the design of this and other aspects of the information campaign should be driven less by cost and more by how effective they are in meeting beneficiary needs and contributing to the intended transformation of the Medicare program. Consequently, we will be looking at (1) whether the estimated cost of the planned activities is appropriate and efficient in the near term, and (2) whether, over the longer term, the impact and effectiveness of these activities might be increased.
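The cost figures above imply a rough overall scale for the campaign, sketched below as illustrative arithmetic only. The calculation assumes that the 44 percent toll-free share and the $50.2 million fiscal year 1998 estimate refer to the same campaign budget, which the testimony does not state explicitly.

```python
# Illustrative arithmetic only; assumes the 44 percent toll-free share and the
# $50.2 million fiscal year 1998 estimate refer to the same campaign budget.
toll_free_cost_fy98 = 50.2e6   # projected FY 1998 cost (setup plus FY 1999 operations)
toll_free_share = 0.44         # toll-free number's share of the campaign budget
user_fee_portion = toll_free_cost_fy98 - 4.0e6  # all but $4 million comes from user fees

implied_total_budget = toll_free_cost_fy98 / toll_free_share
print(f"Implied total campaign budget: about ${implied_total_budget / 1e6:.0f} million")
print(f"Toll-free costs funded by HMO user fees: ${user_fee_portion / 1e6:.1f} million")
# Prints roughly: $114 million total; $46.2 million from user fees
```

The point of the arithmetic is simply that the toll-free number dominates the projected spending, which is why its cost is the hardest component to estimate and the most sensitive to how beneficiaries actually use it.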
On July 1, 1998, HCFA began phasing in a Medicare PPS for SNFs, as directed by BBA. Under the new system, facilities receive a payment for each day of care provided to a Medicare-eligible beneficiary (known as the per diem rate). This rate is based on the average daily cost of providing all Medicare-covered SNF services, as reflected in facilities’ 1995 costs. Since not all patients require the same amount of care, the per diem rate is “case-mix” adjusted to take into account the nature of each patient’s condition and expected care needs. Previously, SNFs were paid the reasonable costs they incurred in providing Medicare-allowed services. There were limits on the costs that were reimbursed for the routine portion of care, that is, general nursing, room and board, and administrative overhead. Payments for capital costs and ancillary services, such as rehabilitation therapy, however, were virtually unlimited. Cost-based reimbursement is one of the main reasons the SNF benefit has grown faster than most components of the Medicare program. Because providing more services generally triggered higher payments, facilities have had no incentive to restrict services to those necessary or to improve their efficiency. Under the new system, facilities that can care for beneficiaries for less than the case-mix adjusted payment will benefit financially. Those with costs higher than the per diem amount will be at risk for the difference between costs and payments. The PPS for hospitals is credited with controlling outlays for inpatient hospital care. Similarly, the Congressional Budget Office (CBO) estimates that over 5 years the SNF PPS could save $9.5 billion compared with what Medicare would have paid for covered services. Although HCFA met the deadline for issuing the implementing regulations for the new SNF per diem payment system, features of the system and inadequate data used to establish rates could compromise the anticipated savings. As noted in previous testimony, design choices and data reliability are key to implementing a successful payment methodology. We are concerned that the system’s design preserves the opportunity for providers to increase their compensation by supplying potentially unnecessary services. Furthermore, the per diem rates were computed using data that overstate the reasonable cost of providing care and may not appropriately reflect the differences in costs for patients with different care needs. In addition, as a part of the system, HCFA’s regulation appears to have initiated an automatic eligibility process—that is, a new means of determining eligibility for the Medicare SNF benefit—that could expand the number of beneficiaries who will be covered and the length of covered stays. The planned oversight is insufficient, increasing the potential for these aspects of the regulations to compromise expected savings. Immediate modifications to the regulations and efforts to refine the system and monitor its performance could ameliorate our concerns. The classification system underlying the per diem rates relies on expected service use, notably the amount of rehabilitation therapy (physical, occupational, or speech therapy), to assign patients to the different case-mix groups. Categorizing patients on the basis of expected service use conflicts with a major objective of a PPS—to break the direct link between providing services and receiving additional payment. A SNF has incentives to reduce the costs of the patients in each case-mix group. Because the groups are largely defined by the services the patient is to receive, a facility could do this by providing the minimum level of services that characterize patients in that group (see table 1).
This would reduce the average cost for the SNF’s patients in that case-mix group, but not lower Medicare payments for these patients. For patients needing close to the maximum amount of therapy services in a case-mix group, facilities could maximize their payments relative to their costs by adding more therapy so that the beneficiary was categorized in the next higher group. An increase in daily therapy from 140 to 144 minutes, for example, would change the case-mix category of a patient with moderate assistance needs from the “very high” to the “ultra high” group, resulting in a per diem payment that was about $60 higher. By thus manipulating the minutes of therapy provided to its rehabilitation patients, a facility could lower the costs associated with each case-mix category and increase its Medicare payments. Rather than improve efficiency and patient care, this might only raise Medicare outlays. Further refinement of the classification system may make it possible to group patients according to the care needed using methods that are less susceptible to manipulation by a SNF. Nevertheless, being able to classify patients appropriately is critical to ensuring that Medicare can control its SNF payments and that SNFs are adequately compensated for their mix of patients. We are also concerned that the data underlying the SNF rates overstate the reasonable costs of providing services and may not appropriately reflect costs for patients with different care needs. The rates to be paid SNFs are computed in two steps. First, a base rate reflecting the average per diem costs of all Medicare SNF patients is calculated from 1995 Medicare SNF cost report data. This base rate may be too high, because the reported costs are not adequately adjusted to remove unnecessary or excessive costs. Second, a set of adjustors for the 44 case-mix groups is computed using information on the costs of services used by about 4,000 patients. This sample may simply be too small to reliably estimate these adjustors. Most of the cost data used to set the SNF prospective per diem rates were not audited. At most, 10 percent of the base year—1995—cost reports underwent a focused audit in which a portion of the SNFs’ expenses were reviewed. Of particular concern are therapy costs, which are likely inflated because there have been no limits on cost-based payments. HCFA staff report that Medicare has been paying up to $300 per therapy session. These high therapy costs were incorporated in the PPS base rates. Even if additional audits were to uncover significant inappropriate costs, HCFA maintains that it has no authority to adjust the base rates after the July 1, 1998, implementation of the new payment system. The adjustors for each category of patients are based on data from two 1-day studies of the amount of nursing and therapy care received by fewer than 4,000 patients in 154 SNFs in 12 states. Almost all Medicare patients will be in 26 of the 44 case-mix groups. For about one-third of these 26 groups, the adjustors are based on fewer than 50 patients. Given the variation in treatment patterns among SNFs, such a small sample may not be adequate to estimate the average resource costs for each group. As a result, the case-mix adjusted rates may not vary appropriately to account for the services facilities are expected to provide—rates will be too high for some types of patients and too low for others.
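To make the payment mechanics concrete, the sketch below works through the two-step calculation described above: a base rate multiplied by a case-mix adjustor. The base rate and adjustor values are hypothetical, chosen only so that the gap between the rehabilitation "very high" and "ultra high" groups matches the roughly $60 per diem difference cited in the testimony.

```python
# Minimal sketch of the two-step SNF PPS rate calculation described above.
# The base rate and case-mix adjustors are hypothetical illustrations; only the
# roughly $60 per diem gap between the rehabilitation "very high" and
# "ultra high" groups is taken from the testimony.

BASE_RATE = 200.00  # hypothetical average per diem cost of all Medicare SNF patients

CASE_MIX_ADJUSTORS = {        # hypothetical adjustors for two of the 44 groups
    "rehab_very_high": 1.50,
    "rehab_ultra_high": 1.80,
}

def per_diem_payment(group: str) -> float:
    """Case-mix adjusted per diem = base rate x group adjustor."""
    return BASE_RATE * CASE_MIX_ADJUSTORS[group]

very_high = per_diem_payment("rehab_very_high")    # 300.00
ultra_high = per_diem_payment("rehab_ultra_high")  # 360.00
# Reclassifying a patient by adding a few therapy minutes per day (140 to 144
# in the testimony's example) raises the per diem by about $60.
print(f"Very high per diem:  ${very_high:.2f}")
print(f"Ultra high per diem: ${ultra_high:.2f}")
print(f"Gap: ${ultra_high - very_high:.2f}")
```

Because the group adjustor, not the services actually delivered, determines the payment, a facility that trims services within a group keeps the same revenue at lower cost, which is the incentive problem the testimony describes.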
Medicare’s SNF benefit is for enrollees who need daily skilled care on an inpatient basis following a minimum 3-day hospitalization. Before implementation of the prospective per diem system, SNFs were required to certify that each beneficiary met these criteria. With the new payment system, the method for establishing eligibility for coverage will also change. Facilities will assign each patient to one of the case-mix groups on the basis of an assessment of the patient’s condition and expected service use, and the facility will certify that each patient is appropriately classified. Beneficiaries in the top 26 of the 44 case-mix groups will automatically be deemed eligible for SNF coverage. If facilities do not continue to assess whether beneficiaries meet Medicare’s coverage criteria, “deeming” could represent a considerable new cost to the program. Some individuals who are in one of these 26 deemed categories may only require custodial or intermittent skilled care, but HCFA’s regulations appear to indicate that they could still receive Medicare coverage. Medical review nurses who work with HCFA payment contractors indicated in interviews that some patients included in the 26 groups would not necessarily need daily skilled care. This may be particularly true at a later point in the SNF stay, since SNF coverage can only begin after a 3-day hospitalization. Individuals with certain forms of paralysis or multiple sclerosis who need extensive personal assistance may also need daily skilled care immediately following a hospital stay for pneumonia, for example. After a certain period, however, their need for daily skilled care may end, but their Medicare coverage will continue because of deeming. Similarly, certain patients with minor skin ulcers will be deemed eligible for Medicare coverage, whereas previously only those with more serious ulcers believed to require daily care were covered. Thus, more people could be eligible and Medicare could be responsible for longer stays unless HCFA is clear that Medicare coverage criteria have not been changed. Deeming eligibility would not be a problem if all patients in a case-mix group met Medicare’s coverage criteria. To redefine the patient groups in this way would require additional research and analysis. However, an immediate improvement would be for HCFA to clarify that Medicare will only pay for those patients that the facility certifies meet Medicare SNF coverage criteria. Whether a SNF patient is eligible for Medicare coverage and how much will be paid are based on a facility’s assessment of its patients. Yet, HCFA has no plans to monitor those assessments to ensure they are appropriate and accurate. In contrast, when Texas implemented a similar reimbursement system for Medicaid, the state instituted on-site reviews to monitor the accuracy of patient assessments and to determine the need for training assessors. In 1989, the first year of its system’s operation, Texas found widespread over-assessment. Through continued on-site monitoring, the error rate has dropped from about 40 percent, but it still remains at about 20 percent. The current plans for collecting patient assessment information actually discourage rather than facilitate oversight. A SNF will transmit assessment data on all its patients, not just those eligible for Medicare coverage, to a state agency that will subsequently send copies to HCFA. However, the claim identifying the patient’s category for Medicare payment is sent to the HCFA claims contractor that pays the bill.
At the time it is processing the bill, the claims contractor will not have access to data that would allow confirmation that the patient’s classification matches the assessment. To some extent, the implementation of the SNF prospective per diem system reduces the opportunities for fraud in the form of duplicate billings or billing for services not provided. Since a SNF is paid a fixed per diem rate for most services, it would be fraudulent to bill separately for services included in the SNF per diem. Yet, the new system opens opportunities to mischaracterize patients or to assign them to an inappropriate case-mix category. Also, as was the case with the former system, methods to ensure that beneficiaries actually receive required services could be strengthened. As with the implementation of any major payment policy change, HCFA should increase its vigilance to ensure that fraudulent practices discovered in nursing homes, similar to problems noted in our prior work, do not resurface. Given the size of the BBA workload alone, implementation delays were probably inevitable. And now, HCFA has been advised by its contractor that its highest priority—uninterrupted claims processing through the timely completion of Year-2000 computer renovations—may be jeopardized by some BBA mandates that also require computer system changes. Though HCFA is implementing what will become an annual information campaign associated with Medicare+Choice, it has little experience in planning and coordinating such an undertaking. The ability of the campaign to provide accurate, comparable, comprehensive, and readily accessible information will help to determine the success of the hoped-for voluntary movement of Medicare beneficiaries into less costly, more efficient health care delivery systems. While BBA computer system-related delays may jeopardize some anticipated program savings, slower Medicare expenditure growth is also at risk because of weaknesses in the implementation of other mandates. HCFA could take short-term steps to correct deficiencies in the new SNF PPS. However, longer-term research is needed to implement a payment system that fully realizes the almost $10 billion in savings projected by CBO. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or Members of the Subcommittee may have. (Appendix table, excerpt: BBA mandates include collection of non-inpatient encounter data from plans; SHMO: plan for integration of part C and SHMO; Medicare subvention: project for military retirees; reporting and verification of provider identification numbers (employer identification numbers and Social Security numbers); maintaining savings from temporary reductions in capital payments for PPS hospitals; SNF consolidated billing for part B services; payment update for hospice services; update to conversion factor 1/1/99; implementation of resource-based practice expense RVUs; implementation of resource-based malpractice RVUs; prospective payment fee schedule for ambulance services; and application of $1,500 annual limit to outpatient rehabilitation therapy services.)
GAO discussed the Health Care Financing Administration's (HCFA) implementation of Medicare provisions contained in the Balanced Budget Act of 1997 (BBA), focusing on: (1) an overview of how HCFA's implementation has progressed since GAO's earlier testimony; (2) the efforts to inform Medicare beneficiaries about the expanded health plan choices available to them in 1999, commonly referred to as the information campaign; and (3) the prospective payment system (PPS) for skilled nursing facilities (SNF), which began a 3-year phase-in in July 1998. GAO noted that: (1) HCFA is making progress in meeting the legislatively established implementation schedules; (2) since the passage of BBA in August 1997, almost three-fourths of the mandates with a July 1998 deadline have been implemented; (3) however, HCFA officials have acknowledged that many remaining BBA mandates will not be implemented on time; (4) HCFA maintains that these delays will have a minimal impact on anticipated Medicare program savings; (5) given the concurrent competition for limited resources and the differing importance and complexity of the many BBA mandates, the success or failure of HCFA's implementation efforts should not be judged solely on meeting deadlines; (6) rather, any assessment should consider whether the agency is meeting congressional objectives while taking a reasoned management approach to identifying critical BBA tasks, keeping them on track, and integrating them with other agency priorities; (7) complying with the BBA mandate to conduct an information campaign that provides beneficiaries with the tools to make informed health plan choices poses significant challenges for HCFA and participating health plans; (8) in implementing the Medicare+Choice program, HCFA must now assemble the necessary comparative information about these options and find an effective means to disseminate it to beneficiaries; (9) a parallel goal of the information campaign is to give beneficiaries information about the quality and performance of participating health plans to promote quality-based competition among plans; (10) HCFA has accelerated its goals for obtaining standardized information from plans, and GAO believes health plan disenrollment rates provide an acceptable short-term substitute measure of plan performance; (11) the campaign is to be financed primarily from user fees; (12) HCFA has met the July 1, 1998, implementation date for phasing in a new payment system for SNFs; (13) GAO is concerned, however, that payment system design flaws and inadequate underlying data used to establish payment rates may compromise the system's ability to meet the twin objectives of slowing spending growth while promoting the delivery of appropriate beneficiary care; (14) in the short term, the new payment system could be improved if HCFA clearly stated that SNFs are responsible for ensuring that the claims they submit are for beneficiaries who meet Medicare coverage criteria; and (15) in the longer term, further research to improve the patient grouping methodology and new methods to monitor the accuracy of patient assessments could substantially improve the performance of the new payment system.
You are an expert at summarizing long articles. Proceed to summarize the following text: Meth is relatively easy and cheap to make today by individuals with little knowledge of chemistry or laboratory skills or equipment. PSE, an ingredient used in OTC and prescription cold and allergy medications, is the key substance needed to make the dextrorotatory methamphetamine (d-meth) illicitly produced in most domestic meth labs today. The difference between a PSE molecule and a d-meth molecule is a single oxygen atom. Meth cooks make d-meth by using common household products to remove this oxygen atom, as shown in figure 1. Meth cooks have used two primary processes known as the Nazi/Birch and Red P methods to make d-meth. In recent years, meth cooks have developed a variation of the Nazi/Birch method known as the One Pot or Shake and Bake method that produces meth in one step where ingredients are mixed together in a container such as a 2-liter plastic bottle. Another process for making meth is known as the P-2-P method, which produces a less potent form of meth known as racemic or dl-meth, which is half as potent as the d-meth made with PSE. Initial federal efforts to address a growing meth lab and abuse problem primarily focused on increasing meth-trafficking penalties and regulating the bulk importation, exportation, and distribution of meth precursor chemicals such as PSE. In 2004, Oklahoma was the first state to pass a law to control the retail sale of PSE products by requiring customers to present photo IDs and pharmacists to keep the product behind the counter and log all sales. By November 2005, over 30 other states had passed laws related to the control of the retail sale of PSE products. In 2006, the CMEA was enacted, which included measures designed to control the availability of meth precursor chemicals such as PSE by regulating the retail sale of OTC products containing these chemicals. The CMEA placed restrictions on the sale of these products, including (1) requiring these products to be kept behind the counter or in a locked cabinet where customers do not have direct access; (2) setting a daily sales limit of 3.6 grams and a monthly purchase limit of 9 grams per customer regardless of the number of transactions; and (3) requiring sellers to maintain a logbook, written or electronic, to record sales of these products and verify the identity of purchasers. The CMEA does not prohibit states from taking actions to establish stricter sales limits or further regulate the sale of PSE products. Since the passage of the CMEA, some states have implemented electronic systems to track sales of products containing PSE. Through these systems, retailers report sales of PSE products to a centralized database that can be used to determine whether individuals are exceeding the purchase limitations of the CMEA or state laws. Reported information typically includes the date and grams purchased, as well as the name, address, and other identifying information of the purchaser. Most tracking systems are stop sale systems that would query the database, notify the retailer whether the pending sale would violate federal or state purchase limitations, and deny sales where limits have already been reached. As of December 2012, 19 states were using stop sale tracking systems. Seventeen of these states were using a system called the NPLEx that is endorsed and funded by PSE manufacturers through CHPA. Two states were using systems developed in-house or by another vendor.
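The stop sale behavior described above reduces to a simple rule over the purchaser's logged history. The sketch below is a simplified illustration rather than the NPLEx implementation; it applies only the federal CMEA limits of 3.6 grams per day and 9 grams per 30-day period, and the purchase log structure and field names are assumptions.

```python
from datetime import date, timedelta

# Simplified stop-sale check applying the CMEA limits described above
# (3.6 g of PSE per day, 9 g per 30-day period). Not the NPLEx implementation;
# the purchase log below is an assumed structure for illustration.

DAILY_LIMIT_G = 3.6
ROLLING_30_DAY_LIMIT_G = 9.0

# Previously logged purchases for one purchaser ID: (date, grams).
purchase_log = [
    (date(2012, 7, 2), 2.4),
    (date(2012, 7, 20), 3.6),
    (date(2012, 7, 28), 2.4),
]

def approve_sale(log, sale_date, grams):
    """Return True only if the pending sale stays within both limits."""
    same_day = sum(g for d, g in log if d == sale_date)
    last_30 = sum(g for d, g in log if sale_date - timedelta(days=30) < d <= sale_date)
    return same_day + grams <= DAILY_LIMIT_G and last_30 + grams <= ROLLING_30_DAY_LIMIT_G

# A 2.4 g box on July 30 would bring the 30-day total to about 10.8 g, so it is denied.
print(approve_sale(purchase_log, date(2012, 7, 30), 2.4))  # False
```

In states with stop sale systems, a check of this kind runs at the point of sale against purchases logged by all reporting retailers, which is what allows a pending purchase at one store to be denied because of purchases made elsewhere.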
Some states and localities have taken additional steps to regulate PSE sales. Oregon, Mississippi, and 63 Missouri cities or counties have passed laws or ordinances requiring individuals to obtain a prescription from a health care provider in order to purchase PSE products. While a prescription is required, an in-person encounter with a health care provider may not be necessary to obtain the prescription. There is no set limit to how much PSE can be prescribed. Both Oregon and Mississippi require that prescriptions for PSE products be entered into the states’ prescription drug monitoring program, a program that allows for pharmacists and prescribers to electronically look up how much PSE product has been prescribed to a patient. Figure 2 shows the states with prescription-only laws and ordinances and electronic tracking systems, including the dates these systems were implemented. According to DEA data on meth lab incidents, after peaking in 2004, the number of lab incidents nationwide declined through 2007 after the implementation of state and federal regulations on PSE product sales. As shown in figure 3, the number of lab incidents peaked in 2004, with states reporting over 24,000 lab incidents nationally. However, beginning in 2005, the number of incidents began to decline sharply and reached a low of about 7,000 incidents in 2007. While there may be multiple factors at work that resulted in this decline such as region-specific factors, federal, state, and local law enforcement officials attribute the primary cause of the decline to the restrictions on purchases of PSE products imposed at both the federal and state levels from 2004 through 2006. The impact of these restrictions was to reduce the accessibility of PSE for use in illicit meth labs, which in turn resulted in fewer labs during this period. After reaching a low in 2007, the number of meth lab incidents reported nationally increased over the next few years. National trends show that meth lab incidents have increased since 2007, reaching more than 15,000 at the end of 2010, more than double the number of reported incidents for 2007. Federal, state, and local law enforcement officials attribute this rising trend primarily to two factors: The emergence of a new technique for smaller-scale production. A production method popularly called the One Pot method, which simplified the entire meth production process down to a single 2-liter plastic bottle and enhanced the ability of individuals to make their own meth, began to emerge in 2007. With this method, meth addicts are capable of manufacturing their own meth quicker and with less PSE, chemicals, and equipment than required by traditional meth-manufacturing methods, although this method also produces less meth than the traditional manufacturing methods. According to DEA data, more than 87 percent (43,726) of the labs seized with a capacity reported from 2008 through 2011 have been smaller capacity (less than 2 ounce) labs and about 74 percent (39,049) used the Nazi/Birch manufacturing process, of which the One Pot method is a variation. Less than 0.5 percent (219) of the labs seized during this period were super labs (labs producing 10 pounds or more of meth per batch), less than 13 percent (6,473) used the Red P method, and only 0.05 percent (26) of the labs seized during this period used the P-2-P method, which does not require PSE as a precursor chemical. Use of a method for meth producers to circumvent PSE sales restrictions.
Another key factor federal, state, and local officials attribute to the increase in meth labs in recent years is the use of a method known as smurfing to work around PSE sales restrictions. Smurfing—which is discussed in greater detail later in this report—essentially involves the coordinated effort by individuals or groups of individuals to purchase the maximum per person legal allowable amount of PSE products and then aggregate their purchases for the use in meth production or for sale to a meth producer. Federal, state, and local officials stated that consequently, using this technique, meth producers have been able to obtain the PSE product they need to make meth despite the federal and state sales restrictions. This, in turn, has led to the proliferation of more labs. Further examination of data trends at the regional level reveals that the number of meth lab incidents varies greatly among regions of the country. Specifically, while the number of meth lab incidents continues to be low in the Northeast and declines in the number of meth lab incidents have been maintained in the West since PSE sales restrictions went into place, the South and Midwest regions have experienced significant increases overall in the number of incidents since 2007. Further, the South and Midwest have also had more lab incidents than the West and Northeast since 2003 (see fig. 4). In general, these trends are consistent across all categories of lab types and capacities, except for incidents involving the P-2-P labs and labs of larger capacities (10 pounds or greater), for which the West tended to report higher numbers of incidents overall. Figure 5 shows lab incidents by state for the last decade (see app. II for this information by state). Meth labs can have a significant impact on a community’s health care system when labs catch on fire or explode, causing serious injuries and burns to meth cooks and other individuals that require costly medical treatment. Mixing chemicals in meth labs creates substantial risks of explosions, fires, chemical burns, and toxic fume inhalation. Burns and related injuries resulting from these events can be more serious than burns and injuries sustained through non-meth-lab-related causes. For example, a 2008 study conducted of meth and non-meth burn patients that received treatment in one hospital burn unit in Kalamazoo, Michigan, from 2001 through 2005, found that meth lab patients tended to have more frequent inhalation injuries, needed greater initial fluid resuscitation volume, required intubation more frequently, and were more likely to have complications than non-meth patients. The small size of the relatively new One Pot or Shake and Bake method can make it even more dangerous than larger meth labs, as drugmakers typically hold the One Pot container up close, increasing the risk for severe burns from the waist to the face. According to the director of the Vanderbilt University Regional Burn Center in Tennessee, meth lab injuries can also be more severe than burns resulting from just fires alone because patients often suffer thermal burns from the explosion, as well as chemical burns from exposure to caustic chemicals.
He also noted that meth lab burn patients tend to be more difficult to treat because their addiction and overall poor physical health make it difficult for them to facilitate their own recovery as well as the fact that most attempt to hide the cause of their injury, which can hinder the administration of proper care. The treatment for meth lab-related burns and injuries can be very expensive. According to one provider, treatment costs for two meth lab burn patients exceeded $2 million per patient. Although accurate estimates of the proportion of burn victims that received their burns from a meth lab are difficult, one estimate placed the percentage of meth lab burn patients at 25 to 35 percent of total burn patients. Of those patients that are identified as receiving their injuries from meth labs, many are found either not to have health insurance or have publicly funded insurance such as Medicaid. For example, the 2008 Kalamazoo study also found that significantly fewer meth burn patients had private insurance, while more were on Medicaid or had no insurance as compared with non-meth burn patients. Children are also harmed by meth labs; as part of reporting a lab seizure to the DEA’s NSS, law enforcement is required to report on the number of children affected by the lab, such as those living at the site as well as those that might have visited the site. Exposure to the toxic chemicals used to make meth can damage the brain, liver, kidney, spleen, and immunologic system and result in birth defects. In addition to the physical dangers, children in environments where meth is being made are also reported to be at risk to suffer abuse or neglect by their parents or other adults. Parents and caregivers who are meth dependent can become careless and often lose their capacity to take care of their children, such as ensuring their children’s safety and providing essential food, dental and medical care, and appropriate sleeping conditions. Children living in households where meth labs are operated are also at increased risk for being physically and sexually abused by members of their own family or other individuals at the site. To protect the children discovered at meth lab sites from further harm and neglect, social service agencies remove the children from their homes and place them in foster care. Foster care is a social welfare service that serves the needs of abused and neglected children. Child welfare workers can remove a child if it is determined that remaining with the parents will jeopardize a child’s welfare. Children are placed either with a surrogate foster family or in a residential treatment facility called a group home with the intent to provide temporary housing in a safe and stable environment until reunification with the child’s birth parents or legal guardians is possible. Reunification happens once the state is convinced that the harmful factors that triggered removal no longer exist. Several states and jurisdictions have created special protocols and programs to address the needs of children exposed to clandestine meth labs. These protocols and programs typically involve medical screening of the children for toxicity and malnourishment, emergency and long-term foster care, and psychological treatment. Social service agencies may also seek to enroll meth-involved parents and their children in a family-based treatment program, where both the parents and children receive services.
Family-based treatment programs offer treatment for adults with substance use disorders and support services for their dependent children in a supervised, safe environment that allows the family to remain together and prevents exposure to further harm. The costs to state department of human service agencies to provide services to these children can be significant depending on the number, age, and specific needs of the child. For example, from January 2006, through December 2011, the Missouri Department of Social Services substantiated 702 reports of children exposed to meth labs, involving a total of 1,279 children. Of those 1,279 children, 653 required placement in departmental custody. The total cost of providing custodial care to children exposed to meth labs in Missouri since August 2005, was approximately $3.4 million according to the department. In one Missouri county, so many children were being removed from meth lab homes and placed in state custody that there are now no longer any foster families available to care for them. Similarly, according to the Tennessee Department of Children’s Services, 1,625 children were removed from meth lab homes from January 2007 through December 2011 and placed in foster care at a cost of approximately $70.1 million. The raw materials and waste of the meth labs pose environmental dangers because they are often disposed of indiscriminately by lab operators to avoid detection, and can also cause residual contamination of exposed surfaces of buildings and vehicles where the meth was being made. According to DEA, for every pound of meth produced, 5 to 6 pounds of toxic waste are produced. Common practices by meth lab operators include dumping this waste into bathtubs, sinks, or toilets, or outside on surrounding grounds or along roads and creeks. Some may place the waste in household or commercial trash or store it on the property. In addition to dumped waste, toxic vapors from the chemicals used and the meth-making process can permeate walls and ceilings of a home or building or the interior of a vehicle, potentially exposing unsuspecting occupants. As a result, the labs potentially end up contaminating the interiors of dwellings and vehicles as well as water sources and soil around the lab site for years if not treated. Because of the dangerous chemicals used in making meth, cleaning up clandestine methamphetamine labs is a complex and costly undertaking. According to regulations promulgated for the Resource Conservation and Recovery Act by the Environmental Protection Agency, the generator of hazardous waste is the person who produced or first caused the waste to be subject to regulation. The act of seizing a meth lab causes any chemicals to be subject to regulation and thus makes law enforcement the “generator” of the waste when seizing a lab. Accordingly, seizing a lab makes a law enforcement agency responsible for cleaning up the hazardous materials and the costs associated with the cleanup. The materials seized at a clandestine drug laboratory site become waste when law enforcement officials make the determination of what to keep as evidence. Those items not required as evidence are considered hazardous waste and must be disposed of safely and appropriately. The task of removal and disposal of the hazardous waste is usually left to contractors who have specialized training and equipment to remove the waste from the lab site and transport it to an EPA-regulated disposal facility. 
Depending on the size of the lab, the cost for such a service to respond to an average lab incident can range from $2,500 to $10,000, or up to as much as $150,000 to clean up super labs, according to DOJ. To help state and local agencies with the expense of lab cleanup, DEA established a lab cleanup program where DEA contracts with vendors and pays them to conduct the cleanup on behalf of the law enforcement agency seizing the lab. In fiscal year 1998, DEA began funding cleanups of clandestine drug labs that were seized by state and local law enforcement agencies, focusing on the removal and disposal of the chemicals, contaminated apparatus, and other equipment. State and local law enforcement agencies seeking to utilize this service contact the DEA to coordinate the cleanup effort. According to DEA program officials, DEA has spent over $142 million on these cleanups nationwide since calendar year 2002. See figure 6. Given that labs can be placed in a wide range of locations, such as apartments, motel rooms, homes, or even cars, there is also the potential need for further remediation of these areas beyond the initial cleanup of hazardous waste if they are to be safely used or occupied again. Whereas cleanup involves the removal of large-scale contaminants, such as equipment and large quantities of chemicals for the purpose of securing evidence for criminal investigations and reducing imminent hazards such as explosions or fires, remediation involves removing residual contaminants in carpeting or walls, for example, to eliminate the long-term hazards posed by residual chemicals. Procedures for remediation of a property or structure usually involve activities such as the removal of contaminated items that cannot be cleaned, such as carpeting and wallboard; ventilation; chemical neutralization of residues; washing with appropriate cleaning agents; and encapsulation or sealing of contaminants, among other activities. Depending on the extent of the contamination, the cost to remediate a property can be substantial. Extremely contaminated structures may require demolition. However, unlike the funding that is available for initial lab cleanup from DEA, there are no federal funds available for remediation, leaving the owner of a contaminated property responsible for the costs of any remediation to be done. Because of their toxic nature, meth labs pose a serious physical danger to law enforcement officers who come across or respond to them, and therefore must be handled using special protective equipment and training that are costly to law enforcement agencies. The process of cooking meth, which can result in eye and respiratory irritations, explosions and fires, toxic waste products, and contaminated surroundings, can be dangerous not only to the meth cook but also to persons who respond to or come across a lab, such as law enforcement officers. Because of the physical dangers posed by the labs, the Occupational Safety and Health Administration has established requirements for persons, including law enforcement, entering a clandestine lab. These requirements include training on hazardous waste operations, annual physical exams to monitor the ongoing medical condition of individuals involved in handling meth lab sites, and guidelines for protective equipment to be used when working in a lab.
Consequently, whether the lab is raided by investigators or encountered by accident during the course of an investigation, first responders and police agencies are required to provide their personnel specialized training and equipment, such as hermetically sealed hazmat suits, to safely process a lab. Processing a lab also typically requires additional officers suited up outside the lab as a backup team in case something happens with the lab and they need to respond, and at least one other officer on-site to provide security while the lab is being processed for evidence and cleanup. According to one estimate provided by a law enforcement agency in Indiana, the cost to the agency of the officers’ time as well as the protective equipment and processing supplies required to respond to a lab can exceed $2,000 per lab. Given these costs, law enforcement officials from all case study states agreed that responding to meth labs can be a significant financial burden on their agencies. For example, in fiscal year 2010, the Tennessee Meth Task Force spent $3.1 million providing equipment and training to law enforcement personnel and responding to meth lab incidents. Further, unlike large multinational drug-trafficking organizations, meth lab operators are usually lower income and producing meth for personal use; thus operators usually have little in the way of valuable assets or cash that law enforcement agencies can seize as a way of recouping the lab seizure response costs. Electronic tracking systems can help prevent individuals from purchasing more PSE product than allowed by law. By electronically automating and linking logbook information on PSE sales and monitoring sales in real time, stop sale electronic tracking systems can block individuals attempting to purchase more than the daily or monthly PSE limits allowed by federal or state laws. All sales in states using the NPLEx system are linked; thus the system can also be used to block individuals who attempt to purchase more than the allowable amount of PSE in any state using the NPLEx system. According to data provided by the vendor that provides the NPLEx software platform, in 2011, the system was used to block the sale of more than 480,000 boxes and 1,142,000 grams of PSE products in 11 states. Similarly, as of July 31, 2012, the system was used to block the sale of more than 576,000 boxes and 1,412,000 grams of PSE products in the 17 states using the system in 2012. See table 1. By automating the logbook requirement set forth by the CMEA, electronic tracking systems can make PSE sales information more accessible to law enforcement to help it investigate potential PSE diversion, find meth labs, and prosecute individuals for meth-related crimes. Law enforcement officials we spoke with in all four case study states that use electronic tracking systems reported using the systems for one or more of these purposes. For example, officers from a Tennessee narcotics task force told us how they use the NPLEx system to help identify the diversion of PSE for meth production. According to these officers, the NPLEx system provides them with both real-time and on-demand access to pharmacy logs via a website and includes automated tools that enable them to monitor suspicious buying patterns or specific individuals. In one particular case, the task force used NPLEx’s monitoring tools to place a watch on a specific individual previously identified as being involved in illegal meth activity.
When the individual subsequently purchased PSE, the task force received a notification e-mail of the purchase and upon further investigation was able to determine that the individual had sold the PSE to a Mississippi meth cook. Some law enforcement officials in our four case study states reported that they do not actively use the electronic tracking systems for investigations but rather rely on other sources such as informants, meth hotlines, citizen complaints, and routine traffic stops to identify potential diversion and meth labs. Nevertheless, these officials acknowledged using these systems to obtain evidence needed to prosecute meth-related crimes after meth labs have been found. For example, a law enforcement official in Iowa noted that after officials have identified a suspected lab operator or smurfer, they can use the data in NPLEx to help build their case for prosecution or sentencing by using the records to estimate the amount of PSE that was potentially diverted for meth production. They can also determine for which retailers they need to obtain video evidence to confirm the identity of the individual making the purchase. Law enforcement officials in Indiana and Tennessee, two states that recently moved from lead-generating systems to the NPLEx stop sale system, reported some challenges with NPLEx as a diversion investigation tool. Prior to the implementation of NPLEx, law enforcement was able to use the lead-generating systems in place to identify individuals who exceeded purchase limits and then take enforcement action or obtain a search warrant based upon the criminal offense. However, according to these officials, given that NPLEx blocks individuals from exceeding purchase limits, individuals involved in diversion are no longer as readily identifiable as persons of interest and it now takes longer and is more labor intensive to investigate potential PSE diversions, as they no longer have arrest warrants as a tool to get into a residence suspected of having a meth lab. While electronic tracking systems such as NPLEx are designed to prevent individuals from purchasing more PSE than allowed by law, meth cooks have been able to limit the effectiveness of such systems as a means to reduce diversion through the practice of smurfing. Smurfing is a technique meth cooks use to obtain large quantities of PSE by recruiting individuals or groups of individuals to purchase the legal allowable amount of PSE products at multiple stores, and then aggregate for meth production. By spreading out PSE sales among individuals, smurfing circumvents the preventive blocking of stop sale tracking systems. Meth lab incidents in states that have implemented electronic tracking systems have not declined, in part because of smurfing. For example, meth lab incidents in the three states—Oklahoma, Kentucky, and Tennessee—that have been using electronic tracking systems for the longest period of time are at their highest levels since the implementation of state and federal PSE sales restrictions. While these states experienced initial declines in meth lab incidents immediately following the state and federal PSE sales restrictions put in place from 2004 through 2006, lab incidents have continued to rise since 2007, likely in part because of the emergence of smurfing and the use of the One Pot method for production (see table 2).
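In contrast to point-of-sale blocking, the lead-generating use of logbook data described above amounts to a batch review of past sales: flagging purchasers whose combined purchases exceeded a limit and totaling the grams attributable to a suspect, as in the Iowa example. The sketch below is illustrative only; the record layout and the thresholds applied are assumptions, not a description of any deployed system.

```python
from collections import defaultdict

# Illustrative lead-generating review of exported PSE sales records, in the
# spirit of the pre-NPLEx systems and the prosecution use described above.
# The (purchaser_id, year_month, grams) layout is an assumption.

MONTHLY_LIMIT_G = 9.0

sales_records = [
    ("ID-1001", "2012-06", 3.6), ("ID-1001", "2012-06", 3.6), ("ID-1001", "2012-06", 3.6),
    ("ID-2002", "2012-06", 2.4), ("ID-2002", "2012-06", 3.6),
]

# Total grams per purchaser per month, combined across reporting retailers.
monthly_totals = defaultdict(float)
for purchaser, month, grams in sales_records:
    monthly_totals[(purchaser, month)] += grams

# Leads: purchasers whose combined monthly purchases exceeded the 9 g limit.
leads = {key: round(total, 1) for key, total in monthly_totals.items()
         if total > MONTHLY_LIMIT_G}
print(leads)  # {('ID-1001', '2012-06'): 10.8}

# Grams attributable to one suspect over the period, e.g. to estimate diversion.
suspect_total = sum(g for p, _, g in sales_records if p == "ID-1001")
print(f"ID-1001 total: {suspect_total:.1f} g")  # ID-1001 total: 10.8 g
```

A smurfing group defeats both the blocking and the flagging approaches by splitting purchases across many identities or fraudulent IDs, which is why officials describe turning back to surveillance and eyewitness identification.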
Law enforcement officials from every region of the country report that the PSE used for meth production in their areas can be sourced to local and regional smurfing operations. The methods, size, and sophistication of these operations can vary considerably—from meth users recruiting family members or friends to purchase PSE for their own individual labs to larger-scale operations where groups purchase and sell large quantities of PSE to brokers for substantial profits, who in turn often sell the PSE to Mexican drug-trafficking organizations operating super labs in California. Individuals recruited for smurfing have included the elderly, homeless, college students, the mentally handicapped, and inner city gang members, among others. The use of fake identification by smurfs is an area of growing concern for law enforcement. Smurfs can use several different false IDs to purchase PSE above the legal limit without being detected or blocked by a tracking system. For example, in 2012, through a routine traffic stop, state and local law enforcement officials in Tennessee identified a smurfing ring where a group of at least eight individuals had used more than 70 false IDs over a 9-month period to obtain over 664 grams of PSE. All of the IDs had been used to purchase the maximum amount of PSE allowed, with only one transaction (2.4 grams of PSE) blocked by the electronic tracking system. Law enforcement officials from the four electronic tracking case study states emphasized that investigating smurfing rings can be very time and resource intensive because of the large number of persons involved and the potential use of fraudulent identifications. The use of fake IDs for smurfing can also affect the use of electronic tracking systems as tools to assist in the prosecution of meth-related crimes. According to the National Methamphetamine & Pharmaceuticals Initiative (NMPI) advisory board, smurfers are increasingly utilizing fake identification and “corrupting” electronic tracking databases to the point where prosecutors prefer eyewitness accounts and investigation (law enforcement surveillance) of violations before filing charges or authorizing arrests or search warrants. This results in costly, manpower-intensive investigations. In summary, based on the experience of states that have implemented electronic tracking, the approach has not reduced meth lab incidents overall, but it has had general impacts as well as potential limitations, including the following: Under the current arrangement with CHPA, the operating expenses of NPLEx are paid for by PSE manufacturers and provided to the states at no cost. Automating the purchase logbooks required by the CMEA and making the logbook information available in an electronic format to law enforcement is reported to be a significant improvement over paper logs that have to be manually collected and reviewed. This record-keeping ability is reported to have also been useful in developing and prosecuting cases against individuals who have diverted PSE for meth production. Electronic tracking maintains the current availability of PSE as an OTC product under limits already in place through the CMEA and related state laws. The NPLEx system helps to block attempts by a consumer using a single identification to purchase PSE products in amounts that exceed the legal limit, and can prevent excessive purchases made at one or more locations.
Although PSE manufacturers currently pay for the NPLEx system, depending on the circumstances, their financial support may not necessarily be sustained in the future. Although electronic tracking can be used to block sales of more than the legal amount to an individual using a given identification, through the practice of smurfing, individuals can undermine this feature and PSE sales limits by recruiting others to purchase on their behalf or by fraudulently using another identification to make PSE purchases. According to some law enforcement officials, the stop sale approach of the NPLEx system makes it more challenging to use the system as an investigative tool than a lead-generating system because it prevents individuals from exceeding purchase limits, which would otherwise make them more readily identifiable to law enforcement as persons of interest. The practice by smurfers of using fraudulent identification to purchase PSE products has been reported to diminish the ability of electronic tracking systems to assist in the prosecution of meth related crimes. According to some law enforcement officials, the rising use of fraudulent identifications has also increased the need to gather eyewitness accounts or conduct visual surveillance to confirm the identities of the individuals, a development that in turn has been reported to lead to more time- and resource-intensive investigations. The number of reported meth lab incidents in both Oregon and Mississippi declined following the adoption by those states of the prescription-only approach for PSE product sales (see fig. 7). In the case of Oregon, the number of reported meth lab incidents had already declined by nearly 63 percent by 2005 from their 2004 peak of over 600 labs. After the movement of PSE products to behind-the-counter status in Oregon in 2005 and implementation of the CMEA and state-imposed prescription-only approach in 2006, the number of reported meth lab incidents in Oregon continued to decline in subsequent years. In Mississippi, after the adoption of the prescription-only approach in 2010, the number of reported meth lab incidents subsequently declined from their peak by 66 percent to approximately 321 labs in 2011. See fig.7 below. The communities in Missouri that have adopted local prescription-only requirements also experienced a decline in the number of meth labs. For example, while lab incidents statewide in Missouri increased nearly 7 percent from 2010 to 2011, the area in southeastern Missouri where most of the communities have adopted prescription-only ordinances saw lab incidents decrease by nearly half. Even as declines were observed in Oregon and Mississippi after implementing the prescription-only approach, declines were also observed in neighboring states that did not implement the approach, possibly because of other regional or reporting factors. For example, all states bordering Oregon also experienced significant declines in meth labs from 2005 through 2011, ranging from a 76 percent decline for California to a 94 percent decline for Washington state. In Mississippi’s case, except for Tennessee, all bordering states also experienced declines in lab incidents from 2009 through 2011, ranging from a 54 percent decrease in Arkansas to a decline of 57 percent in Louisiana. Consequently, there may be some other factors that contributed to the lab incident declines across all these states regardless of the approach chosen. 
One potential factor for the declines observed from 2010 through 2011 is the exhaustion of DEA funds to clean up labs. According to DEA officials, as the funds provide an incentive to state and local agencies to report meth lab incidents to DEA, the lack of funds from February 2011 to October 2011 may have resulted in fewer lab incidents being reported during this time period. Other potential factors within the states may have also contributed to declines in the number of lab incidents in neighboring states. For example, Arkansas law enforcement officials reported that in 2011, a change in state law took effect that made it illegal to dispense PSE products without a prescription, unless the person purchasing the product provided a driver’s license or identification card issued by the state of Arkansas, or an identification card issued by the United States Department of Defense to active military personnel. In addition, Arkansas law requires that a pharmacist make a professional determination as to whether or not there is a legitimate medical and pharmaceutical need before dispensing a nonexempt PSE product without a valid prescription. As a result of these additional requirements, retailers such as Walmart decided to no longer sell PSE products OTC in Arkansas and instead require a prescription. According to state and local law enforcement officials in Oregon and Mississippi, the prescription-only approach contributed to the reduction of reported meth lab incidents within those states. For example, according to the executive director of the Oregon Criminal Justice Commission and the directors of the Mississippi Bureau of Narcotics and the Gulf Coast HIDTA, the decline in meth lab incidents in their states can be largely attributed to the implementation of the prescription-only approach. Although their perspectives cannot be generalized across the broader population of local law enforcement agencies, law enforcement officials of other agencies we met with in Oregon and Mississippi also credited the reduction in meth lab incidents to the implementation of the prescription- only approach. To determine the extent to which the declines in lab incidents in Oregon were due to the prescription-only approach rather than other variables such as regional or reporting factors, we conducted statistical modeling analysis of lab incident data, the results of which indicate a strong association between the prescription-only approach and a decline in meth lab incidents. Specifically, our analysis showed a statistically significant associated decrease in the number of lab incidents in Oregon following introduction of the law, with the lab incident rate falling by over 90 percent after adjusting for other factors. With the decline in meth lab incidents, officials in the prescription-only states reported observing related declines in the demand and utilization for law enforcement, child welfare, and environmental cleanup services that are needed to respond to meth labs: Law enforcement: Local law enforcement officials in Oregon and Mississippi reported that the reduction in meth lab incidents has reduced the resource and workload demands for their departments to respond to and investigate meth labs. 
For example, one chief of a municipal police department in Oregon reported that the decline in meth labs has resulted in reduced costs to his department, largely in the form of lower manpower, training, and equipment expenses, and noted that lab seizures are now so rare that his department no longer maintains a specialized team of responders to meth labs. Another chief of a municipal police department in Mississippi noted that since the adoption of the prescription-only approach, the amount of time and resources spent on meth-related investigations has declined by at least 10 percent. Child welfare: Officials in both Oregon and Mississippi reported a reduction in the demand for child welfare services to assist children found in households where meth lab incidents occurred. For example, according to a coordinator in Oregon’s Department of Human Services, the state has not removed a child from a household with an active lab since 2007. In Mississippi, the Methamphetamine Field Coordinator with the state Bureau of Narcotics, which tracks the number of drug-endangered children for the state, reported that the number of such children declined by 81 percent in the first year that the prescription-only approach was in effect. Environmental cleanup: According to data from DEA and the Oregon Department of Environmental Quality, declines in costs to clean up labs in Oregon occurred prior to the implementation of the prescription-only approach, falling from almost $980,000 in 2002 to about $580,000 in 2005. Since 2006, lab cleanup costs have continued to fall, reaching about $43,000 in 2011. Funding for cleanups in Mississippi showed more variation from year to year; however, between 2010, when the prescription-only approach was implemented, and 2011, cleanup costs dropped by more than half (from over $1 million to less than $400,000). However, even as the prescription-only approach appears to have contributed to reducing the number of lab incidents in Oregon, the availability and trafficking of meth are still widespread and pose a serious threat in the state. According to a threat assessment by the Oregon HIDTA, while the number of reported meth lab incidents has declined, crystal meth continues to be highly available in the area as Mexican drug traffickers import the finished product from laboratories outside the state and from Mexico. Moreover, while the prescription-only approach appears to have contributed to a reduction in the number of meth labs in the states that have adopted it, the experience of these states to date has shown that the approach does not preclude individuals from traveling to neighboring states to purchase PSE products for use in meth labs. Consequently, even as the number of meth lab incidents has declined in prescription-only states, law enforcement officials report that many of the lab incidents that still occur in these states are largely due to PSE product obtained from states without a prescription requirement for PSE. For example, according to a threat assessment by NDIC, law enforcement officers interviewed in 2011 reported that the more stringent restrictions on pseudoephedrine sales in Mississippi have led many pseudoephedrine smurfing groups to target pharmacies in the neighboring states of Alabama, Louisiana, and Tennessee in order to continue operations.
Officials of a sheriff’s office in a county located along the Gulf Coast in Mississippi stated that the department’s investigations have found that large numbers of individuals from Mississippi travel out of state to purchase PSE in an effort to circumvent the Mississippi prescription-only law. While some out-of-state purchases may be for licit uses, the officials stated that they believed a substantial proportion of the PSE brought back from other states was likely being diverted for the production of meth. According to law enforcement officials in Oregon, most of the incidents reported there in recent years involved either dumpsites or inactive “boxed labs” that had been used in previous years but have been dismantled and stored away for potential future use. According to the legal counsel for the Oregon Narcotics Enforcement Association, the association asked law enforcement to determine, where possible, the source of PSE for lab incidents. In every case where a determination could be made, the PSE was reportedly obtained from neighboring states, mostly Washington, but also Idaho, California, and Nevada. According to PSE purchase activity data from the NPLEx electronic tracking system and the vendor that provides its software platform, individuals using Oregon identifications have purchased PSE products in neighboring states. These data indicate that from October 15, 2011, through August 31, 2012, over 30,000 purchases were made by individuals using Oregon identifications. Similarly for Mississippi, law enforcement reports of individuals traveling to neighboring non-prescription-only states to purchase PSE products are supported by PSE purchase activity data provided by the NPLEx electronic tracking system. Since the NPLEx system was implemented in these states, the PSE purchase activity data indicate that over 172,000 purchases have been made by individuals using Mississippi identifications. As discussed earlier, Arkansas now generally prohibits dispensing PSE products without a prescription to individuals who lack an Arkansas-issued or Department of Defense identification (2011 Ark. Acts 588; see Ark. Code Ann. §§ 5-64-1103 to -1105), and Alabama has similarly restricted OTC sales to residents of prescription-only states who seek to obtain the products in Alabama. In essence, the impact of these laws is to extend the prescription-only requirement for Mississippi residents into Arkansas and Alabama. Officials from the Mississippi Bureau of Narcotics said these laws will help prevent PSE product from being obtained and diverted to Mississippi for use in meth labs. In addition to obtaining PSE products from non-prescription states, another potential source of PSE for meth labs in prescription-only states and localities is the illicit diversion of PSE obtained with a prescription. Similar to techniques used to divert other controlled prescription drugs such as pain relievers, diversion of prescribed PSE can occur through prescription forgery, illegal or improper prescribing by a physician, or “doctor shopping,” where an individual goes to several doctors to obtain a prescription from each doctor. Although these methods could provide sources of PSE for use in meth labs in prescription-only states, law enforcement officials in Oregon and Mississippi reported no known instances from their meth lab investigations in which a PSE product was obtained through one of those methods in order to make meth. Law enforcement officials in Missouri localities where the prescription-only requirement has been adopted reported a few instances of PSE obtained with a prescription being used to make meth.
Investigators from a regional drug task force in a county in Missouri reported that they have found PSE obtained by prescription in at least three meth lab incidents. Since the county adopted the prescription-only approach, they have observed more instances in which prescribed PSE is found at lab incidents. However, they did not find any evidence in these cases that the PSE had been prescribed illegally or obtained through prescription forgery or doctor shopping. Judging from the experience of Mississippi, the volume of PSE products obtained by consumers after the adoption of the approach declined from levels that existed when PSE was available OTC. Data on Mississippi OTC PSE product sales and the number of prescriptions for PSE filled suggest that use of PSE products could have fallen by several hundred thousand units after the implementation of the prescription-only approach. For example, annual unit sales of PSE dropped from almost 749,000 in 2009 before the prescription-only approach went into effect, to approximately 480,000 total units of PSE product sold OTC or prescribed in 2010, when the approach was in effect for half the year, to approximately 191,000 units prescribed or sold during 2011, when the approach had been in place for the full year (see table 3). Data are not available for Oregon on the sales of PSE products immediately before and after the implementation of the prescription-only approach to do a comparable analysis. Given the more restrictive access to PSE products that consumers face under the prescription-only approach, some impact on consumers is expected. The extent of this impact depends on a number of variables, such as the change in the effective price of PSE that the approach’s requirements produce and the availability of effective substitutes or alternative remedies for PSE. Under the prescription-only approach, the effective price of PSE, which includes costs associated with obtaining a prescription, such as the costs of time and travel to the physician for an appointment as well as any associated copays or out-of-pocket charges for the appointment itself, would increase if an in-person visit were necessary, negatively affecting consumers. If consumers are obtaining PSE prescriptions at a higher effective price because of these factors, they can be expected to be negatively affected to some extent by the prescription-only approach. At the same time, some of these costs, such as the time and travel required for an in-person appointment, can be mitigated to the extent that patients can obtain a prescription for PSE through a telephone consultation with their physicians. While it is likely that the effective price for PSE products is higher under the prescription-only approach, data on the cost to consumers for obtaining these prescriptions are not available to make this comparison. Further complicating the determination of the change in the effective price of PSE is that the actual costs of time, travel, and insurance coverage vary from consumer to consumer depending on individual circumstances. For example, uninsured consumers will likely face higher effective costs to obtain PSE products under a prescription-only approach than those with insurance.
Because of the uncertainty involving these variables and factors, it is not possible to determine the magnitude of the change in effective price of PSE for consumers. Despite the likely increase in the effective price of PSE because of the prescription-only approach, according to state agencies and consumer groups, consumers in Oregon and Mississippi have made few complaints about the approach since its implementation, although research or surveys on the issue have not been conducted. For example, according to the executive director of the Oregon Board of Pharmacy, the state agency that adopted the rule making PSE a controlled substance, the board initially received a small number of complaints from consumers when PSE was initially scheduled, but after a number of months, the board stopped hearing about it. Officials at the Mississippi Board of Pharmacy also noted that they have not received any complaints from consumers about the prescription requirement since it went into effect. According to consumer and patient advocacy organizations such as the National Consumers League and the Asthma and Allergy Foundation of America, which conducted surveys of consumers regarding access to PSE products in 2005 and 2010 respectively, neither organization has received feedback or complaints from consumers or patients from either state about the diminished access imposed on PSE products by the prescription-only approach. Both organizations also noted that they have not conducted any additional research or surveys on the issue since their earlier surveys in 2005 and 2010. Another variable that determines the impact of the prescription-only approach on consumers is the availability of substitutes for PSE that consumers can use as alternatives to offset any potential increase in the effective price to consumers for obtaining PSE by prescription. To ensure that consumers still have access to an unrestricted oral OTC decongestant, manufacturers of cold and allergy medicines reformulated many products by substituting the ingredient phenylephrine (PE), an alternative oral decongestant also approved by FDA for use in OTC medicines that cannot be used to make methamphetamine. However, according to sales data on PE products in Mississippi for the periods before and after implementation of the prescription-only approach, the changes in sales volume for PE products do not appear to show any direct substitution of PE for PSE by consumers. In fact, the change in volume in PE products shows a decrease for the 52-week period ending in December 2011 (see table 4). The lack of a consumer shift from PSE products to PE products could be the result of several potential factors, but data are limited or unavailable to ascertain their impact. For example, it could reflect, on average, consumer perception that PE is not an effective substitute for PSE. Similarly, it could also be an indication that consumers are choosing to forgo medicating their conditions or are using other medications or remedies to relieve their symptoms. Another potential factor that could contribute to this lack of a consumer shift to PE from PSE is the extent to which PSE sales were being diverted for meth use. Although available estimates of the extent to which PSE sales are being diverted vary greatly, the drop in PSE sales without a corresponding increase in PE product sales could also imply that some of the PSE sales were likely being diverted for meth production. 
According to officials of the market research firm that provided the PE sales data, another potential explanation for the lack of a distinct shift in demand for PE is that several PE products had to be recalled by the manufacturer because of manufacturing issues. Industry has noted that PE has limitations as a direct substitute for PSE, and in 2007, FDA reexamined the effectiveness of PE at the approved dosing levels. At the request of citizen petitioners who claimed that the available scientific evidence did not demonstrate the effectiveness of PE at the approved 10-milligram dosage level, an FDA advisory committee reviewed the issue in December 2007, including two meta-analyses of studies provided by the citizen petitioners and CHPA. After reviewing this evidence, the committee concluded that, while additional studies would be useful to evaluate higher doses, the 10-milligram PE dose was effective. However, since the recommendation of the FDA advisory committee in 2007 to study the effectiveness of PE at higher dosage levels, it appears that limited work has been undertaken to do so. According to CHPA, while it agrees that the approved dosage levels of PE are effective, PE has known limitations that make it a less than viable substitute for PSE in some long-duration applications and for many consumers. As would be expected under the more restrictive prescription-only approach, consumers of PSE products would be negatively affected to some extent by its enactment, considering the variables that determine the change in the effective price of PSE products. However, because of uncertainties related to these variables, such as consumers’ individual situations regarding insurance or the need for an in-person consultation with their physicians, the effectiveness of substitutes such as PE or use of other alternatives, and the extent to which PSE sales have been used for illicit purposes, the net effect on consumer welfare resulting from enactment of a prescription-only policy cannot be quantified. One of the concerns expressed by industry about the potential impact of the prescription-only approach is that it is likely to increase the workload of health care providers and the overall health care system to some extent. Both the Oregon and Mississippi laws require individuals to obtain a prescription from a health care provider, which entails some type of visit or consultation with the provider and increases provider workload to process the prescriptions. In addition, individuals who do not already have an established relationship with a health care provider may require a more involved, initial in-person visit to obtain a prescription, and pharmacies may experience increased workload because of new dispensing requirements. Assuming that health care providers charge prices that reflect the costs of providing these additional services, any increase in provider workload should be reflected in the office charge billed to the patient. While the impact of the prescription-only approach on the health care system is generally unknown, on the basis of limited information available from health care providers in Oregon and Mississippi, it does not appear that there has been a substantial increase in workload demands to provide and dispense prescriptions for PSE products.
According to a 2011 study commissioned by CHPA on managing access to PSE, judging from Oregon’s experience, the number of health care provider visits did not grow significantly, in part because consumers reported obtaining a prescription via telephone or fax request. Officials from associations representing physicians in Oregon stated that their members have not reported any real impact on their practices, and feedback from their members suggests that the benefits of fewer meth labs outweigh any inconvenience from requests for prescriptions. Officials from the association representing Mississippi physicians similarly reported that, from the perspective of a limited sample of its members involved in family practice, emergency room care, and addiction treatment, no increase had been observed in the demand for appointments from patients seeking PSE products. In addition, representatives from the association representing pharmacists in Oregon stated that they have received few complaints about the prescription-only requirement. Further, reports from the experience of Oregon and Mississippi indicate that there has not been a significant increase in cost to the states’ Medicaid programs. Officials in those states said there was no net change in their Medicaid programs’ policy with the statewide implementation of the prescription requirement because the programs already required participants to obtain a prescription for PSE products in order to have the medication covered under the states’ Medicaid pharmacy benefit formulary. In summary, on the basis of the experience of Oregon and Mississippi, the use of the prescription-only approach has had the following impacts: Its apparent effectiveness in reducing the availability of PSE for meth production has in turn helped to reduce or maintain a decline in the number of meth lab incidents in the states that have adopted the approach. The reduction in meth lab incidents has led to a corresponding decline in communities’ demand for the child welfare, law enforcement, and environmental cleanup services needed to respond to the secondary impacts of meth labs. The approach has the potential to place additional burdens on consumers to some extent, although this is difficult to quantify because of the lack of data and the wide variation in consumers’ individual circumstances. It has increased the potential for additional workload and costs for the health care system to provide prescriptions for PSE products, although the limited information and data available to date do not indicate that these have been substantial in the two states that have adopted the approach. It has also increased the possibility that consumers in a prescription-only state will attempt to bypass the prescription-only requirement by purchasing PSE in a neighboring nonprescription state. We provided a draft of this report to the Department of Justice and ONDCP for comment. Justice and ONDCP did not provide written comments on the report draft, but both provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Attorney General, the Director of the Office of National Drug Control Policy, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Carol Cha at (202) 512-4456 or chac@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to identify (1) trends in domestic meth lab incidents over the last decade and the impact of domestic meth labs on the communities affected by them; (2) the impact of electronic tracking systems on domestic meth lab incidents and the limitations, if any, of using these systems; and (3) the impact of prescription-only laws on domestic meth lab incidents and any implications of this approach for consumers and the health care system. To identify the trends in domestic meth lab incidents over the last decade, we obtained and analyzed Drug Enforcement Administration (DEA) National Seizure System (NSS) data for all states on nearly 149,000 lab seizure incidents that occurred during the last 10 calendar years, 2002 through 2011. Using these data, we analyzed the number of incidents nationally, by region, by type of lab (i.e., P-2-P, Nazi/Birch and One Pot, or Red Phosphorus), and by lab capacity. To assess the reliability of these data, we discussed the sources of the data with agency officials knowledgeable about the data to determine data consistency and reasonableness and compared them with other supporting data and documentation, where available, from states selected to be case studies for this review. Because reporting of lab incidents to DEA by state and local law enforcement agencies is voluntary except when DEA provides funds to the agencies for lab cleanup, and because DEA’s cleanup funds were exhausted less than halfway through fiscal year 2011, the number of lab incidents reported for 2011 could be biased downward as compared with the number of incidents in previous years. We discussed this issue, its potential implications, and the steps taken to address it with the DEA officials who manage the collection of the data. From these efforts and discussions, we determined that the data were sufficiently reliable for the purposes of this report. To identify key factors that influenced lab seizure incident trends over time, we obtained perspectives and information on meth lab incident trends and the factors influencing them from state and local officials we interviewed in states that were selected as case studies. This nonprobability sample of states was selected to reflect a mix of characteristics such as the type of approach chosen for controlling the sale of pseudoephedrine (PSE) products (electronic tracking or prescription-only), length of time the approach has been in use, and the number of meth labs seized relative to the state’s population size. The case study states were Iowa (electronic tracking), Kentucky (electronic tracking), Mississippi (prescription-only), Missouri (electronic tracking), Oregon (prescription-only), and Tennessee (electronic tracking). While we cannot generalize any findings or results to the national level from our sample of states visited for our case studies, the information from these states provided perspective on meth lab trends and the experiences of the states in implementing these approaches.
We also reviewed drug threat assessments and reports by the National Drug Intelligence Center (NDIC) and information from officials with DEA and the Office of National Drug Control Policy (ONDCP). We reviewed the methodology of the assessments and reports and found them sufficiently reliable to provide perspectives on meth lab incident trends and factors influencing these trends. We obtained additional information and input regarding factors that contributed to meth lab incident trends from federal, state, and local officials participating in the May 2012 conference of the National Methamphetamine and Pharmaceutical Initiative (NMPI), a national initiative funded by ONDCP. To determine the impact of domestic meth labs on the communities affected by them, we first reviewed a variety of reports and studies on meth labs and their impacts from sources such as the Department of Justice (DOJ), DEA, the RAND Corporation, media reports, and published academic research to identify the particular ways that communities are directly affected by the presence of labs. On the basis of this review, we identified the key ways communities are affected by meth labs. These included the provision of health care to meth lab burn victims, threats and dangers posed to the welfare of children, environmental damage, and increased demand and workload for law enforcement agencies. While communities can be affected in other ways, such as through the treatment of health conditions related to meth abuse and the demand for addiction treatment, these impacts are caused by the abuse of both imported and domestically produced meth and are not unique to meth labs. Therefore, we did not include those areas in our review. To describe the impacts on health care providers of administering care to meth lab operators injured or burned by their labs, we reviewed and synthesized information from published academic research comparing the injuries of, and treatment provided to, meth lab burn victims with those of burn patients whose injuries were not related to meth labs; documentation from DOJ on meth labs; and media reports on the reported impacts of meth labs on hospital burn centers. We also interviewed the director of the burn center at the Vanderbilt University Hospital in Nashville, Tennessee, to obtain his perspective, as the center has treated a significant number of burn patients who received their injuries from a meth lab. To describe impacts of meth labs on child welfare, we reviewed and synthesized information from DOJ on drug-endangered children, meth lab incident data from DEA on the number of children reported to be affected by the labs, and published academic research on the impact of meth abuse on the need for foster care. To describe environmental damage caused by meth labs, we reviewed and synthesized information from DOJ on the impact of meth labs, DEA’s guidance for meth lab cleanup, and a report from the DOJ Inspector General on DEA’s meth lab cleanup program. For context, we also obtained information from DEA on its clandestine lab cleanup program and the funds expended on the program to assist state and local law enforcement agencies in cleaning up meth labs from 2002 through 2011. In addition, we obtained and analyzed information from the case study states of Mississippi, Missouri, and Oregon on any funds state agencies spent on the cleanup of meth labs.
To describe impacts of meth labs on law enforcement agencies in communities, we reviewed and synthesized information from DEA’s guidance for meth lab cleanup, documentation from DOJ on meth labs, as well as information from state and local law enforcement officials we interviewed from our case study states. To determine the impact of electronic tracking systems on domestic meth lab incidents, we analyzed DEA NSS data on the number of meth lab incidents that were reported in the 3 states that have implemented electronic tracking the longest—Kentucky, Oklahoma, and Tennessee— from 2002 through 2011 to identify any trends in lab incidents that occurred within those states before and after the implementation of electronic tracking within those states. To examine the volume of PSE sales activities the national electronic tracking system monitors and blocks when necessary, we obtained and reviewed PSE purchase activity data (purchases, blocks, and exceedances) for 2011 and 2012 from Appriss, the software firm that developed and manages the software program MethCheck, which is used as the operational platform for the National Precursor Log Exchange (NPLEx), the interstate electronic tracking system paid for by manufacturers of PSE products. We chose this time period because those were the most recent years for which data from multiple states were available. To assess the reliability of these data, we discussed the data with Appriss officials. From these efforts and discussions, we determined that the data were sufficiently reliable for the purposes of this report. To understand how electronic tracking works in practice and the limitations of this approach, we obtained information from officials with Appriss as well as officials with state and local law enforcement and the High Intensity Drug Trafficking Areas (HIDTA) in our electronic tracking case study states of Iowa, Kentucky, Missouri, and Tennessee. For these state and local law enforcement officials, we utilized a snowball sampling methodology in which we initially contacted key law enforcement officials in those states involved in dealing with the meth lab problem who identified and provided contacts for other officials in those states to meet with. From these state and local law enforcement officials, we obtained information and their perspectives on the use of electronic tracking, its impact on the meth lab problem within their jurisdictions, and any potential advantages or limitations of the approach identified through their investigations and experience with the system to date. Although their perspectives cannot be generalized across the broader population of state and local law enforcement agencies in electronic tracking states, their perspectives provided insights into and information on the use and impact of the approach in practice and its limitations. To determine the impact of prescription-only laws on domestic meth lab incidents and any implications of adopting this approach for consumers and the health care system, we analyzed DEA NSS data on the number of meth lab incidents that were reported in the prescription-only states of Mississippi and Oregon and their border states (Alabama, Arkansas, California, Idaho, Louisiana, Nevada, Tennessee, and Washington) from 2002 through 2011 to identify any trends in lab incidents that occurred within those states before and after the implementation of the prescription-only approach. 
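For illustration only, the following minimal sketch (in Python) shows the kind of state-by-year aggregation and before-and-after comparison of lab incident counts described above. It is not the code used for this review, and the file name, column names, and policy dates shown are assumptions for the example rather than details of GAO’s analysis.

import pandas as pd

# Hypothetical extract of DEA NSS data: one row per reported lab incident.
incidents = pd.read_csv("nss_lab_incidents.csv")

# Count incidents by state and year.
by_state_year = (incidents.groupby(["state", "year"])
                 .size()
                 .rename("labs")
                 .reset_index())

# Assumed effective years for the prescription-only requirement (Oregon 2006, Mississippi 2010).
policy_year = {"OR": 2006, "MS": 2010}

for state, cutoff in policy_year.items():
    s = by_state_year[by_state_year["state"] == state]
    pre = s.loc[s["year"] < cutoff, "labs"].mean()
    post = s.loc[s["year"] >= cutoff, "labs"].mean()
    print(f"{state}: mean annual lab incidents {pre:.0f} before vs. {post:.0f} after {cutoff}")

A comparison of this kind describes trends only; it does not by itself separate the effect of the policy from regional or reporting factors, which is why the statistical modeling described in appendix III was also conducted.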
To determine the impact of the prescription-only approach on meth lab incidents in Oregon, we conducted a statistical modeling analysis of the lab incident data that controlled for other factors such as region of the country, ethnic composition of the state population, the proportion of the state population that is male, distance from the Mexican border, and the state drug arrest rate, among others. For more details on the methodology used for this analysis, see appendix III. To determine the impact of the prescription-only approach in counties and localities in Missouri that have adopted the approach, we also obtained and analyzed information from local officials in Missouri on how meth lab incidents have changed since the adoption of the approach within their jurisdictions. To obtain the perspective of state and local officials on the impact of the implementation of the prescription-only approach in their states and localities, we utilized a snowball sampling methodology in which we initially contacted key law enforcement officials involved in dealing with the meth lab problem or associations representing law enforcement in Mississippi, Missouri, and Oregon who then identified and provided contacts of other officials within their states for us to meet with. We interviewed these officials to obtain their perspectives on the impact of the prescription-only approach on the meth lab problem as well as the perceived impacts on other areas, where possible, such as the demand for law enforcement, child welfare, environmental cleanup, and the trafficking of meth within their states. Although their perspectives on these impacts cannot be generalized across the broader population of state and local law enforcement agencies in prescription-only states, they provided insights into and information on the impact of the approach in practice. To determine the extent to which individuals in prescription-only states have been traveling to neighboring states to obtain PSE product without a prescription or have diverted PSE product obtained with a prescription, we interviewed and obtained information from local law enforcement officials in Mississippi, Missouri, and Oregon on what they have found in their investigations into meth labs and PSE smurfing. We also obtained and reviewed NPLEx data on PSE purchase activity from Appriss for PSE purchases made in Washington state with identifications issued by Oregon from October 15, 2011, to the most recent full month available (August 2012). We chose the starting date of October 15, 2011, because that was the date that Washington state implemented the NPLEx system statewide. To gauge the extent of PSE sales to individuals using identifications issued by Oregon in the other states neighboring Oregon (California, Idaho, and Nevada) that had not implemented NPLEx but had retailers that used the NPLEx MethCheck software program, we obtained and reviewed the MethCheck log data on PSE purchase activity for those states for the same October 15, 2011, to August 2012 time period. For Mississippi, we obtained and reviewed NPLEx data on PSE purchase activity for purchases made with identifications from Mississippi in the NPLEx states neighboring Mississippi (Alabama, Louisiana, and Tennessee) from the time those states joined NPLEx to July 2012. To determine the impact of the prescription-only approach on consumers in Mississippi, we obtained data from IMS Health Inc.
through DEA on the volume of PSE sales for three 52-week periods ending in December 2009, 2010, and 2011 and analyzed the data for any changes in volume over time, comparing the 2010 and 2011 periods, when the prescription-only approach was in effect, with the 2009 period, when it was not. To assess the reliability of the data, we reviewed documentation and information from IMS Health officials knowledgeable about the data to determine data consistency and reasonableness. From these reviews, we determined that the data were sufficiently reliable for the purposes of this report. Because data prior to the period Oregon implemented the prescription-only approach in 2006 were not available, we were not able to do a similar analysis for Oregon. To examine the number of prescriptions filled in Mississippi for PSE medications, we obtained and reviewed data provided by the Mississippi Board of Pharmacy’s Prescription Drug Monitoring Program. To assess the reliability of the data, we discussed the data with officials who manage the program. From these efforts and discussions, we determined that the data were sufficiently reliable for the purposes of this report. To obtain additional information on the reported and estimated impacts of the prescription-only approach on consumers and the health care system, we reviewed a report on the potential impacts of the prescription-only approach prepared for the Consumer Healthcare Products Association (CHPA). To help obtain perspective on the potential impact on consumers, we asked the state boards of pharmacy and state associations representing pharmacists in Mississippi and Oregon, such as the Oregon State Pharmacists Association, about the extent to which complaints may have been made by consumers about the prescription-only approach. We also asked the National Consumers League and the Asthma and Allergy Foundation of America if they had received feedback or complaints from consumers on the impact of the prescription-only approach. We chose these organizations because they have previously surveyed consumers about access to PSE products. To understand the prescription-only approach’s impact on the workload demands for physicians, we obtained the perspective of state associations representing physicians practicing in Oregon and Mississippi, such as the Oregon Medical Association and the Mississippi State Medical Association, on the extent to which their members have reported seeing an increase in demand for appointments for PSE prescriptions and any corresponding increase in their workload. While their perspectives cannot be generalized to the larger population of physicians in these states, they provided insights into the impact of the approach on their members’ practices. To determine the impact of the prescription-only approach on the Medicaid programs within Mississippi and Oregon, we obtained perspectives and information from Medicaid program officials in those states on what, if any, changes the approach required of their prescription formulary and any resulting changes in program costs. We conducted this performance audit from February 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We separately evaluated the impact of the prescription-only pseudoephedrine requirement on domestic production of methamphetamine using state-level data. We chose Oregon, which implemented its prescription-only pseudoephedrine requirement in 2006. In order to evaluate the impact of the policy, we performed multivariate regression analyses using generalized estimating equations (GEE) to compare the trend in lab seizures reported to DEA between 2002 and 2010. We compared the case study state with a selected group of control states using a method that improves upon the commonly used Difference-in-Differences (DD) estimation method. We estimated robust standard errors for the DD coefficients by modeling the covariance structure in the GEEs. In addition to estimating a DD model, we alternatively estimated the intervention effect by comparing the case state to a single synthetic control using the synthetic control method for comparative case studies, following Abadie and colleagues and Nonnemaker and colleagues. These models are described in detail below. All data were annual state-level characteristics from 2001 through 2011 taken from multiple sources. Each observation in the data represented a state for a given year between 2002 and 2011. Some factors were lagged 1 year to account for a deterrent effect and to impute data missing for a later year. Eleven states were excluded from the final analyses as potential controls because they had implemented policies early in the postintervention period or because they were missing data on a key covariate; they include Arkansas, California, Florida, Hawaii, Illinois, Iowa, Kentucky, Louisiana, Mississippi, Oklahoma, and Tennessee. Oregon, as the case study state, was likewise not used as a control. Variables included in the analysis were similar to those controlled for in other studies on the impact of precursor controls. Outcome variables: We modeled two outcome variables: the total lab seizure rate per 100,000 population and the small toxic lab seizure (STL) rate per 100,000 population. Small toxic labs are defined as labs with a capacity of 1 pound or less. Data on methamphetamine seizure incidents from the National Seizure System maintained by Drug Enforcement Administration’s El Paso Intelligence Center (DEA EPIC) were aggregated to get the number of methamphetamine lab seizures per state per year. The rates were computed using the Census annual population estimate as a denominator, multiplied by 100,000, and are expressed as the rate per 100,000 people. The rates were transformed by taking the log base 10 to approximate the normal distribution required for a linear model. Other factors were controlled in this model. The control variables included the following: Client rate: The rate of substance abuse clients reported annually into the Substance Abuse and Mental Health Services Administration through the National Survey of Substance Abuse Treatment Services (N-SSATS) per 100,000 people. This factor is lagged 1 year to account for the possibility that the number of substance abuse clients has more of an impact on the future number of labs seized than on the current number. Lagging these data also allows us to make up for unavailable data in 2011. The client rate is not available for 2002. The 2001 value is used to impute that value. Region: Regional factors are expected to affect the methamphetamine problem and domestic production.
We cannot identify or control for all of the potential factors that influence lab seizures for the region, so we include a set of dummy variables indicating the census division to approximate the potential influence of regional factors. Divisions include the following: 1 = New England; 2 = Middle Atlantic; 3 = East North Central; 4 = West North Central; 5 = South Atlantic; 6 = East South Central; 7 = West South Central; 8 = Mountain; and 9 = Pacific (referent category). Demographics: Some demographic groups are more likely to use methamphetamine than other groups. We controlled for the demographic composition of the state population to account for potential demand for the drug. The percentages of the population that are non-Hispanic white, male, Hispanic, and under age 18 were computed annually for each state from Census intercensal population estimates. Distance to Mexico: The approximate number of miles between the state and the nearest Mexican border city was taken from Cunningham et al. (2010). The number of miles was included as a set of categories, with the farthest distance (1,800 miles) as the reference category. This variable attempts to account for the effect of the supply of imported methamphetamine on domestic production. Funding: The Community Oriented Policing Services (COPS) funding amount from DEA was adjusted to 2012 dollars using the Consumer Price Index and divided by 1,000 to adjust the scale of the dollar amounts. This variable controlled for law enforcement activity specific to methamphetamine lab cleanups. It also helped to adjust for a possible downward bias in the 2011 reporting because of a discontinuation of COPS funding for a portion of that year. Police: The presence of police was measured as the annual number of employed law enforcement officers as a percentage of the total population. Police data came from the Uniform Crime Report (UCR) Law Enforcement Officers Killed and Assaulted (LEOKA) data set. This factor was lagged 1 year to account for the possibility that the presence of police has a deterrent effect on the future number of labs seized. Lagging these data also allowed us to make up for unavailable data in 2011. Arrests: The drug arrest rate was measured as the number of drug arrests (UCR offense code 18) per 100,000 population. The data come from the Uniform Crime Reporting Program Data: Arrests by Age, Sex, and Race, Summarized Yearly. Data for Florida were not reported in this data set. This factor was lagged 1 year to account for the possibility that the number of drug arrests has a deterrent effect on the future number of labs seized. Lagging these data also allowed us to make up for unavailable data in 2011. While recent analyses of methamphetamine precursor laws have used relatively similar parsimonious models, our model may still be underspecified. For example, we did not control for alternative drug use. The DD model is a regression model that compares over time the outcomes for a unit of analysis that has been exposed to a treatment or intervention (referred to as a case) with the outcomes of at least one unit that has not been exposed to the treatment or intervention (referred to as a control). The case is exposed to the intervention at some point after the first period of time; the control is never exposed to the intervention during the course of the study. The impact of the intervention is represented by the difference in differences. In this case there are two differences.
The first difference, computed separately for the case and the control, is between the average outcomes in the postintervention period and the preintervention period. The second difference subtracts the control’s difference between the two periods from the case’s difference. It can be written as equation 1. EQ. 1: DD = (y-bar Case,post - y-bar Case,pre) - (y-bar Control,post - y-bar Control,pre) For a DD model, the data consist of one observation for each geographic unit, which is represented by subscript i, and each unit of time, which is represented by subscript j. In our analysis, each observation represents a state in each year from 2000 through 2010. Since the interventions were implemented in 2006, the preintervention period spans 2000 through 2006. The post period spans 2007 through 2010. A dummy variable indicating the postintervention period is specified; therefore, our DD model takes the form: EQ. 2: Yij = β0 + β1(Post-Intervention Dummy) + β2(Oregon) + β3(Post-Intervention Dummy*Oregon) + β4(Time) + βXij + εij, where Yij is the outcome for state i at period j; β3 is the coefficient on the interaction between the case study state and the postintervention period, which provides the DD estimate of the policy’s impact; Xij is the set of control covariates; and εij is the error term. DD estimation has some known limitations described in the academic literature. Besley and Case (2000) describe the endogeneity of interventions, i.e., the fact that policies are made in response to the same conditions that lead to the outcome. Heckman (2000) and Bertrand and colleagues (2004) showed that because of serial correlation in the outcomes over time, difference-in-differences models tended to underestimate the standard error of the intervention coefficient and therefore overestimate the test statistic, leading to findings of statistically significant differences between the case and control units more often than warranted. Abadie and colleagues (2010) argue that the selection of control units is made on the basis of subjective measures of affinity between case and control units and that there is uncertainty in the control units’ ability to reproduce the counterfactual outcome trend that the case would have experienced if the intervention had not taken place. This is an additional source of uncertainty beyond that measured by the standard error. We attempt to address these limitations in the analysis. To account for autocorrelation, we implemented this model using generalized estimating equations (GEE) in SAS Proc Genmod with a repeated statement specifying the compound symmetry covariance structure to account for the autocorrelation in the outcomes across time periods for each state. The covariance structure was determined by examining the working correlation matrix estimated when specifying an unstructured covariance structure and by comparing the quasi-likelihood information criterion (QIC) statistics for models specifying five different covariance structures: independence, compound symmetry, first-order autocorrelation, unstructured, and 1-dependent. The unstructured covariance structure allows the correlations to be different in each comparison of times without any specific pattern. The unstructured working correlation matrix indicated high, constant correlation over time. Since the correlations seem constant, a compound symmetry structure is more appropriate. The QIC values for the GEE models were similar with the independence, compound symmetry, and autocorrelation structures specified, but the QIC was usually lowest for the independence structure, with autoregressive next, indicating that those structures fit the model better. However, independence in the measures across time is not a logical assumption given the nature of the data, and the structure of the correlation matrix specified by the unstructured covariance structure does not show the declining correlations over time that the autoregressive structure describes. The QIC supports our choice of a compound symmetry covariance structure.
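To illustrate how the DD interaction in EQ. 2 might be estimated with GEE and a compound symmetry (exchangeable) working correlation, the following minimal sketch uses Python’s statsmodels rather than the SAS Proc Genmod procedure used in the analysis. The file name, covariate names, policy dates, and the handling of zero counts are assumptions for the example, not details of the model estimated for this report.

import numpy as np
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("state_year_panel.csv")          # hypothetical state-year panel

# Outcome: log10 of lab seizures per 100,000 population (the Yij of EQ. 2).
panel["rate"] = panel["labs"] / panel["population"] * 100_000
panel["log_rate"] = np.log10(panel["rate"] + 0.01)   # small offset guards against zero counts in this sketch

panel["post"] = (panel["year"] >= 2007).astype(int)  # postintervention dummy
panel["oregon"] = (panel["state"] == "OR").astype(int)

# Placeholder covariates standing in for the controls described above.
formula = ("log_rate ~ post + oregon + post:oregon + year "
           "+ client_rate_lag + pct_male + pct_white + drug_arrest_rate_lag")

model = sm.GEE.from_formula(
    formula,
    groups="state",                                  # repeated annual measures within each state
    data=panel,
    cov_struct=sm.cov_struct.Exchangeable(),         # compound symmetry working correlation
    family=sm.families.Gaussian(),
)
result = model.fit()

# Back-transform the log10 DD coefficient into a percent change in the seizure rate.
beta_dd = result.params["post:oregon"]
pct_change = (10 ** beta_dd - 1) * 100
print(result.summary())
print(f"Estimated percent change in the lab seizure rate: {pct_change:.0f}%")

In the model estimated for this report, the full set of covariates, census division indicators, and time effects described above would take the place of these placeholders, and the robust standard error on the interaction term would be used to judge statistical significance.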
We validated our model findings using the synthetic control method. The synthetic control method introduced by Abadie and colleagues (2010) is a modification of the DD method that creates a data-driven synthetic control representing the counterfactual of the case in the absence of the intervention. The synthetic control method has two advantages. It allows for transparency and objectivity in the selection of controls. It also safeguards against extrapolation of the counterfactual by creating the synthetic control to match the case closely in the preintervention period. We implemented the synthetic control method in Stata using the synth ado program. The program uses the set of control states to create a synthetic form of the case study state by weighting the control states. The treated and synthetic control states are matched on the outcome and any combination of covariates in the preintervention period so that the mean squared error of the prediction variables is minimized. The model then projects the trajectory of the synthetic state over the postintervention period, assuming that the intervention was not implemented. In preliminary analyses, we tested the robustness of the model by matching the state and synthetic controls on the outcomes alone and on the outcomes and all covariates controlled in the GEE models. All results presented here are based on a model matching on the outcome and most covariates controlled in the GEE models. The synthetic control method does not generate a simple test statistic to determine whether the difference between the case study and synthetic control state is statistically significant. To test whether the results for Oregon are likely to be found by chance, we ran the model assigning each of Oregon’s neighboring states that met our criteria for inclusion as controls (Washington, Idaho, and Nevada) in turn as the case study state and allowed the model to generate a synthetic control to compare what would have happened relative to the experience in each of those states. If the results were found to be similar to Oregon’s, then we could not dismiss the possibility that our findings for Oregon were due to chance. The prescription-only requirement had significant impacts on lab seizure rates compared with a selected group of controls. Contrary to the findings in Cunningham et al. (2012) and Strauberg and Sharma (2012), our analysis found that the lab seizure rate fell by more than 90 percent in Oregon after the prescription-only requirement was implemented, after adjusting for other factors. While 90 percent seems very high, the estimate should be considered in the context that the rate had been declining and was relatively low before the policy was implemented. The impact of the prescription-only requirement was validated when the case study state was compared with an empirically generated synthetic control. The synthetic control method confirmed the direction of the impact in Oregon. Our placebo analysis, which assigned Oregon’s neighbor states in turn as the case study state, showed that the reductions seen in Oregon were not projected in those states, giving some indication that the Oregon reduction was not found by chance.
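To illustrate the weighting idea behind the synthetic control comparison described above, the following simplified sketch finds nonnegative weights, summing to 1, that make a weighted combination of control states track the case state in the preintervention period. It is not the Stata synth ado program used in the analysis: it matches on preintervention outcomes only, whereas the actual method also matches on covariates and weights predictors by importance, and the function and variable names are hypothetical.

import numpy as np
from scipy.optimize import minimize

def synthetic_weights(y_case_pre, Y_controls_pre):
    """y_case_pre: array of shape (T_pre,) with preintervention outcomes for the case state.
       Y_controls_pre: array of shape (T_pre, J) with preintervention outcomes for J control states."""
    J = Y_controls_pre.shape[1]

    def loss(w):
        # Squared distance between the case and the weighted combination of controls.
        return np.sum((y_case_pre - Y_controls_pre @ w) ** 2)

    w0 = np.full(J, 1.0 / J)                         # start from equal weights
    res = minimize(
        loss, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * J,                     # weights are nonnegative
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],  # and sum to 1
    )
    return res.x

# Usage sketch: the weighted combination of the control states' postintervention outcomes
# serves as the counterfactual trajectory for the case state.
# weights = synthetic_weights(oregon_pre, controls_pre)
# counterfactual_post = controls_post @ weights

The same routine can be rerun with each neighboring state treated as the case, which is the essence of the placebo analysis described above.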
We cannot determine the extent of the impact using the synthetic control method because of the poor fit of the model in the period prior to the policy’s implementation. Our analysis differs from the two recent studies cited above in the methodology, including the analytical approach and model specification, and in the date on which the incident data were pulled. The key finding from the GEE model is the coefficient on the interaction between the case study state and the postintervention period indicators. Since the outcome data were transformed to improve the model fit, we back-transformed the coefficients for ease of interpretation. Four estimates are presented in table 5, representing the unadjusted and adjusted model specifications applied to each of the two outcomes described above: the lab seizure rate including labs of all capacities and the small toxic lab seizure rate. The unadjusted model adjusts only for the policy, state, and time effects and the interaction between the case study state and the postintervention period indicators. The adjusted model adjusts for those factors and controls for all covariates described above. The unadjusted impacts are interpreted as the percent change in the rate resulting from the implementation of the policy, adjusting only for temporal factors. The adjusted impacts are interpreted as the percent change in the rate resulting from the implementation of the requirement after controlling for other factors that may also affect the change in the seizure rate. Impacts are determined to be statistically significant if the p-value is less than 0.05. The key finding from the synthetic control model is the difference in the estimated lab seizure rate in the years after 2006 between the case study state and the synthetic control. Differences in the postintervention period can be attributed to the impact of the policy when the two match closely in the preintervention period. Since the states did not always have a close match in the preintervention period and the model does not generate a test statistic to indicate whether the differences between the case study and synthetic control are statistically significant, we do not present numerical results indicating the size of the impact of the policy from this analysis; instead, we used the results to validate the direction of the findings of the GEE models. In addition to the contact named above, Kirk Kiester (Assistant Director), Charles Bausell, Rochelle R. Burns, Willie Commons III, Yvette Gutierrez-Thomas, Michele C. Fejfar, Christopher Hatscher, Eric Hauswirth, Eileen Larence, Linda S. Miller, Jessica Orr, and Monique Williams made significant contributions to the work.
Meth can be made by anyone using easily obtainable household goods and consumer products in labs, posing significant public safety and health risks and financial burdens to local communities and states where the labs are found. Meth cooks have discovered new, easier ways to make more potent meth that require the use of precursor chemicals such as PSE. Some states have implemented electronic tracking systems that can be used to track PSE sales and determine if individuals comply with legal PSE purchase limits. Two states, along with select localities in another state, have made products containing PSE available to consumers by prescription only. GAO was asked to review issues related to meth. Thus, GAO examined, among other things, (1) the trends in domestic meth lab incidents over the last decade; (2) the impact of electronic tracking systems on meth lab incidents and limitations of this approach, if any; and (3) the impact of prescription-only laws on meth lab incidents and any implications of adopting this approach for consumers and the health care system. GAO analyzed data such as data on meth lab incidents and PSE product sales and prescriptions. GAO also reviewed studies and drug threat assessments and interviewed state and local officials from six states that had implemented these approaches. These states were selected on the basis of the type of approach chosen, length of time the approach had been in use, and the number of meth lab incidents. The observations from these states are not generalizable, but provided insights on how the approaches worked in practice. Methamphetamine (meth) lab incidents--seizures of labs, dumpsites, chemicals, and glassware--declined following state and federal sales restrictions on pseudoephedrine (PSE), an ingredient commonly found in over-the-counter cold and allergy medications, but they rose again after changes to methods in acquiring PSE and in the methods to produce meth. According to Drug Enforcement Administration (DEA) data, the number of lab incidents nationwide declined through 2007 after the implementation of state and federal regulations on PSE product sales, which started in 2004. The number of meth lab incidents reported nationally increased after 2007, a trend primarily attributed to (1) the emergence of a new technique for smaller-scale production and (2) a new method called smurfing--a technique used to obtain large quantities of PSE by recruiting groups of individuals to purchase the legally allowable amount of PSE products at multiple stores that are then aggregated for meth production. Electronic tracking systems help enforce PSE sales limits, but they have not reduced meth lab incidents and have limitations related to smurfing. By electronically automating and linking log-book information on PSE sales, these systems can block individuals from purchasing more than allowed by law. In addition, electronic tracking systems can help law enforcement investigate potential PSE diversion, find meth labs, and prosecute individuals. However, meth cooks have been able to limit the effectiveness of such systems as a means to reduce diversion through the practice of smurfing. The prescription-only approach for PSE appears to have contributed to reductions in lab incidents with unclear impacts on consumers and limited impacts on the health care system. The implementation of prescription-only laws by Oregon and Mississippi was followed by declines in lab incidents. 
Law enforcement officials in Oregon and Mississippi attribute this reduction in large part to the prescription-only approach. Prescription-only status appears to have reduced overall demand for PSE products, but overall welfare impacts on consumers are unclear because of the lack of data, such as the cost of obtaining prescriptions. On the basis of the limited information available from health care providers in Oregon and Mississippi, there has not been a substantial increase in workload demands to provide and dispense prescriptions for PSE products.
You are an expert at summarizing long articles. Proceed to summarize the following text: As it does now, the United States will fund its share of NATO enlargement primarily through contributions to the three common budgets. NSIP pays for infrastructure items that are over and above the needs of the member nations, including communications links to NATO headquarters or reinforcement reception facilities, such as increased apron space at existing airfields. The military budget pays for NATO Airborne Early Warning Force program and military headquarters costs, and the civil budget pays primarily for NATO’s international staff and operation and maintenance costs of its civilian facility in Brussels. For fiscal year 1997, the U.S. contribution for the three common budgets was about $470 million: $172 million for the NSIP, $252 million for NATO’s military budget, and $44.5 million for NATO’s civil budget. Any increases to the U.S. budget accounts would be reflected primarily through increased funding requests for the DOD military construction budget from which the NSIP is funded, the Army operations and maintenance budget from which the military budget is funded (both part of the National Defense 050 budget function), and the State Department’s contributions to international organizations from which the civil budget is funded (part of the International Affairs 150 budget function). While NATO will not have finalized its common infrastructure requirements for new members until December 1997 or decided whether or how much to increase the common budgets until June 1998, DOD and State Department officials told us that the civil and NSIP budgets are likely to increase by only 5 to 10 percent and the military budget will probably not increase at all. This would mean an increase of about $20 million annually for the U.S. contribution to NATO. However, as we indicated, NATO has yet to make decisions on these matters. In addition, the United States could choose to help new members in their efforts to meet their NATO membership obligations through continued Foreign Military Financing grants and/or loans, International Military Education and Training grants, and assistance for training activities. The three candidate countries and other PFP countries have been receiving assistance through these accounts since the inception of the PFP program, and this has enabled some of these countries to be more prepared for NATO membership. In fiscal year 1997, over $120 million was programmed for these activities, and about $60 million of this amount went to the three candidates for NATO membership. Any increased funding for such assistance would be funded through the International Affairs and Defense budget functions. It is through NATO’s defense planning process that decisions are made on how the defense burden will be shared, what military requirements will be satisfied, and what shortfalls will exist. NATO’s New Strategic Concept, adopted in Rome in 1991, places greater emphasis on crisis management and conflict prevention and outlines the characteristics of the force structure. 
Key features include (1) smaller, more mobile and flexible forces that can counter multifaceted risks, possibly outside the NATO area; (2) fewer troops stationed away from their home countries; (3) reduced readiness levels for many active units; (4) emphasis on building up forces in a crisis; (5) reduced reliance on nuclear weapons; and (6) immediate and rapid reaction forces, main defense forces (including multinational corps), and augmentation forces. Although NATO has not defined exactly the type and amount of equipment and training needed, it has encouraged nations to invest in transport, air refueling, and reconnaissance aircraft and improved command and control equipment, among other items. NATO’s force-planning and goal-setting process involves two interrelated phases that run concurrently: setting force goals and responding to a defense planning questionnaire. The force goals, which are developed every 2 years, define NATO’s requirements. The major NATO commanders propose force goals for each nation based on command requirements. Each nation typically has over 100 force goals. NATO and national officials frequently consult one another while developing force goals and national defense plans. NATO commanders are unlikely to demand that member nations establish units or acquire equipment they do not have. In its annual response to NATO’s defense planning questionnaire, each member verifies its commitment for the previous year, defines its commitment for the next year, and lays out plans for the following 5 years. Alliance members review each nation’s questionnaire and, in meetings, can question national plans and urge member nations to alter their plans. After finishing their reviews, generally in October or November, NATO staff write a report summarizing each nation’s plans and assessing national commitments to NATO. Once NATO members approve this report, it becomes the alliance’s consensus view on each country’s strengths and weaknesses and plan to support the force structure. It is through this process that NATO determines what shortfalls exist, for example, in combat support and combat service support capabilities. According to U.S. officials, NATO is preparing several reports to be presented for approval at the defense ministerial meetings in December 1997. One report will discuss the additional military capability requirements existing alliance members will face as a result of the alliance’s enlargement. According to officials at the U.S. mission and Supreme Headquarters Allied Powers Europe, it is unlikely that any additional military capability requirements will be placed on NATO members over and above the force goals they have already agreed to provide. In other words, if current force goals are attained, NATO will have sufficient resources to respond to likely contingencies in current and new member countries. Therefore, it can be concluded that although enlargement of the alliance is another reason for current allies to attain their force goals, it will not add any new, unknown costs to existing members’ force plans. Other reports resulting from this process will discuss the requirements for commonly funded items in the new nations and their estimated costs. These items include infrastructure that will enable the new allies to receive NATO reinforcements in times of crisis, communication systems between NATO and their national headquarters, and a tie-in to NATO’s air defense system. 
How these projects will be financed by NATO, for example, whether they will be financed within existing budgets or by increasing the size of NATO's common budgets, will not be determined until June 1998. Therefore, the impact of these costs on the U.S. contributions to NATO's common budgets and the U.S. budget will be unknown until next spring. Another report will present an assessment of the capabilities and shortfalls in the military forces of Poland, Hungary, and the Czech Republic. NATO does not and will not estimate the costs of the shortfalls of either the current or the new member states, but once these shortfalls are identified, cost estimates can be made by others. However, even though new members' capabilities and shortfalls will be identified in December, these countries' force goals will not be set until the spring. These force goals will, in effect, be a roadmap for the new members on how to address their shortfalls. (See app. I for a timeline illustrating these events.) When the DOD, CBO, and Rand studies were completed, many key cost determinants had not been established. Consequently, each study made a series of key assumptions that had important implications for each study's results. DOD made the following key assumptions: Specific nations would be invited to join NATO in the first round of enlargement. NATO would continue to rely on its existing post-Cold War strategy to carry out its collective defense obligations (that is, each member state would have a basic self-defense capability and the ability to rapidly receive NATO reinforcements). NATO would not be confronted by a significant conventional military threat for the foreseeable future, and such a threat would take many years to develop. NATO would continue to use existing criteria for determining which items would be funded in common and which costs would be allocated among members. Using these assumptions, DOD estimated the cost of enlarging NATO would range from about $27 billion to $35 billion from 1997 to 2009. The estimate was broken down as follows: about $8 billion to $10 billion for improvements in current NATO members' regional reinforcement capabilities, such as developing mobile logistics and other combat support capabilities; about $10 billion to $13 billion for restructuring and modernizing new members' militaries (for example, selectively upgrading self-defense capabilities); and about $9 billion to $12 billion for costs directly attributable to NATO enlargement (for example, costs of ensuring that current and new members' forces are interoperable and capable of combined NATO operations and of upgrading or constructing facilities to receive NATO reinforcements). DOD estimated the U.S. share of these costs would range from about $1.5 billion to $2 billion—averaging $150 million to $200 million annually from 2000 to 2009. The estimated U.S. share chiefly consisted of a portion of direct enlargement costs commonly funded through NATO's Security Investment Program. DOD assumed that the other costs would be borne by the new members and other current member states and concluded that they could afford these costs, although this would be challenging for new members. (See app. II.)
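As a quick illustration of how these figures fit together, the short calculation below checks that the three cost categories sum to DOD's overall range and converts the estimated U.S. share into the annual average cited above. It is purely arithmetic on the numbers already stated in this statement, not additional data.

```python
# Check that DOD's three cost categories roughly sum to its overall estimate,
# and express the estimated U.S. share as an annual average over 2000-2009 (10 years).
low = 8 + 10 + 9      # reinforcement + new-member modernization + direct enlargement, $ billions
high = 10 + 13 + 12
print(f"Category totals: ${low}-${high} billion (DOD's overall range: $27-$35 billion)")

us_share_low, us_share_high = 1.5e9, 2.0e9
years = 10            # 2000 through 2009
print(f"U.S. share per year: ${us_share_low / years / 1e6:.0f}-${us_share_high / years / 1e6:.0f} million")
```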
In our review of DOD's study of NATO enlargement, we (1) assessed the reasonableness of DOD's key assumptions, (2) attempted to verify pricing information used as the basis for estimating enlargement costs, (3) looked into whether certain cost categories were actually linked to enlargement, and (4) identified factors excluded from the study that could affect enlargement costs. We concluded that DOD's assumptions were reasonable. The assumption regarding the threat was probably the most significant variable in estimating the cost of enlargement. Based on information available to us, we concluded that it was reasonable to assume the threat would be low and there would be a fairly long warning time if a serious threat developed. This assumption, and the assumption that the post-Cold War strategic concept would be employed, provided the basis for DOD's judgments concerning required regional reinforcement capabilities, new members' force modernization, and to a large extent those items categorized as direct enlargement costs. DOD also assumed that during 1997-2009, new members would increase their real defense spending at an average annual rate of 1 to 2 percent. Both private and government analysts project gross domestic product (GDP) growth rates averaging 4 to 5 percent annually for the Czech Republic, Hungary, and Poland during 1997-2001. Thus, projected increases in defense budgets appear affordable. Analysts also point out that potential new member countries face real fiscal constraints, especially in the short term. An increase in defense budgets at the expense of pressing social concerns becomes a matter of setting national priorities, which are difficult to predict. If these countries' growth rates do not meet expectations, their ability to increase real defense spending becomes more problematic. DOD further assumed that current NATO members would on average maintain constant real defense spending levels during 1997-2009. Analysts have expressed somewhat greater concern about this assumption and generally consider it to be an optimistic but reasonable projection. Some analysts indicated that defense spending in some current member states may decline further over the next several years. Such declines would partly be due to economic requirements associated with entry into the European Monetary Union. Despite our conclusion that DOD's underlying assumptions were sound, for several reasons we concluded that its estimates are quite speculative. First, DOD's prices for many individual cost elements were "best guesses" and lacked supporting documentation. This was the case for all three categories of costs: direct enlargement costs, current members' reinforcement enhancements, and new members' modernization requirements. Most of the infrastructure upgrade and refurbishment cost estimates were based on judgments. For example, DOD's estimate of $140 million to $240 million for upgrading a new member's existing air base into a NATO collocated operating base was not based on surveys of actual facilities but on expert judgment. We were told that the actual cost could easily be double—or half—the estimate. DOD's estimated costs for training and modernization were notional, and actual costs may vary substantially. DOD analysts did not project training tempos and specific exercise costs. Instead, they extrapolated U.S. and NATO training and exercise costs and evaluated the results from the point of view of affordability.
DOD’s estimate for modernization and restructuring of new members’ ground forces was also notional and was based on improving 25 percent of the new members’ forces. However, it did not specify what upgrades would be done and how much they would cost. Second, we could find no linkage between DOD’s estimated cost of $8 billion to $10 billion for remedying current shortfalls in NATO’s reinforcement capabilities and enlargement of the alliance. Neither DOD nor NATO could point to any specific reinforcement shortfalls that would result from enlargement that do not already exist. However, existing shortfalls could impair the implementation of NATO’s new strategic concept. DOD officials told us that while reinforcement needs would not be greater in an enlarged NATO, enlargement makes eliminating the shortfalls essential. This issue is important in the context of burdensharing because DOD’s estimate shows that these costs would be covered by our current NATO allies but not shared by the United States. Finally, NATO has yet to determine what military capabilities, modernization, and restructuring will be sought from new members. Consequently, DOD had little solid basis for its $10 billion to $13 billion estimate for this cost category. Moreover, DOD and new member governments have noted that new members are likely to incur costs to restructure and modernize their forces whether or not they join NATO. Indeed, some countries have indicated that they may need to spend more for these purposes if they do not become NATO members. DOD showed these costs as being covered entirely by the new members. NATO enlargement could entail costs in addition to those included in DOD’s estimates, including costs for assistance to enhance the PFP or other bilateral assistance for countries not invited to join NATO in July 1997. In addition, the United States may provide assistance to help new members restructure and modernize their forces. For example, Polish officials said they may need up to $2 billion in credits to buy multipurpose aircraft. While not an added cost of enlargement, such assistance would represent a shift in the cost burden from the new member countries to the countries providing assistance. DOD did not include such costs in its estimate of the U.S. share, though it acknowledged that the cost was possible. Moreover, U.S. and NATO officials have stated that additional countries may be invited to join NATO in the future, most likely in 1999. DOD’s cost estimate did not take into account a second or third round of invitations. If additional countries are invited, cost of enlargement would obviously increase. CBO and Rand estimated the cost of incorporating the Czech Republic, Hungary, Poland, and Slovakia into NATO. They based their estimates on a range of NATO defense postures, from enhanced self-defense with minimal NATO interoperability to the forward stationing of NATO troops in new member states. However, they also noted that the current lack of a major threat in Europe could allow NATO to spend as little as it chose in enlarging the alliance. Because of the uncertainties of future threats, and the many possible ways to defend an enlarged NATO, CBO examined five illustrative options to provide such a defense. Each option built on the pervious one in scope and cost. CBO estimated that the cost of the five options over the 15-year period would range from $61 billion to $125 billion. Of that total, CBO estimated that the United States might be expected to pay between $5 billion and $19 billion. 
CBO included in its range of options a $109-billion estimate that was predicated on a resurgent Russian threat, although it was based on a self-defense and reinforcement strategy similar to that used by DOD. Of this $109 billion, CBO estimated that the United States would pay $13 billion. Similarly, Rand developed estimates for four options to defend an enlarged NATO that build upon one another, from only self-defense support at a cost of $10 billion to $20 billion to the forward deployment of forces in new member states at a cost of $55 billion to $110 billion. These options include a middle option that would cost about $42 billion that was also based on a self-defense and reinforcement strategy. Rand estimated that the United States would pay $5 billion to $6 billion of this $42 billion in total costs. Several factors account for the differences between DOD's estimates and the CBO and Rand estimates, even those that employed defense strategies similar to DOD's. (App. III illustrates the major results and key assumptions of the three estimates.) CBO's cost estimate is significantly higher than DOD's for the following reasons: DOD assumed reinforcements of 4 divisions and 6 wings, whereas CBO assumed a force of 11-2/3 divisions and 11-1/2 wings and a much larger infrastructure for this force in the new member states. CBO's modernization costs are much higher than DOD's and include the purchase of 350 new aircraft and 1,150 new tanks for the new member states. DOD assumed that about 25 percent of the new member states' ground forces would be modernized through upgrades and that each nation would procure a single squadron of refurbished Western combat aircraft. CBO assumed much higher training costs, $23 billion, which include annual, large-scale combined exercises. DOD included $2 billion to $4 billion for training. CBO included the purchase of Patriot air defense missiles at a cost of $8.7 billion, which is considerably higher than DOD's assumed purchase of refurbished I-HAWK type missiles at $1.9 billion to $2.6 billion. CBO's infrastructure costs were much higher than DOD's and included new construction, such as extending the NATO fuel pipeline, which CBO assumed would meet U.S. standards. DOD assumed planned refurbishment of existing facilities that would meet minimal wartime standards. Rand's cost estimate is somewhat higher than DOD's, although both were based on similar threat assessments. First, its reinforcement package was larger—5 divisions and 10 wings—and therefore infrastructure costs were higher. Second, it assumed new members would purchase the more expensive Patriot air defense system rather than the refurbished I-HAWKs. Finally, it assumed greater training costs than did DOD. The author of the Rand study stated that if he had used DOD's assumptions, the cost range would have been almost identical to DOD's. Mr. Chairman, this concludes our prepared remarks. We would be happy to answer any questions you or the Committee members may have.
Appendix I presents a timeline of the following key events in the enlargement process:
NATO issues study on enlargement.
NATO issues invitations to Poland, Hungary, and the Czech Republic to begin accessions talks.
NATO prepares several reports: additional military capability requirements for existing alliance members that will result from the alliance's enlargement; requirements for commonly funded items in the new member nations, including infrastructure that will enable the new allies to receive NATO reinforcements in times of crisis, communication systems between NATO and their national headquarters, and a tie-in to NATO's air defense system; cost estimates for items eligible for common funding presented by NATO officials; and the capabilities and shortfalls in the military forces of Poland, Hungary, and the Czech Republic.
NATO defense ministerial meeting to approve the above reports.
New members' force goals set.
NATO decides whether or how much to increase the common budgets, which would then be shared among current and new members.
Target date for new member accession into NATO.
Appendix III summarizes the three studies' total cost estimates: DOD, $27-$35 billion in constant 1997 dollars; CBO, $61-$125 billion in constant 1997 dollars ($109 billion for a defense strategy similar to DOD's); and Rand, $10-$110 billion in constant 1996 dollars ($42 billion for a defense strategy similar to DOD's).
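To put the three studies' U.S. shares on a comparable footing, the sketch below computes the implied U.S. percentage of each study's total estimate, using only the ranges quoted above and pairing the low ends and high ends of each range; this simplification, and the grouping itself, are ours rather than the studies'.

```python
# Implied U.S. share of each study's total cost estimate, using the ranges quoted above
# (DOD: $27-$35B total, $1.5-$2B U.S.; CBO: $61-$125B total, $5-$19B U.S.;
#  Rand "middle" option: ~$42B total, $5-$6B U.S.). Low/low and high/high are paired.
estimates = {
    "DOD": ((27, 35), (1.5, 2)),
    "CBO": ((61, 125), (5, 19)),
    "Rand (middle option)": ((42, 42), (5, 6)),
}
for study, ((t_lo, t_hi), (us_lo, us_hi)) in estimates.items():
    print(f"{study}: U.S. share roughly {us_lo / t_lo * 100:.0f}% to {us_hi / t_hi * 100:.0f}% of total")
```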
Pursuant to a congressional request, GAO provided information on issues related to the cost and financial obligations of expanding the North Atlantic Treaty Organization (NATO), focusing on: (1) current U.S. costs to support NATO's common budgets and other funding that supports relations with central and east European nations and promotes NATO enlargement; (2) NATO's defense planning process, which will form the basis for more definitive cost estimates for an enlarged alliance; and (3) GAO's evaluation of the recent Department of Defense (DOD) study of NATO expansion and a comparison of DOD's study with studies of the Congressional Budget Office (CBO) and the Rand Corporation. GAO noted that: (1) the ultimate cost of NATO enlargement will be contingent on several factors that have not yet been determined; (2) NATO has yet to formally define its future: (a) strategy for defending the expanded alliance; (b) force and facility requirements of the newly invited states; and (c) how costs of expanding the alliance will be financed; (3) also unknown is the long-term security threat environment in Europe; (4) NATO's process for determining the cost of enlargement is under way and expected to be completed by June 1998; (5) in fiscal year 1997, the United States contributed about $470 million directly to NATO to support its three commonly funded budgets, the NATO Security Investment Program (NSIP), the military budget, and the civil budget; (6) this is about 25 percent of the total funding for these budgets; (7) it is through proposed increases to these budgets, primarily the NSIP and to a lesser extent the civil budget, that most of the direct cost of NATO enlargement will be reflected and therefore where the United States is likely to incur additional costs; (8) over $120 million was programmed in fiscal year 1997 for Warsaw Initiative activities in the three countries that are candidates for NATO membership and other Partnership for Peace (PFP) countries; (9) this money was provided to help pay for Foreign Military Financing grants and loans, exercises, and other PFP-related activities; (10) funding for these activities will continue, but the allocation between the candidates for NATO membership and all other PFP participants may change over time; (11) this funding is strictly bilateral assistance that may assist the candidate countries and other countries participating in PFP to meet certain NATO standards, but it is not directly related to NATO decisions concerning military requirements or enlargement; (12) GAO's analysis of DOD's cost estimate to enlarge NATO indicates that its key assumptions were generally reasonable and were largely consistent with the views of U.S., NATO, and foreign government officials; (13) the assumption that large-scale conventional security threats will remain low significantly influenced the estimate; (14) DOD's lack of supporting cost documentation and its decision to include cost elements that were not directly related to enlargement call into question its overall estimate; (15) because of the uncertainties associated with enlargement and DOD's estimating procedures, the actual cost of NATO enlargement could be substantially different from DOD's estimated cost of about $27 billion to $35 billion; and (16) Rand and CBO cost estimates are no more reliable than DOD's.
You are an expert at summarizing long articles. Proceed to summarize the following text: Roughly half of all workers participate in an employer-sponsored retirement or pension plan. Private sector pension plans are classified either as defined benefit (DB) or as defined contribution (DC) plans. DB plans promise to provide, generally, a fixed level of monthly retirement income that is based on salary, years of service, and age at retirement, regardless of how the plan investments perform. In contrast, benefits from DC plans are based on the contributions to and the performance of the investments in individual accounts, which may fluctuate in value. Examples of DC plans include 401(k) plans, employee stock ownership plans, and profit-sharing plans. The most dominant and fastest growing DC plans are 401(k) plans, which allow workers to choose to contribute a portion of their pretax compensation to the plan under section 401(k) of the Internal Revenue Code. IRAs were established under the Internal Revenue Code provisions of the Employee Retirement Income Security Act of 1974 (ERISA). ERISA was generally enacted to protect the interests of employee benefit plan participants and their beneficiaries by requiring the disclosure to them of financial and other information concerning the plan; by establishing standards of conduct for plan fiduciaries; and by providing for appropriate remedies and access to the federal courts. To give IRAs flexibility in accumulating assets for retirement, Congress designed a dual role for these accounts. The first role is to provide individuals not covered by employer-sponsored retirement plans an opportunity to save for retirement on their own in tax-deferred accounts. The second role was to give retiring workers or individuals changing jobs a way to preserve assets in employer-sponsored retirement plans by allowing them to roll over or transfer plan balances into IRAs. Over the past 30 years, Congress has created several types of IRAs designed with different features for individuals and small businesses. The types of IRAs geared toward individuals are: Traditional IRAs: Traditional IRAs allow individuals to defer taxes on investment earnings accumulated in these accounts until distribution at retirement. Eligible individuals may make tax-deductible contributions of earned income to these accounts. Other individuals may make nondeductible contributions to receive the tax deferral on earnings. Yearly contribution amounts are subject to limits based on income, pension coverage, and filing status. Taxpayers over age 70½ cannot contribute and must begin required minimum distributions from these accounts. Withdrawals are generally taxable; and early distributions made before age 59½, other than for specific exceptions, are subject to a 10 percent additional income tax. Roth IRAs: In the Taxpayer Relief Act of 1997, Congress created the Roth IRA, which allows eligible individuals to make after-tax contributions to these accounts. After age 59½, enrollees may take tax-free distributions of their investment earnings. Withdrawals of investment earnings before age 59½ are subject to a 10 percent additional income tax and other taxes. Yearly contribution amounts are subject to limits based on income and filing status. There are no age limits on contributing, and no distributions are required during the Roth IRA owner’s lifetime. 
Withdrawals are generally tax-free after age 59½, as long as the taxpayer held the account for 5 years; early distributions other than for specific exceptions are subject to an additional 10-percent income tax. Traditional and Roth IRAs can also be established as payroll-deduction IRAs, which requires employer involvement. Payroll-deduction IRA Programs (also called payroll-deduction IRAs): Through payroll-deduction IRAs, employees may establish either traditional or Roth IRAs, and employees may contribute to these accounts through voluntary deductions from their pay, which are forwarded by the employer to the employee's IRA. As long as employers follow guidelines set by Labor for managing the payroll-deduction IRA, employers are not subject to the fiduciary requirements in ERISA Title I that apply to employer-sponsored retirement plans, like 401(k) plans. Other types of IRAs that are intended to encourage savings through employers include: SEP IRAs: In the Revenue Act of 1978, Congress established SEP IRAs, which were designed with fewer regulatory requirements than traditional employer pension plans to encourage small employers to offer retirement plans to their workers. SEP IRAs allow employers to make tax deductible contributions to their own and each eligible employee's account. SEP IRAs have higher contribution limits than other IRAs, but they do not permit employee contributions. Yearly contributions are not mandatory, but as with pension plans, they must be based on a written allocation formula and cannot discriminate in favor of highly-compensated employees. SIMPLE IRAs: In the Small Business Job Protection Act of 1996, Congress created SIMPLE IRAs to help employers with 100 or fewer employees more easily provide a retirement savings plan to their employees. In this plan, eligible employees can direct a portion of their salary, within limits, to a SIMPLE IRA and employers may either match the employees' contribution up to 3 percent or make nonelective, 2 percent contributions of each employee's salary for all employees making at least $5,000 for the year. This IRA replaced the Salary Reduction Simplified Employee Pension IRA (SAR-SEP IRA)—a tax-deferred retirement plan provided by sole proprietors or small businesses with fewer than 25 employees. New SAR-SEP IRAs could not be established after December 31, 1996, but plans in operation at that time were allowed to continue. Each of these IRAs has its own eligibility requirements, as shown in table 1. Labor's Employee Benefits Security Administration (EBSA) shares responsibility for overseeing the IRA component of ERISA with IRS. EBSA enforces Title I of ERISA, which specifies, among other standards, certain fiduciary and reporting and disclosure requirements and seeks to ensure that fiduciaries operate their plans in the best interest of plan participants. IRS enforces Title II of ERISA, which provides, among other standards, tax benefits for plan sponsors and participants, including participant eligibility, vesting, and funding requirements. IRA assets have surpassed DC plan assets and DB plan assets, but the majority of assets that flow into IRAs come from assets being rolled over from other accounts, not from contributions. We also found that IRA ownership is associated with higher education and higher income levels. The percentage of households that own IRAs is similar to the percentage that participate in 401(k) plans, and total contributions to IRAs are lower than contributions to 401(k) accounts.
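To make the tax-treatment difference between traditional and Roth IRAs described above concrete, the sketch below compares the two under simplified, assumed conditions: a single contribution, flat tax rates, and a fixed annual return. The specific rates, amounts, and horizon are illustrative and are not drawn from this report.

```python
# Illustrative comparison of a traditional (pre-tax, taxed at withdrawal) versus a
# Roth (after-tax, tax-free withdrawal) IRA contribution. All inputs are assumptions.
def traditional_after_tax(pretax_amount, annual_return, years, tax_rate_at_withdrawal):
    # The full pre-tax amount grows tax-deferred; the balance is taxed on withdrawal.
    balance = pretax_amount * (1 + annual_return) ** years
    return balance * (1 - tax_rate_at_withdrawal)

def roth_after_tax(pretax_amount, annual_return, years, tax_rate_today):
    # The contribution is made with after-tax dollars; growth and withdrawals are tax-free.
    contribution = pretax_amount * (1 - tax_rate_today)
    return contribution * (1 + annual_return) ** years

pretax, r, years = 4_000, 0.06, 25   # hypothetical contribution, return, and horizon
print(round(traditional_after_tax(pretax, r, years, tax_rate_at_withdrawal=0.15)))
print(round(roth_after_tax(pretax, r, years, tax_rate_today=0.25)))
# If the tax rate at withdrawal is lower than the rate today, the traditional IRA leaves
# more after tax; if the two rates are equal, the two accounts come out the same.
```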
In addition, there are key differences between the structure of employer-sponsored IRAs and that of 401(k)s. Since 1998, IRA assets have comprised the largest portion of the retirement market. As shown in figure 1, in 2004, IRA assets totaled about $3.5 trillion compared to DC assets of $2.6 trillion and DB assets of $1.9 trillion. Most assets flowing into IRAs come from the transfer of retirement assets between IRAs or from other retirement plans, including 401(k) plans, not from contributions. These “rollovers” allow individuals to preserve their retirement savings when they change jobs or retire. As shown in figure 2, from 1998 to 2004, over 80 percent of funds flowing into IRAs came from rollovers, demonstrating that IRAs play a smaller role in building retirement savings than they play in preserving retirement savings. IRA accounts that contain rollover assets also exceeded those without rollover assets. For example, in 2007, the median amount in a traditional IRA with rollover assets was $61,000, while the median amount in a traditional IRA without rollover assets was $30,000. Traditional and Roth IRA ownership is associated with higher education and income levels. In 2004, 59 percent of IRA households were headed by an individual with a college degree, and only about 3 percent were headed by an individual with no high school diploma. Over one-third of these IRA households earned $100,000 or more, and less than 2 percent earned less than $10,000. Households with IRAs also tend to own their homes. Research shows that higher levels of education and household income correlate with a greater propensity to save. Therefore, it is not surprising that IRA ownership increases as education and income levels increase. Lastly, IRA ownership is highest among households headed by individuals aged 45 to 54. More households own traditional IRAs, which were the first IRAs established, than Roth IRAs or employer-sponsored IRAs. In 2007, nearly 33 percent of all households owned traditional IRAs, and about 15 percent owned Roth IRAs. In contrast, about 8 percent of households participated in employer-sponsored IRAs. The percentage of households that own IRAs is similar to the percentage that own 401(k)s, but IRA contributions are less than 401(k) contributions. In 2004, 29 percent of households owned individually arranged IRAs, and 26 percent participated in 401(k) plans (see fig. 3). Ten percent of households own a traditional or Roth IRA and participate in 401(k) plans. Although contributions to both 401(k) plans and IRAs increased from 2002 to 2004, 401(k) contributions were almost four times greater than those made to IRAs. Few studies have been done that have compared contributions by IRA owners and 401(k) participants. However, one study assessed the consistency of taxpayer annual contributions to traditional IRAs and to 401(k) plans from tax years 1999 to 2002. As shown in figure 4, the study found that only 1.4 million taxpayers contributed to their traditional IRAs in all 4 years, while nearly 16 million taxpayers contributed to their 401(k) accounts in the same time period. The study found that the persistency in making IRA contributions may partially be attributed to limits in the tax deductions some owners could take for their contributions. Certain criteria, including age, income, tax filing status, and coverage in a work-based retirement plan, affect the tax deduction taxpayers could take for contributing to an IRA. 
In addition, a study by the Investment Company Institute that included data on contributions by IRA owners shows that more households with Roth IRAs or employer-sponsored IRAs contribute to their accounts than households with traditional IRAs. For example, in 2004, more than half of households with Roth, SAR-SEP, or SIMPLE IRAs contributed to their accounts, but less than one-third of households with traditional IRAs contributed to their accounts. This, again, may be partly attributed to the emerging role of traditional IRAs as a means to preserve rollover assets more than to build retirement savings. The Investment Company Institute study also stated that the median household contribution to traditional IRAs was $2,300 compared to the median contribution to Roth IRAs of $3,000. The median contribution to SAR-SEP and SIMPLE IRAs was $5,000. The study noted that this difference may be related to the higher contribution limits for employer-sponsored IRAs than for traditional IRAs and Roth IRAs. Table 2 shows contribution limits for the current tax year. Comprehensive comparisons between IRAs and 401(k) plans are difficult because of differences in plan structures. 401(k) plans are sponsored by employers, whereas most households with IRAs own traditional IRAs established outside of the workplace. In addition, most of the assets in IRAs are in traditional IRAs that are set up by individuals and provide individual investors with a vehicle to contribute to their own retirement savings. Employer-sponsored IRAs, such as SIMPLE and SEP, were established for small employers who lack the resources to provide a 401(k) plan. In addition, payroll-deduction IRA programs enable small employers to provide employees the opportunity to save for retirement. Key differences exist between employer-sponsored IRAs and 401(k) plans, as shown in table 3. Several barriers may discourage small employers from offering payroll-deduction and employer-sponsored IRAs to their employees. Although employer-sponsored IRAs were designed with fewer reporting requirements to encourage small employers to offer them, few employers appear to do so. In addition, few employers appear to offer payroll-deduction IRA programs. Retirement and savings experts said payroll-deduction IRAs could help many workers save for retirement and these IRAs may be the easiest way for small employers to offer a retirement savings opportunity to their employees. Several barriers, including costs, may discourage employers from offering them; however, information is lacking on the actual costs to employers. In addition, several experts raised questions on how expanded payroll-deduction IRAs may affect employees. Employer-sponsored IRAs offer greater savings opportunities than payroll-deduction IRAs, but employer sponsorship of IRAs may also be hindered by costs, including required employer contributions. Retirement and savings experts offered several legislative proposals to encourage employers to offer and employees to participate in IRAs, but limited government actions have been taken to increase the number of employers sponsoring employer-sponsored IRAs. Employees of small firms are more likely to lack access to a retirement plan at work than employees of larger firms, and several barriers may limit small employers from offering payroll-deduction programs and employer-sponsored IRAs to their employees.
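To make the employer-contribution side of these costs concrete, the sketch below applies the SIMPLE and SEP rules cited in this report for 2007 (a dollar-for-dollar match on employee deferrals up to 1 to 3 percent of pay or a 2 percent nonelective contribution for SIMPLE IRAs, and employer-only SEP contributions capped at the lesser of 25 percent of compensation or $45,000). The example salary and employee deferral are hypothetical.

```python
# Employer contribution rules for SIMPLE and SEP IRAs, using the 2007 figures cited in this report.
def simple_employer_contribution(salary, employee_deferral, use_match=True, match_rate=0.03):
    # SIMPLE: either match employee contributions dollar-for-dollar up to the match rate
    # (which may be set as low as 1 percent in some years), or make a 2 percent
    # nonelective contribution for every eligible employee.
    if use_match:
        return min(employee_deferral, match_rate * salary)
    return 0.02 * salary

def sep_max_employer_contribution(salary, dollar_limit=45_000):
    # SEP: employer-only contributions, capped at the lesser of 25 percent of compensation
    # or the annual dollar limit.
    return min(0.25 * salary, dollar_limit)

salary = 40_000                      # hypothetical employee compensation
deferral = min(3_000, 10_500)        # hypothetical SIMPLE deferral, within the 2007 employee limit
print(simple_employer_contribution(salary, deferral))           # 1,200 with a 3 percent match
print(simple_employer_contribution(salary, deferral, False))    # 800 nonelective
print(sep_max_employer_contribution(salary))                    # 10,000 maximum SEP contribution
```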
Although IRAs have been largely successful at helping individuals preserve their retirement savings through rollovers, experts told us that IRA participation falls short of Congress’ first goal for creating IRAs—to provide a tax-preferred account for workers without employer-sponsored retirement plans to save for their retirement. For example, millions of employees of small firms lack access to a workplace retirement plan. The Congressional Research Service found that private-sector firms with fewer than 100 employees employed about 30.9 million full-time workers between the ages of 25 and 64 in 2006. About 19.9 million of those workers lacked access to an employer- sponsored retirement plan, as shown in figure 5. To address the issue of low retirement plan sponsorship among small employers, Congress created SEP and SIMPLE employer-sponsored IRAs, and has encouraged employers not offering a retirement plan to offer payroll-deduction IRAs. These IRAs were designed to have fewer and less burdensome reporting requirements than 401(k) plans to encourage participation, and payroll-deduction IRA programs do not have any employer reporting requirements. Payroll-deduction and employer- sponsored IRAs offer several advantages, as shown in table 4. Labor issued a regulation under which an employer could maintain a payroll deduction program for employees to contribute to their IRAs without being considered a pension plan under ERISA. Through payroll- deduction IRAs, an employer withholds and forwards an amount determined by the employee directly to an IRA (traditional or Roth) established by the employee. Although any employer can provide payroll- deduction IRAs to their employees, regardless of whether or not they offer another retirement plan, retirement and savings experts told us that very few employers offer their employees the opportunity to contribute to IRAs through payroll deduction. Further, Labor and IRS officials told us that data is limited on how many employers offer payroll-deduction IRAs. Because there are no reporting requirements for payroll-deduction IRAs, and very limited reporting requirements for employer-sponsored IRAs—as discussed later in this report—we were unable to determine exactly how many employers offer these IRAs to their employees. For example, because an employer’s responsibility with payroll-deduction IRAs is to forward employee contributions to IRAs, employers are not required to report to the federal government that they are providing this service to employees. Consequently, neither Labor nor IRS is able to determine how many employers offer payroll-deduction IRAs. Employee access to SIMPLE and SEP IRAs also appears limited. SIMPLE IRAs are only available to firms with 100 employees or fewer who do not already offer another retirement plan; and SEP IRAs are available to employers of any size, including those who may offer either a DC or DB plan. The Bureau of Labor Statistics reported that, in 2005, 8 percent of private sector workers in firms with fewer than 100 employees participated in a SIMPLE IRA, and 2 percent of workers participated in a SEP IRA. An IRS evaluation of employer-filed W-2 forms estimated that in 2004, 190,000 employers sponsored SIMPLE IRAs. However, officials told us that this figure was likely understated, as it does not include accounts that may be owned by sole proprietors or individuals who own unincorporated businesses by themselves, who are not required to file W-2 forms. 
GAO was unable to determine the number of employers sponsoring SEP plans, but IRS data from 2002 show more taxpayers owned SEP than SIMPLE IRAs, with 3.5 million SEP accounts compared to 2 million SIMPLE accounts. Retirement and savings experts reported that increased worker access to payroll-deduction IRAs could help many workers to save for retirement at work. Through payroll-deduction IRA programs, employees may either contribute to traditional or Roth IRAs, depending on the eligibility requirements of these plans. Any individual under the age of 70½ with taxable compensation may contribute to a traditional IRA, and many individuals could receive a tax deduction for their contribution. Most low- and middle-income individuals are eligible to contribute to Roth IRAs. In theory, all of the estimated 20 million employees of small firms mentioned previously who lacked an employer-sponsored retirement plan in 2006 could be eligible to contribute to a traditional IRA through payroll- deduction; and many of these individuals would be eligible to claim a tax deduction for their contribution. According to Labor’s guidance on payroll-deduction IRAs and several experts we interviewed, individuals are more likely to save in IRAs through payroll deductions than they are without deductions. Payroll deductions are a key feature in 401(k) and other DC plans. Economics literature that we reviewed identifies payroll deduction as a key factor in the success of 401(k) plans, and participation in these plans is much higher than in IRAs, which do not typically use payroll deduction. According to the Congressional Budget Office, in 2003, 29 percent of all workers contributed to a DC plan, while only 7 percent of all workers contributed to an IRA. According to recent economics literature that we reviewed, several papers point to the importance of employment-based defaults, employer endorsements, and advice from peers as factors that may influence an employee’s decision to participate in a retirement plan. The influential role that employers may have in an employee’s decision to participate in a workplace plan may encourage some employees to also participate in payroll-deduction IRAs. Payroll deduction facilitates retirement savings by addressing key behavioral barriers of procrastination and inertia, or a lack of action, according to economics literature that we reviewed and experts we interviewed. Although many individuals have intentions to save for retirement, some may procrastinate because retirement is seen as a remote event and more immediate expenses take precedence. Some individuals also experience inertia because they lack knowledge on how to save or have difficulty making decisions with a number of complex options. Literature that we reviewed states that payroll deduction gives employees a “commitment device” to help them automatically contribute to retirement before wages are spent, relieving them of making ongoing decisions to save. Retirement and savings experts and representatives of small business and consumer groups told us payroll-deduction IRAs are the easiest way for small employers to offer their employees a retirement savings vehicle. According to Labor publications and experts, payroll-deduction IRAs provide employers with a low-cost retirement benefit for their employees, because these IRAs do not permit employer contributions. 
Payroll- deduction IRAs also have fewer requirements for employee communication than SIMPLE and SEP IRAs, and employers are not subject to ERISA fiduciary responsibilities so long as they meet the conditions in Labor’s regulation and guidance for managing these plans. Finally, payroll-deduction IRAs allow employers to select a single IRA provider to service the accounts to keep administrative costs down and simplify the process for employees. Despite these advantages, payroll-deduction IRAs may present several limitations which discourage employers from offering a payroll-deduction IRA program, including: (1) costs to small employers for setting up payroll deductions, (2) lack of flexibility to promote payroll-deduction IRAs to employees, (3) lack of incentives to employers, and (4) lack of awareness about how these IRAs work. Costs to employers. Additional administrative costs associated with setting up and managing payroll-deduction IRAs may be a barrier for small employers, particularly for those without electronic payroll processing. According to Labor, costs to employers are significantly influenced by the number of IRA providers an employer must remit contributions to on behalf of employees. As such, Labor’s guidance allows employers to select a single IRA provider for all employees. Also, under Labor’s guidance, an IRA sponsor may reimburse the employer for the actual costs of operating a payroll-deduction IRA as long as such costs do not include profit to the employer. Small business groups told us that costs could also be influenced by the number of employees participating in the program and whether an employer has a payroll processing system in place to make automatic deductions and direct deposits to employee accounts. Several experts told us that many small employers lack electronic, or automatic, payroll systems, and these employers would be subject to higher management costs for offering payroll-deduction IRAs. Moreover, representatives from small business groups and other experts told us that providing health care insurance is a more pressing issue to many small employers than providing a retirement savings opportunity. Although experts reported that payroll-deduction IRAs represent costs to employers, we found that opinions on the significance of those costs varied. Experts advocating for expanded payroll-deduction IRAs reported that most employers would incur little to no costs since most employers already make payroll deductions for Social Security and Medicare, as well as federal, state, and local taxes. According to these experts, payroll- deduction IRAs function similarly to existing payroll tax withholdings and adding another deduction would not be a substantial requirement. However, other experts reported that costs to employers may be significant. One report indicated that costs to employers for managing payroll-deduction IRAs were substantial, particularly for employers without electronic payrolls; however, the study did not estimate what the actual costs to employers may be on a per account basis. In our review, we were unable to identify reliable government data on actual costs to small employers. Flexibility to Promote Payroll-Deduction IRAs. According to IRA providers, some employers are hesitant to offer a payroll-deduction IRA program because they find Labor’s guidance limits their ability to effectively publicize the availability of payroll-deduction IRAs to employees for fear of being subject to ERISA requirements. 
Labor officials told us they issued this guidance to make it easier for employers to understand the guidelines to follow in order to maintain the safe harbor that applies to payroll-deduction IRAs. This guidance explains the conditions under which employers can offer payroll-deduction IRAs and not be subject to the ERISA reporting and fiduciary responsibilities, which apply to employer retirement plans, like 401(k) plans. Labor officials said they have not received any feedback from employers or IRA providers on the clarity of the guidance since it was issued in 1999. However, at the time the guidance was issued, some employers had indicated to Labor that they were hesitant to offer payroll-deduction IRAs due to ERISA fiduciary responsibilities. IRA providers told us that employers need greater flexibility in Labor's guidance to promote payroll-deduction IRAs and provide a greater sense of urgency to employees to save for retirement. However, Labor told us that it has received no input from IRA providers as to what that flexibility would consist of, and Labor officials note that Interpretive Bulletin 99-1 specifically provides for flexibility. Lack of savings incentives for small employers. Small business member organizations and IRA providers said that the contribution limits for payroll-deduction IRAs do not offer adequate savings incentives to justify the effort to offer these IRAs. Because the contribution limits to these IRAs are significantly lower than those that apply to SIMPLE and SEP IRAs, employers seeking to provide a retirement plan to their employees would be more likely to choose other options, which allow business owners to contribute significantly more to their own retirement than payroll-deduction IRAs allow. Lack of awareness. One reason payroll-deduction IRA programs have not been widely adopted by employers may be a lack of awareness about how payroll-deduction and other IRAs work. Representatives from small business groups said many small employers are unaware that payroll-deduction IRAs are available or that employer contributions are not required. However, Labor has produced educational materials describing the payroll-deduction and employer-sponsored IRA options available to employers and employees, and one Labor official told us that Labor has received positive feedback from small businesses for their efforts. IRA providers told us they experience challenges in marketing IRAs because varying eligibility requirements make it difficult to communicate IRA benefits to a mass market. Instead, providers said it is more efficient to market IRAs to current customers and focus advertising budgets on capturing rollover IRAs. Some experts questioned whether increased worker access to payroll-deduction IRA programs will in fact lead to increased participation and retirement savings for many workers. For example, IRA providers and experts expressed concerns that low- and moderate-income workers may choose not to participate in payroll-deduction IRAs because they lack discretionary income. Many low- and moderate-income workers are already eligible to contribute to IRAs, but have chosen not to do so because they lack sufficient income to save for retirement. Experts raised doubts that payroll-deduction IRA programs would lead to adequate retirement savings, as low-income individuals would be unable to contribute to these IRAs consistently.
Further, experts said that individuals with low-balance IRAs would be inclined to make early withdrawals and be subject to additional income taxes. Experts also reported that because the incentives for tax-deferred IRA contributions are based on marginal tax rates, lower-income individuals receive a lower immediate tax subsidy than higher income individuals. Two experts told us that policymakers should begin their evaluation of payroll-deduction IRAs by calculating how much savings is required for an adequate standard of living in retirement, and then determine what role payroll- deduction IRAs could play in reaching that level. We found that employer-sponsored SEP and SIMPLE IRAs can help small employers and their workers to save for their retirement, but several factors may discourage small employers from offering these IRAs to their employees. Experts said the higher contribution limits and flexible employer contribution options of SEP and SIMPLE IRAs offer greater savings benefits to employers and employees than payroll-deduction IRAs. For example, the 2007 SIMPLE contribution limit of $10,500 per year for individuals under age 50 is more than twice the amount allowed in 2007 in payroll-deduction IRAs. In 2007, SEP IRAs allowed employers to contribute the lesser of 25 percent of an employee’s compensation or up to $45,000. Moreover, because SIMPLE IRAs require employers to match the contributions of participating employees or to make “nonelective” contributions to all employee accounts, employees are able to save significantly more per year in SIMPLE accounts than they are in payroll- deduction IRAs. Under SEP rules, employers must set up SEP IRAs for all employees working for them in at least 3 of the past 5 years who have reached age 21 and received at least $500 in compensation in 2007, and employees may not contribute to their own accounts. Annual employer contributions are not mandatory; however, if an employer decides to contribute, they must make contributions to the SEP IRAs of all employees performing services in that year. Because annual contributions are not mandatory for SEP IRAs, employers have the flexibility to adjust contributions depending on business revenues. Employers offering SIMPLE IRAs must either make a nonelective contribution of 2 percent of each eligible employee’s compensation or a minimum of a 1 percent match to the SIMPLE IRAs of those employees who choose to contribute to their accounts. Certain factors may limit employer sponsorship of SIMPLE and SEP IRAs. Small business groups told us that the costs of managing SEP and SIMPLE IRAs may be prohibitive for small employers. Experts also pointed out that contribution requirements for SIMPLE and SEP plans may, in some cases, limit employer sponsorship of these plans. For example, because SIMPLE IRAs require employers to make contributions to employee accounts, some small firms may be unable to commit to these IRAs. Small business groups and IRA providers told us that small business revenues are inconsistent and may fluctuate greatly from year to year, making required contributions difficult for some firms. In addition, employers offering SIMPLE IRAs must determine before the beginning of the calendar year whether they will match employee contributions or make nonelective contributions to all employees’ accounts. 
According to IRA providers, this advance-election requirement may discourage some small employers from offering these IRAs; if employers had the flexibility to make additional contributions to employee accounts at the end of the year, they might be encouraged to contribute more. With regard to SEP IRAs, two experts said small firms may be discouraged from offering these plans because of the requirement that employers must set up a SEP IRA for all employees performing service for the company in 3 of the past 5 years and with more than $500 in compensation for 2007. These experts stated that small firms are likely to hire either seasonal employees or interns who may earn more than $500, and these employers may have difficulty finding an IRA provider willing to open an IRA small enough for these temporary or low-earning participants. Retirement and savings experts reported that several legislative proposals could encourage employers to offer and employees to participate in IRAs. While several bills have been introduced in Congress to expand worker access to payroll-deduction IRAs, limited government action has been taken to increase the number of employers sponsoring employer-sponsored IRAs. Employer incentives to offer IRAs. Several retirement and savings experts said additional incentives should be in place to increase employer sponsorship of IRAs. For example, experts suggested tax credits should be made available to defray the start-up costs of payroll-deduction IRAs for small employers, particularly for those without electronic or automatic payroll systems. These credits should be lower than the credits available to employers for starting SIMPLE, SEP, and 401(k) plans to avoid competition with those plans, these experts said. IRA providers and small business groups said increasing contribution limits for SIMPLE IRAs to levels closer to those for 401(k) plans would encourage more employers to offer these plans. Other experts said doing so could provide incentives to employers already offering 401(k) plans to switch to SIMPLE IRAs, which have fewer reporting requirements. Employee incentives to participate in IRAs. Experts offered several proposals to encourage workers to participate in IRAs, including: (1) expanding existing tax credits for moderate- and low-income workers, (2) offering automatic enrollment in payroll-deduction IRAs, and (3) increasing public awareness about the importance of saving for retirement and how to do so. Several experts said expanding the scope of the Retirement Savings Contribution Credit, commonly known as the saver's credit, could encourage IRA participation among workers who are not covered by an employer-sponsored retirement plan. They said expanding the saver's credit to include more middle-income earners and making the credit refundable—available to tax filers even if they do not owe income tax—could encourage more moderate- and low-income individuals to participate in IRAs. However, an expanded and refundable tax credit would have revenue implications for the federal budget. Other experts told us that automatically enrolling workers into payroll-deduction and SIMPLE IRAs could increase employee participation; however, small business groups and IRA providers said that mandatory automatic enrollment could be burdensome to small employers. 
In addition, given the lack of available income for some, several experts told us that low-income workers may opt out of automatic enrollment programs or be more inclined to make early withdrawals, which can result in additional income taxes. Experts also said increasing public awareness of the importance of saving for retirement and educating individuals how to do so could increase IRA participation. Several experts reported that the growth of DC plans and IRAs has resulted in individuals bearing greater responsibility for their own retirement and that providing information about retirement savings earlier and more frequently could encourage IRA participation. IRS and Labor share oversight for all types of IRAs, but Labor lacks a process to monitor all IRAs and data gaps exist. IRS is responsible for tax rules on establishing and maintaining IRAs, while Labor is responsible for oversight of fiduciary standards for employer-sponsored IRAs. Payroll-deduction IRAs are not under Labor's jurisdiction; however, Labor does provide guidance to help ensure such a retirement program is not subject to the Title I requirements of ERISA. Reporting requirements for employer-sponsored IRAs are limited. Under Title I, there is no reporting requirement for SIMPLE IRAs, and an alternative method is available for reporting of employer-sponsored SEP IRAs. Labor does not have processes in place to identify all employers offering IRAs, numbers of employees participating, and employers not in compliance with the law. Obtaining information about employer-sponsored and payroll-deduction IRAs is also important to determine whether these vehicles help workers without pensions and 401(k) plans build retirement savings. Although IRS publishes some IRA data, IRS has not consistently produced IRA reports. IRS and Labor share responsibility for overseeing IRAs. IRS has primary responsibility for tax rules governing how to establish and maintain an IRA, as shown in figure 5. Labor has sole responsibility for overseeing ERISA's fiduciary standards for employer-sponsored IRAs. Fiduciaries have an obligation, among others, to make timely contributions to fund benefits. When contributions are delinquent for those IRAs subject to Labor's jurisdiction, Labor investigates and takes action to ensure that contributions are restored to the plan. Labor also issues guidance related to payroll-deduction IRAs. In 1999, Labor issued an interpretive bulletin that consolidated Labor regulations and various advisory opinions on payroll-deduction programs for IRAs into one set of guidance. Specifically, the bulletin sets out Labor's safe harbor under which an employer may establish a payroll-deduction IRA program without inadvertently establishing an employee benefit plan subject to all of the ERISA requirements. Labor and IRS also work together to oversee IRA prohibited transactions; generally, Labor has interpretive jurisdiction and IRS has certain enforcement authority. Both ERISA and the Internal Revenue Code contain various statutory exemptions from the prohibited transaction rules, and Labor has authority to grant administrative exemptions and establish exemption procedures. Labor has interpretive authority over prohibited transactions and may grant administrative exemptions on a class or individual basis for a wide variety of proposed transactions with a plan. IRS has responsibility related to imposing an excise tax on parties that engage in a prohibited transaction. Reporting requirements for employer-sponsored IRAs are limited. 
Currently, the financial institution/trustee handling the employer-sponsored IRA provides the IRS and participants with annual statements containing contribution and fair market value information on IRS Form 5498, IRA Contribution Information, as shown in figure 7. Distributions from that same plan are reported by the financial institution making the distribution to both IRS and the recipients of the distributions on IRS Form 1099-R, Distributions From Pensions, Annuities, Retirement or Profit-Sharing Plans, IRAs, Insurance Contracts, etc., as shown in figure 8. Information on retirement plans is also reported annually by employers and others to IRS on Form W-2, which contains the amounts deducted from wages for contributions to pension plans, as well as codes that provide more detail on the kind of plan, such as an employer-sponsored IRA, to which the contribution was made, as shown in figure 9. Employers who offer payroll-deduction IRAs have no reporting requirements, and consequently, there is no reporting mechanism that captures how many employers offer payroll-deduction IRAs. Although IRS receives information reports for all traditional and Roth IRAs, those data do not show how many of those IRAs were for employees using payroll-deduction IRAs. In our discussions with Labor and IRS officials, they explained that the limited reporting requirements for employer-sponsored IRAs were put in place to try to encourage small employers to offer their employees retirement plan coverage by reducing their administrative and financial burdens. According to Labor officials, IRS does not share the information it receives with Labor because it is confidential tax information. IRS clarified that it does not share tax information involving specific employers or employees with Labor because it is confidential. Consequently, Labor does not have information on employer-sponsored IRAs. Labor also does not receive information, such as annual financial reports, from such employers, as it does from private pension plan sponsors. For example, pension plan sponsors must file Form 5500 reports with Labor on an annual basis, which provides Labor with valuable information about the financial health and operation of private pension plans. Labor's Bureau of Labor Statistics (BLS) National Compensation Survey surveys employee benefit plans in private establishments, receiving information on access, participation, and take-up rates for DB and DC plans. The BLS survey, however, collects less information on employer-sponsored IRAs. Given the limited reporting requirements for employer-sponsored IRAs and the absence of requirements for payroll-deduction IRAs, as well as Labor's role in overseeing these IRAs, a minimum level of oversight is important to ensure that employers are acting in accordance with the law. Yet, Labor officials said that they are unable to monitor (1) whether all employers are in compliance with the prohibited transaction rules and fiduciary standards, such as by making timely and complete employer-sponsored IRA contributions or by not engaging in self-dealing; and (2) whether all employers who offer a payroll-deduction IRA are meeting the conditions of Labor's guidance. Employer-sponsored IRAs: Labor officials said that they do not have a process for actively seeking out and determining whether employer-sponsored IRAs are engaging in prohibited transactions or not abiding by their fiduciary responsibilities, such as by having delinquent or unremitted employer-sponsored IRA contributions. 
Instead, as in the case of Labor's oversight of pension plans, Labor primarily relies on participant complaints as sources of investigative leads to detect employers that are not making the required contributions to their employer-sponsored IRAs. For example, according to Labor officials, about 90 percent of its IRA investigations were the result of participant complaints. However, while Labor has other processes in place for private pension plan oversight, such as computer searches and targeting to identify ERISA violations, Labor does not have other processes for IRA investigation leads. Compared with its oversight of pension plans, Labor is at greater risk of not being able to ensure that all IRA sponsors are in compliance with the laws designed to protect individuals' retirement savings. Payroll-deduction IRAs: Through payroll-deduction IRAs, employees may establish either traditional or Roth IRAs, and employees may contribute to these accounts through voluntary deductions from their pay, which are forwarded by the employer to the employee's IRA. As long as employers meet the conditions in Labor's regulation and guidance, employers are not subject to the fiduciary requirements in ERISA Title I that apply to employer-sponsored retirement plans, such as 401(k) plans. According to Labor officials, if they become aware of an employer operating a payroll-deduction IRA that may not be following agency guidance, Labor will conduct an investigation to determine if the IRA should be treated as an ERISA pension plan. The IRA may become subject to the requirements of Title I of ERISA, which include filing a detailed annual report (Form 5500) with Labor. Labor officials said this was done in an effort to ensure that plans are being operated and maintained in the best interest of plan participants. Labor officials told us that they are not aware of employers improperly relying on the safe harbor regarding payroll-deduction IRAs. However, without a process to monitor payroll-deduction IRAs, Labor cannot be certain of the extent or nature of certain employer activities that may fall outside the guidance provided by Labor. For example, Labor does not know the extent to which employers are sending employee contributions to IRA providers, exercising any influence over the investments made or permitted by the IRA provider, or receiving any compensation in connection with the IRA program except reimbursement for the actual cost of forwarding the payroll deduction. In addition, Labor does not have information on the number of employers that are operating payroll-deduction IRAs. Ensuring that regulators obtain information about employer-sponsored and payroll-deduction IRAs is one way to help them and others determine the status of these IRAs and whether those individuals who lack employer-sponsored pension plans are able to build retirement savings through these vehicles. However, key information on IRAs is currently not reported, such as information that identifies employers offering payroll-deduction IRAs, the distribution by employer of the number of employees that contribute to payroll-deduction IRAs, and the distribution by employer of the type of payroll-deduction IRA account offered (traditional or Roth) and the total employee contributions to these accounts. 
Experts that we interviewed said that, without information on the distribution by employer of the type of payroll-deduction IRA offered and the total employee contributions to these accounts, they are unable to determine how many employers and employees participate in payroll-deduction IRAs and the extent to which these IRAs have contributed to the retirement savings of their participants. In addition, the limited reporting requirements prevent information from being obtained about the universe of employers that offer employer-sponsored and payroll-deduction IRAs. Also, without information on the distribution by employer of the type of payroll-deduction IRA offered and the total employee contributions to these accounts, it is difficult to determine the extent to which payroll-deduction IRAs are being used and to identify ways to increase retirement savings for workers not covered by an employer-sponsored pension plan. This information can be useful in determining policy options to increase IRA participation among uncovered workers because it provides a foundation for assessing the extent to which these IRAs are being used and who is participating in them. Although IRS does publish some of the information it receives on IRAs through its Statistics of Income (SOI) program, IRS does not produce IRA reports on a consistent annual basis. IRS officials told us that they are currently facing three major challenges that affect their ability to publish IRA information on a more consistent basis. First, IRS relies, in part, on information returns to collect data on IRAs, which are not due until the year after the tax return is filed. IRS officials said that these returns have numerous errors, making it difficult and time-consuming for IRS to edit them for statistical analysis. They also said that the IRA rules, and changes to those rules, are difficult for some taxpayers, employers, and trustees to understand, which contributes to filing errors. Second, IRS's reporting of IRA data is not a systematic process. In the past, the production of IRS reports on IRAs was done on an ad hoc basis. IRS officials told us that they recognize this problem and are in the early stages of determining ways to correct it. Third, in the past, one particular IRS employee, who recently retired, took the lead in developing statistical analysis on IRAs. Since IRS does not have a process in place to train another employee to take over this role, a knowledge gap was created that IRS is trying to fill. Labor officials and retirement and savings experts told us that without the consistent reporting of IRA information by IRS, they use studies by financial institutions and industry associations for research purposes, which include assessing the current state of IRAs and future trends. These experts said that although these studies are helpful, some may double count individuals because one person may have more than one IRA at different financial institutions. They also said that more consistent reporting of IRA information could help them ensure that their analyses reflect current and accurate information about retirement assets, such as the fair market value of IRAs. Since IRS is the only agency that has data on all IRA participants, consistent reporting of these data could give policymakers and others a comprehensive look at the IRA landscape. 
Thirty years ago, when Congress created IRAs, these accounts were designed, in part, to help workers who do not have pensions or 401(k) plans save for their retirement. Currently, IRAs play a major role in preserving retirement assets but a very small role in creating them. Although studies show that individuals find it difficult to save for retirement on their own, millions of U.S. workers have no retirement savings plan at work. Employer-sponsored and payroll-deduction IRAs afford an easier way for workers, particularly those who work for small employers, to save for retirement. They also offer employers less burdensome reporting and legal responsibilities than defined benefit pension plans and defined contribution plans, such as 401(k) plans. Yet encouraging employers to offer IRAs to their employees will not be productive if Congress and regulators do not make sure that there is also adequate information about and improved oversight of employer-sponsored and payroll-deduction IRAs. Given that the limited reporting requirements for employer-sponsored IRAs and the absence of reporting requirements for payroll-deduction IRAs were meant to encourage small employers to offer retirement plans to employees, providing more complete and consistent data on IRAs would help ensure that regulators have the information they need to make informed decisions about how to increase coverage and facilitate retirement savings. Currently, IRS collects information on employer-sponsored IRAs that it does not share with Labor because it is confidential tax information, but IRS does report summary information on employer-sponsored IRAs that could be useful for Labor to have on a consistent basis. Without IRS sharing such information, data on IRAs will continue to be collected on an episodic basis, and mapping the universe of IRAs, especially employer-sponsored IRAs, will continue to be difficult. Steps must be taken to improve oversight of payroll-deduction IRAs and determine whether direct oversight is needed. Currently, neither Labor nor IRS is able to determine how many employers are offering their employees the opportunity to contribute to traditional or Roth IRAs through payroll-deduction IRA programs, and Labor has no process in place—nor responsibility—to monitor employers offering payroll-deduction IRAs. Consequently, Labor is unable to determine the universe of employers offering payroll-deduction IRAs, the prevalence and nature of activities that fall outside Labor's safe harbor, and the impact on employees. As a result, Labor lacks key information on employers who offer payroll-deduction IRAs. Without information on the number of employers offering these IRAs to employees and the number of employees participating in these programs, neither Labor nor IRS is able to determine the effectiveness of payroll-deduction IRAs in facilitating retirement savings for workers lacking an employer-sponsored pension. Moreover, given that payroll-deduction IRAs currently lack direct oversight, it is important to decide whether such oversight is needed. Without direct oversight, employees may lack confidence that payroll-deduction IRAs will provide adequate protections and may be reluctant to participate in these programs, a concern that is particularly important given the current focus in Congress on expanding payroll-deduction IRAs. However, any direct oversight of payroll-deduction IRAs should be done in a way that does not pose an undue burden on employers or their employees. 
Although the limited reporting requirements for employer-sponsored IRAs and the absence of reporting requirements for payroll-deduction IRAs were meant to encourage small employers to offer retirement savings vehicles to employees, there is also a need for those responsible for overseeing retirement savings vehicles to have the information necessary to do so. This will help ensure that there is a structure in place to help protect individuals' retirement savings if they choose either employer-sponsored or payroll-deduction IRAs. If current oversight vulnerabilities are not addressed, future problems could emerge as more employers and workers participate in employer-sponsored and payroll-deduction IRAs. However, any improvements to plan oversight and data collection should be done in a way that does not pose an undue burden on employers or their employees. Given the absence of direct oversight of payroll-deduction IRAs, Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight. We recommend that the Secretary of Labor take the following three actions: 1. To increase retirement plan coverage for the millions of workers not covered by an employer-sponsored pension plan, and given the possibility that payroll-deduction IRAs can help bridge the coverage gap, examine ways to better encourage employers to offer and employees to participate in these IRAs, which could include: examining and determining the financial and administrative costs to employers for establishing payroll-deduction IRA programs, especially for those employers that do not have an automatic payroll system in place; developing policy options to help employers defray the costs associated with establishing payroll-deduction IRA programs, while taking into consideration the potential costs to taxpayers and small employers; and evaluating whether modifications or clarifications to its guidance on payroll-deduction IRAs are needed to encourage employers to establish payroll-deduction IRA programs. 2. To improve the federal government's ability to regulate employer-sponsored and payroll-deduction IRAs and protect plan participants, evaluate ways to determine whether employers who establish employer-sponsored IRAs and offer payroll-deduction IRAs are in compliance with the law and the safe harbor provided under Labor's regulations and Interpretive Bulletin 99-1, while taking employer burden into account. 3. To improve the federal government's ability to better assess ways to improve retirement plan coverage for workers who do not have access to an employer-sponsored retirement plan, and to provide Congress, federal agencies, and the public with more usable and relevant information on all IRAs, evaluate ways to collect additional information on employer-sponsored and payroll-deduction IRAs, such as adding questions to the Bureau of Labor Statistics National Compensation Survey that provide information sufficient to identify employers that offer payroll-deduction and employer-sponsored IRAs and the distribution by employer of the number of employees that contribute to payroll-deduction and employer-sponsored IRAs. We also recommend that the Commissioner of the Internal Revenue Service take the following two actions: 1. To supplement information Labor would receive through the Bureau of Labor Statistics National Compensation Survey, provide Labor with summary information on IRAs and information collected on employers that sponsor IRAs. 2. 
Considering the need for federal agencies, Congress, and the public to have access to timely and useful information on IRAs, release its reports on IRA contributions, accumulations, and distributions on a consistent basis, such as annually. We provided a draft of this report to the Secretary of Labor, the Secretary of the Treasury, and the Commissioner of Internal Revenue. We obtained written comments from the Assistant Secretary of Labor and from the Commissioner of Internal Revenue, which are reproduced in appendixes II and III. Both agencies neither agreed nor disagreed with our recommendations and provided more information about what each agency is currently doing. Treasury and both the Employee Benefits Security Administration (EBSA) and BLS within Labor provided technical comments, which were incorporated in the report where appropriate. Labor clearly stated in its comments that payroll-deduction IRAs are not under Labor's jurisdiction. We agree with Labor and have revised our report to reflect Labor's authority. As stated in our report, Labor does provide guidance to help ensure that payroll-deduction programs are not subject to the Title I requirements of ERISA. In addition, we described in our report that IRS's responsibility over IRAs is to provide tax rules governing how to establish and maintain an IRA. As previously described in the report, several bills have been introduced in Congress to expand worker access to payroll-deduction IRAs. However, without direct oversight of payroll-deduction IRAs, employees may lack confidence that payroll-deduction IRAs will provide adequate protections and may be reluctant to participate in such programs, a concern that is particularly important given the increasing role that IRAs have in retirement savings. Given that Labor and IRS do not have direct oversight over payroll-deduction IRAs, we added the matter for congressional consideration to the report suggesting that Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight. In response to our first recommendation that Labor should examine and determine the financial and administrative costs to employers for establishing payroll-deduction IRA programs for their employees, Labor neither agreed nor disagreed with the recommendation and stated that payroll-deduction IRAs are not under its jurisdiction. However, as a part of its broad program of research, Labor studies costs and expenses related to retirement programs and said it will consider GAO's recommendation in developing its research agenda on costs and expenses related to retirement programs. Labor also stated that its Interpretive Bulletin 99-1 addresses the costs related to payroll-deduction IRA programs; the bulletin states that employers may select one IRA sponsor to receive payroll contributions to keep administrative costs down and that employers can receive payments from an IRA sponsor to cover the actual costs of operating the IRA payroll-deduction program. Even though Labor's Interpretive Bulletin addresses some costs related to payroll-deduction programs, because we do not know the actual costs of managing a payroll-deduction IRA program, it is difficult to determine if these remedies are sufficient. For example, if the actual costs of maintaining such a program are minimal—as some experts have suggested—limiting employees to one IRA provider may unnecessarily discourage some employees from participating in the program. 
On the other hand, if the costs of managing these programs are significant—as other experts have suggested—this allowance may be insufficient to encourage employers to offer a payroll-deduction IRA program. Labor also noted that Interpretive Bulletin 99-1 indicates that employers can receive payments from an IRA sponsor to cover the actual costs of operating the IRA payroll-deduction program. However, employers may not receive any consideration beyond "reasonable compensation for services actually rendered in connection with payroll deductions." Without an accurate assessment of what the actual costs of operating these programs are to employers, Labor may be unable to readily determine whether such programs fall outside the safe harbor and may be considered to have become ERISA Title I programs. Furthermore, without accurate cost estimates and a determination of what constitutes "reasonable compensation" to employers, employers may be reluctant to seek compensation from IRA service providers to defray the costs of operating a payroll-deduction IRA program. In response to our recommendation that Labor should develop policy options to help employers defray the costs associated with establishing payroll-deduction IRA programs, Labor stated that Interpretive Bulletin 99-1 advises employers on how to defray the costs of operating payroll-deduction IRA programs without subjecting the program to coverage under ERISA, but also noted that payroll-deduction IRAs operated in accordance with Interpretive Bulletin 99-1 are outside of Labor's jurisdiction. Consequently, Labor suggested that the development of additional policy options to help employers defray costs may be more properly considered by the Secretary of the Treasury. We believe some further examination by Treasury and Labor of this area would be appropriate. We believe that any policy options proposed to defray costs to employers should, in fact, be based on an accurate assessment of the actual costs to employers of managing such programs. Efforts to identify appropriate policies to defray costs would be most efficiently executed if coordinated with the process of determining the actual costs of managing payroll-deduction programs, and that responsibility may lie more with Labor. Proposals designed to defray employer costs that are not grounded in an accurate accounting of those actual costs risk providing either an excessive or insufficient benefit to employers. In response to our recommendation that Labor evaluate whether modifications or clarifications to its guidance on payroll-deduction IRAs are needed, Labor stated that the draft report does not provide specifics regarding why employers believe they cannot effectively publicize the availability of payroll-deduction IRAs and that Labor had not received any input from employers or IRA sponsors about being unable to effectively publicize the availability of payroll-deduction IRAs. Our report includes a discussion of the barriers identified by retirement and savings experts that may discourage employers from offering payroll-deduction IRAs to employees. 
IRA providers told us that Labor's guidance lacks adequate flexibility for employers to promote these IRAs to their employees without operating outside of the safe harbor and potentially becoming subject to ERISA Title I requirements. In addition, as we noted in our report, employers have indicated that they are hesitant to offer payroll-deduction IRAs due to the possibility that ERISA fiduciary responsibilities could apply. In response to our second recommendation that Labor evaluate ways to determine whether employers who establish employer-sponsored IRAs and offer payroll-deduction IRAs are in compliance with the law, while taking employer burden into account, Labor simply described its enforcement program and its reliance on targeting, and stated that during the past three fiscal years, 170 SIMPLE IRAs and SEP plans had been investigated, with approximately $1.2 million obtained in monetary results. We acknowledge that Labor's enforcement program for employer-sponsored IRAs has led to investigations and has produced monetary results. However, as indicated in our report, Labor has primarily relied on the complaints of participants as sources for its investigations, as about 90 percent of its investigations into employer-sponsored IRAs were the result of participant complaints. In addition, our report indicates that because of the limited reporting requirements for employer-sponsored IRAs, Labor does not have specific information on employers that sponsor such IRAs, or even how many there are. Because Labor lacks such information, it is unable to target and investigate potential ERISA violations for employer-sponsored IRAs. We do not believe the information provided by Labor on its enforcement activities precludes our recommendation, and we believe our recommendation remains valid. Regarding our third recommendation that Labor evaluate ways to collect additional information on employer-sponsored and payroll-deduction IRAs, Labor's comments focused on statutory requirements and policy considerations and stated that any collection of information on employer-sponsored and payroll-deduction IRAs should not impose burdens on employers to report information. The intent of our recommendation was that Labor evaluate alternative, less burdensome approaches to obtaining important information, such as through the Bureau of Labor Statistics National Compensation Survey. As we noted in our report, key information on IRAs is currently not reported, and ensuring that such information is obtained can provide valuable information about whether employers are choosing to sponsor employer-sponsored IRAs or offer payroll-deduction IRAs and whether individuals are able to build retirement savings through these vehicles. We do not believe the information provided by Labor makes our recommendation less important, and we believe our recommendation remains valid. In response to our recommendation that IRS provide Labor with summary information on IRAs and information collected on employers that sponsor IRAs, and release its reports on IRA contributions, accumulations, and distributions on a consistent basis, IRS stated that it recognizes the need for federal agencies and others to have access to routine and timely information on IRAs and then listed the information it currently provides. IRS also stated that it will continue to provide data and ensure that Labor receives information on IRAs on the same day that such information is published or otherwise made available to the public. 
Although IRS will be providing summary information on all IRAs to Labor and for public information, we stand by our recommendation that IRS should also consider providing information to Labor and others on employers that sponsor IRAs, such as the number of employers that sponsor SEP and SIMPLE IRAs, which is currently absent from the information IRS stated it would provide to Labor. We are sending copies of this report to the Commissioner of Internal Revenue, the Secretary of Labor, the Secretary of the Treasury, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. During our review, our objectives were to (1) compare individual retirement account (IRA) assets to assets in pension plans, (2) describe the barriers that may discourage small employers from offering employer-sponsored and payroll-deduction IRAs to their employees, and (3) describe how the Internal Revenue Service (IRS) and the Department of Labor (Labor) oversee IRAs and assess the adequacy of oversight and information on employer-sponsored and payroll-deduction IRAs. To identify how IRA assets compare to assets in pension plans and to describe the demographic characteristics of IRA owners, we reviewed reports with published data from the Federal Reserve's Survey of Consumer Finances (SCF), Statistics of Income (SOI), and relevant industry surveys. The following is a list of the studies we reviewed: Copeland, Craig. "Individual Account Retirement Plans: An Analysis of the 2004 Survey of Consumer Finances." Issue Brief, no. 293 (Washington, D.C., Employee Benefit Research Institute, May 2006). This report is based on analysis of data from the 2004 SCF. SCF is a triennial survey that asks extensive questions about household income and wealth components. In 2004, it sampled 4,522 households. The Employee Benefit Research Institute (EBRI) is a private nonprofit organization that conducts public policy research on economic security and employee benefits issues. Its membership includes a cross-section of pension funds, businesses, trade associations, labor unions, health care providers and insurers, government organizations, and service firms. Holden, Sara and Michael Bogdan. "The Role of IRAs in U.S. Households' Saving for Retirement." Research Fundamentals, vol. 17, no. 1 (Washington, D.C., Investment Company Institute, January 2008). The demographic and financial information of IRA owners comes from the May 2007 IRA Owners Survey. The 599 randomly selected respondents are representative of U.S. households owning traditional or Roth IRAs. The standard error for the total sample is ±4 percentage points at the 95 percent confidence level. The Investment Company Institute (ICI) used the American Association for Public Opinion Research #4 method to calculate its response rate and believes it achieved a response rate in line with comparable industry surveys. ICI is a national association of U.S. investment companies, including mutual funds, closed-end funds, exchange-traded funds, and unit investment trusts. 
Its research department collects and disseminates industry statistics, and conducts research studies relating to issues of public policy, economic and market developments, and shareholder demographics. "The U.S. Retirement Market, Second Quarter 2007." Research Fundamentals, vol. 16, no. 3-Q2 (Washington, D.C., Investment Company Institute, December 2007). The information on total IRA market assets comes from tabulations of total IRA assets provided by the IRS SOI for tax years 1989, 1993, and 1996 through 2004. The tabulations are based on a sample of IRS returns. See information above for a description of ICI. Holden, Sara and Michael Bogdan. "Appendix: Additional Data on IRA Ownership in 2007." Research Fundamentals, vol. 17, no. 1A (Washington, D.C., Investment Company Institute, January 2008). Information on the number of households owning IRAs is based on data from the U.S. Bureau of the Census Current Population Reports. See information above for a description of ICI. Sailer, Peter, Victoria L. Bryant, and Sara Holden, Internal Revenue Service, "Trends in 401(k) and IRA Contribution Activity, 1999-2002 – Results from a Panel of Matched Tax Returns and Information Documents." (Washington, D.C., 2005). This study is based on SOI's database of over 71,000 individual taxpayers who filed for tax years 1999 through 2002. The analysis is limited to those taxpayers who filed for all 4 years in the study. The weighted file represents 143.2 million taxpayers, or about 81 percent of the original 177 million who filed for 1999. West, Sandra and Victoria Leonard-Chambers. "The Role of IRAs in Americans' Retirement Preparedness." Research Fundamentals, vol. 15, no. 1 (Washington, D.C., Investment Company Institute, January 2006). The demographic and financial information of IRA owners comes from the May 2005 survey of 595 randomly selected representative U.S. households owning IRAs, including traditional IRAs, Roth IRAs, Savings Incentive Match Plans for Employees (SIMPLE), Simplified Employee Pensions (SEP), and Salary Reduction Simplified Employee Pension (SAR-SEP) IRAs. The standard error for the total sample is ±4 percentage points at the 95 percent confidence level. ICI used the American Association for Public Opinion Research #4 method to calculate its response rate and believes it achieved a response rate in line with comparable industry surveys. See information above for a description of ICI. To describe barriers that may discourage employers from offering employer-sponsored and payroll-deduction IRAs, we interviewed retirement and savings experts, including individuals representing public policy research organizations, small business member organizations, consumer and employee advocacy groups, financial industry associations, IRA service provider companies, and a pension professional member association. We also interviewed officials at Labor and IRS to gather the perspective of officials of federal agencies with responsibility for payroll-deduction and employer-sponsored IRAs. In our interviews with these experts, we gathered information on challenges that small employers face in offering IRAs to their employees and challenges that employees face in participating in IRAs. In these interviews, we also gathered information on proposals that exist to encourage employers to offer and employees to participate in IRAs. 
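The roughly ±4 percentage point sampling errors reported for the ICI household surveys above are consistent with a back-of-the-envelope check, assuming simple random sampling and a worst-case proportion of $p = 0.5$:

$z_{0.975}\sqrt{\dfrac{p(1-p)}{n}} = 1.96\sqrt{\dfrac{(0.5)(0.5)}{599}} \approx 0.040,$

that is, about 4 percentage points at the 95 percent confidence level for the 599-household sample, and essentially the same for the 595-household sample.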
In addition, we reviewed available economics literature and research conducted by federal agencies, public policy organizations, and academic researchers on the factors affecting employer sponsorship of and employee participation in IRAs and other retirement savings plans. To describe how IRS and Labor oversee IRAs and to assess the adequacy of oversight and information on employer-sponsored and payroll-deduction IRAs, we obtained and reviewed information about Labor's and IRS's oversight practices and responsibilities regarding IRAs. To accomplish this, we interviewed Labor and IRS officials about the steps they take to monitor IRA plans. However, we did not assess the effectiveness of IRS and Labor compliance and enforcement efforts. We also reviewed the agencies' statutory responsibilities in the Internal Revenue Code and the Employee Retirement Income Security Act of 1974 (ERISA) for overseeing IRAs. We analyzed Labor and IRS oversight processes to identify any gaps that may exist. We conducted this performance audit from September 2007 through May 2008 in accordance with generally accepted government auditing standards; our work included an assessment of data reliability. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, Tamara Cross, Assistant Director; Raun Lazier; Susan Pachikara; Matt Barranca; Joseph Applebaum; Susan Aschoff; Doreen Feldman; Edward Nannenhorn; MaryLynn Sergent; Roger Thomas; Walter Vance; and Jennifer Wong made important contributions to this report.
Congress created individual retirement accounts (IRAs) with two goals: (1) to provide a retirement savings vehicle for workers without employer-sponsored retirement plans, and (2) to preserve individuals' savings in employer-sponsored retirement plans. However, questions remain about IRAs' effectiveness in facilitating new, or additional, retirement savings. GAO was asked to report on (1) how IRA assets compare to assets in other retirement plans, (2) what barriers may discourage small employers from offering IRAs to employees, and (3) the adequacy of the Internal Revenue Service's (IRS) and the Department of Labor's (Labor) oversight of and information on IRAs. GAO reviewed reports from government and financial industry sources and interviewed experts and federal agency officials. Individual retirement accounts, or IRAs, hold more assets than any other type of retirement vehicle. In 2004, IRAs held about $3.5 trillion in assets compared to $2.6 trillion in defined contribution (DC) plans, including 401(k) plans, and $1.9 trillion in defined benefit (DB), or pension, plans. Similar percentages of households own IRAs and participate in 401(k) plans, and IRA ownership is associated with higher educational and income levels. Congress created IRAs to provide a way for individuals without employer plans to save for retirement, and to give retiring workers or those changing jobs a way to preserve retirement assets by rolling over, or transferring, plan balances into IRAs. Rollovers into IRAs significantly outpace IRA contributions and account for most assets flowing into IRAs. Given the total assets held in IRAs, they may appear to be comparable to 401(k) plans. However, 401(k) plans are employer-sponsored, while most households with IRAs own traditional IRAs established outside the workplace. Several barriers may discourage employers from establishing employer-sponsored IRAs and offering payroll-deduction IRAs to their employees. Although employer-sponsored IRAs were designed with fewer reporting requirements to encourage participation by small employers, and payroll-deduction IRAs have none, millions of employees of small firms lack access to a workplace retirement plan. Retirement and savings experts and others told GAO that barriers discouraging employers from offering these IRAs include costs that small businesses may incur for managing IRA plans, a lack of flexibility for employers seeking to promote payroll-deduction IRAs to their employees, and certain contribution requirements of some IRAs. Information is lacking, however, on what the actual costs to employers may be for providing payroll-deduction IRAs, and questions remain about the effect that expanded access to these IRAs may have on employees. Experts noted that several proposals exist to encourage employers to offer and employees to participate in employer-sponsored and payroll-deduction IRAs; however, limited government action has been taken. The Internal Revenue Service and Labor share oversight for all types of IRAs, but gaps exist within Labor's area of responsibility. IRS is responsible for tax rules on establishing and maintaining IRAs, while Labor is responsible for oversight of fiduciary standards for employer-sponsored IRAs and provides certain guidance on payroll-deduction IRAs, although those IRAs are not under Labor's jurisdiction. Oversight helps ensure that the interests of employee participants are protected, that their retirement savings are properly handled, and that applicable guidance and laws are being followed. 
Because there are very limited reporting requirements for employer-sponsored IRAs and none for payroll-deduction IRAs, Labor does not have processes in place to identify all employers offering IRAs, numbers of employees participating, and employers not in compliance with the law. Obtaining information about employer-sponsored and payroll-deduction IRAs is also important to determine whether these vehicles help workers without DC or DB plans build retirement savings. Although IRS collects and publishes some data on IRAs, IRS has not consistently produced reports on IRAs nor shared such information with other agencies, such as Labor. Labor's Bureau of Labor Statistics National Compensation Survey surveys employer-sponsored benefit plans but collects limited information on employer-sponsored IRAs and no information on payroll-deduction IRAs. Since IRS is the only agency that has data on all IRA participants, consistent reporting of these data could give Labor and others valuable information on IRAs.
You are an expert at summarizing long articles. Proceed to summarize the following text: Since the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) began in 1956 and was expanded in 1966, it has functioned much like a fee-for-service insurance program. Beneficiaries have been free to select providers and required to pay deductibles and copayments, but, unlike with most insurance programs, they have not been required to pay premiums. CHAMPUS has approximately 5.7 million beneficiaries and, as part of a larger Military Health Services System (MHSS), these beneficiaries are also eligible for care in the MHSS' 127 hospitals and 500 clinics worldwide. Of the approximately $15.2 billion budgeted for the MHSS in fiscal year 1995, the CHAMPUS share is about $3.6 billion, or about 24 percent. Because of escalating costs, claims paperwork demands, and general beneficiary dissatisfaction, DOD initiated, with congressional authority, a series of demonstration projects in the late 1980s designed to more effectively contain costs and improve services to beneficiaries. One of these projects, the CHAMPUS Reform Initiative (CRI), a forerunner of TRICARE managed care support contracts, was one of the first to introduce managed care features to CHAMPUS. Included as part of a triple-option health benefit were a health maintenance organization choice, a preferred provider choice, and the existing standard CHAMPUS choice. Managed care features introduced included enrollment, utilization management, assistance in referral to the most cost-effective providers, and reduced paperwork. The first CRI contract, awarded to Foundation Health Corporation, covered California and Hawaii. Foundation delivered services under this contract between August 1988 and January 1994. Before the contract expired, DOD began a new competitively bid procurement for California and Hawaii that resulted in DOD's awarding a 5-1/2 year (1 half-year plus 5 option years), $3.5 billion contract to Aetna Government Health Plans, Inc. in July 1993. Because a bid protest was sustained on this procurement, it was recompeted, although Aetna's contract was allowed to proceed until a new one was awarded. In late 1993, in response to requirements in DOD's Appropriation Act for Fiscal Year 1994, the Department announced plans for restructuring the entire MHSS program, including CHAMPUS. The restructured program, known as TRICARE, is to be completely implemented by May 1997. To implement and administer TRICARE, DOD reorganized the military delivery system into 12 new, joint-Service regions. DOD also created a new administrative organization featuring lead agents in each region to coordinate among the three Services and monitor health care delivery. For medical care the military medical facilities cannot provide, seven managed care support contracts will be awarded to civilian health care companies covering DOD's 12 health care regions. These contracts, much like the former CRI contracts, retain the fixed-price, at-risk approach and the triple-option health benefit that CRI featured. An important difference, however, is the addition of lead agent requirements—tasks to be performed by the contractor specific to military medical facilities in the region. Figure 1 shows the regions covered by the seven contracts. Since the December 1993 decision sustaining the protest of the California/Hawaii (regions 9, 10, and 12) contract award, three managed care support contracts have been awarded and all have been protested. 
Also, protests were filed on the solicitations for the California/Hawaii recompetition and for Washington/Oregon (region 11). GAO has denied the protests on these solicitations and the Washington/Oregon contract award and has yet to decide on the other two award protests. Information on the procurements awarded to date appears in table 1. For more information on the transition to managed care support contracts and the offerors submitting proposals for these contracts, see appendixes II and III, respectively. The Office of CHAMPUS, an organization within the Office of the Assistant Secretary of Defense (Health Affairs), administers the procurements. The procurement process involves the issuance of a request for proposal (RFP) that has the detailed specifications and instructions offerors are to follow in responding. Offerors are required to submit both a technical and a business (price) proposal. Upon receipt of the offerors' proposals, a Source Selection Evaluation Board (SSEB) evaluates the technical proposals according to detailed evaluation criteria, and a Business Proposal Evaluation Team (BPET) evaluates the proposed prices. A Source Selection Advisory Council (SSAC) reviews the work of the two boards and consults with them. Following discussions with offerors about weaknesses and deficiencies in their proposals, DOD requests offerors to submit "best and final offers." The two boards again evaluate changes to proposals, complete final scoring, and prepare reports on the evaluations. A senior executive designated as the source selection authority uses these reports in selecting the winning offeror. As part of the evaluation process, evaluators are asked to identify ways to improve the process. For a complete description of the procurement process and the tasks performed, see appendix IV. GAO sustained the protest of the July 1993 California/Hawaii award primarily because DOD failed to evaluate offerors' proposals according to the RFP criteria. The RFP provided that each offeror's proposed approach to attaining its health care cost estimates would be individually evaluated. However, in evaluating the proposals, DOD evaluators rejected the contractors' cost estimates and assigned the same government cost estimates to all offerors' proposals. By so doing, the BPET did not consider offerors' individual cost-containment approaches, such as their utilization management approaches, upon which the success of managed care contracting to contain costs largely rests. In effect, the evaluators' action made this part of the evaluation methodology meaningless. Also, the process did not allow the price evaluators to discuss with the technical evaluators possible inconsistencies between the price and technical proposals or otherwise discuss the technical information that supported the price estimates. Such discussions may have highlighted the need to analyze offerors' individual cost-containment approaches. During the protest of the Washington/Oregon award, the offeror protested nearly a dozen of DOD's technical ratings of its proposal. In its decision, GAO recognized that DOD made mathematical errors that affected scoring, but these errors were not limited to the protesting offeror, and correcting them did not affect the procurement's final outcome. DOD has made several changes that should improve future procurements. 
Major changes due to the protest experiences include (1) revising the price evaluation methodology and providing offerors more complete RFP information on how the methodology will be used in evaluating bid prices, (2) adding requirements for discussions between the price and technical evaluation boards, and (3) revising both the requirements and the technical evaluation criteria for utilization management. Also, DOD is developing a computer spreadsheet to automate the technical scoring process and, thus, address mistakes made during the Washington/Oregon evaluation process. DOD's other changes include providing more training for proposal evaluators, colocating the technical evaluation boards, and providing more feedback to offerors on their proposals' weaknesses. A final change requires that DOD approve the bid price evaluation methodology before evaluating prices. DOD significantly changed its methodology for evaluating the health care cost portion of the offerors' business proposals. While details of the new methodology are procurement sensitive and cannot be disclosed, the changes essentially involve evaluating the reasonableness of the offerors' estimates for cost factors over which the contractor has some control, such as utilization management and provider discounts. The evaluation includes comparing the offerors' cost estimates with the government's estimates and considering the offerors' justification and documentation. Also, DOD rewrote portions of the RFP to provide more explicit information to offerors so they can better understand the new evaluation methodology and the factors to be considered in evaluating prices. This more complete guidance should facilitate offerors' ability to furnish the information DOD needs to evaluate their proposals. DOD instituted a process requiring discussions between the technical and the price evaluators. Previously, discussions between the two boards were prohibited, and knowledge that one group possessed about offerors' proposals was not shared with the other. Under the new procedures, the SSEB briefs the BPET and responds to BPET questions on offerors' proposed technical approaches. This should enable the BPET to better judge whether offerors can achieve the health care costs that they have bid. Conversely, the SSEB can request information from the BPET to assist in its technical evaluation. DOD significantly revised its RFP utilization management requirements and the utilization management criteria used in evaluating offerors' proposals. DOD incorporated these revisions in the solicitations for the then-ongoing Washington/Oregon procurement as well as the post-protest recompetition of the California/Hawaii procurement. The revised utilization management requirements place additional responsibilities on the contractor and establish specific utilization management procedures. Also, while the previous evaluation criteria basically involved checking whether offerors' proposed approaches addressed requirements, the revised criteria require evaluators to judge the effectiveness of the cost-containment approaches. Among other DOD improvements is the provision of more training for evaluators and team leaders who oversee the evaluation of specific tasks. Training for the California/Hawaii evaluators had been limited to about one-half day, but training on more recent procurements has been increased to nearly 1 week. 
The new training includes more detailed information on the (1) procurement cycle, (2) technical and price evaluation boards, (3) evaluation of proposals, and (4) use of personal computers to record evaluation information. Another change involves colocating at Aurora, Colorado, the SSEB staff who had been split between Aurora and Rosslyn, Virginia. SSEB members evaluating managed care tasks were located in Rosslyn, and those evaluating claims processing and related tasks were in Aurora. The dual locations caused the board chair to travel frequently to the Rosslyn location to review work and provide guidance to board members there. Also, DOD lost time awaiting information arriving from the Rosslyn site to Aurora and retyping and reformatting information submitted from the Rosslyn site. More significantly, some rating procedures differed between the two locations. A further change in the process is that DOD, along with providing offerors the questions evaluators raise on their proposals, is also providing information on proposal weaknesses. As a result, offerors are better assured that they are addressing the specific concerns that prompted the questions. Offerors told us, moreover, that DOD is now providing them more information about their proposals, responding more quickly to their questions, and providing more complete information after initial evaluations and debriefings following contract award. A final procedural change is that DOD now formally approves the price evaluation methodology prepared by a contractor before the proposal evaluation begins. On the California/Hawaii procurement awarded to Aetna, DOD had not approved the evaluation methodology before the proposals had been evaluated. The methodology had been prepared by a consultant who submitted it to DOD for review, received no formal response, and proceeded to use it to evaluate proposals. Late in this process, DOD determined that the methodology improperly skewed the evaluation and ordered it changed at that time. DOD’s new procedure eliminates the possibility of changing the evaluation methodology during the process, thus removing any such possible appearance of impropriety. Despite DOD’s process improvements, several matters remain that concern both those administering and those responding to the procurements. First, unless DOD can avoid further delays in this round of procurements, it may not meet the congressional deadline for awarding all contracts by September 30, 1996. Also, the substantial expense that offerors incur to participate may further limit future competition. Also, the specificity of solicitation requirements may work against offerors proposing innovative, cost-saving managed care techniques. Further, by reducing the length of transition periods, DOD has introduced significant risk that all the tasks needed to deliver health care will not be completed on time. Finally, DOD needs to better ensure that prospective evaluators are properly qualified. For each of the four contracts awarded thus far, the procurement lengths, on average, have been 18 months or more than twice as long as originally planned. Figure 2 compares the planned and actual procurement times for the contracts. If the remaining procurements encounter similar delays, DOD will have difficulty in meeting the congressional mandate for awarding all contracts by September 30, 1996. The current schedule allows about 1 month of slippage for the remaining procurements to have all contracts awarded on time. 
A primary cause of delays has been the many changes DOD has made to solicitation requirements. For example, as shown in figure 3, the California/Hawaii (regions 9, 10, and 12) recompetition procurement had 22 RFP amendments, and the Washington/Oregon (region 11) procurement had 15 amendments. Some of the changes resulted from such new requirements as the lead agent concept and a new uniform benefits package to replace previous beneficiary cost-sharing requirements that differed across the country. Other changes resulted from major revisions to such existing requirements as utilization management. When such changes occur, extra time is needed to issue solicitation amendments, for offerors to analyze the changes and revise their proposals, and often for evaluation boards to review the changes. Offerors have expressed extreme displeasure about the continually changing program requirements that make it more costly for them to participate in the protracted procurements. On the other hand, procurements have been delayed to allow offerors to correct errors in their cost proposals and as a result of bid protests. While these actions have not caused major delays so far, because DOD normally can proceed with the procurements, protests can add additional time to the overall schedule. DOD has acted to shorten the procurement process by increasing the size of evaluation boards and changing the way proposals are evaluated. The enlarged boards can divide evaluation tasks among more members, and members have narrower spans of review responsibility. Regarding RFP changes, some offerors maintain that DOD did not adequately plan the program before beginning the procurements. While DOD officials acknowledge planning problems, particularly for the lead agent concept, they told us that RFP changes will become less of a problem as their experience with the managed care support contracts grows. Also, DOD officials are concerned that if needed changes are not added before contract award, it will be more costly to implement them after award in the form of contract change orders when competition no longer exists. Currently, the administration is strongly encouraging simplifying federal procurements by, among other things, adopting commercial best practices to reduce costs and expedite service delivery. DOD recognizes that its process is extremely costly, complex, and cumbersome for all affected and acknowledges the need to simplify and shorten it. DOD can take advantage of the administration initiative’s momentum and seek ways to simplify and streamline its health care procurements by considering, among other things, the private sector’s best practices. Because the procurements are broad, complex, lengthy, and involve huge sums of money, offerors incur substantial expense to participate. As a result, participation thus far has been limited to large companies with vast resources. For example, the California/Hawaii procurement required that offerors be in a position to risk losing a minimum of $65 million should they incur losses during the contract’s performance. Competition is further limited because only a small number of available subcontracting firms can now knowledgeably process CHAMPUS claims. Moreover, several offerors told us that it cost them between $1 and $3 million to develop their proposals. Planning and preparing bid proposals and responding to amendments require them to divert their most able people from their regular duties to work months preparing offers. 
One offeror, in illustrating the procurement’s size, complexity, and resources needed to participate, told us that its proposal consisted of 33,000 pages. The offeror told us that if it did not win a then ongoing procurement, it would not participate again unless it could develop a proposal for no more than $100,000. Another offeror said its firm could not afford to continue bidding if it did not win a contract soon. DOD incurs substantial costs as well. The evaluation process, in particular, requires tremendous time, effort, and costs. A DOD official estimated that 54,000 hours were spent on evaluating a recent procurement. In addition to evaluation duties, many staff must continue to perform their regular duties. Many commonly spend weekends performing evaluation duties involving a considerable amount of overtime expense. Further, many of the evaluators travel from all over the country and are on travel status for 5 to 6 weeks. DOD recognizes that in the next round of the seven regional procurements, the number of offerors may further narrow and consist only of those who won awards in the first round. While DOD has chosen to award large contracts on a regional basis, it may be advisable in the next round to consider such alternatives as awarding smaller contracts covering smaller geographic areas, awarding to more than one offeror in a region, or simplifying the contracts by removing the claims processing function and awarding it separately. DOD’s RFP requirements are extremely specific and prescriptive because, the Department has stated, it desires a uniform program nationwide in which beneficiaries and providers are subject to the same requirements and processes regardless of residence. Offerors, on the other hand, maintain that if DOD’s RFP stated minimum requirements but emphasized the health care outcomes desired and allowed offerors more flexibility in devising approaches to achieve such outcomes, costs could be reduced without adversely affecting the quality of care delivered. In specifying its requirements, DOD has sought to ensure that beneficiaries not be denied necessary care and that care be provided by appropriate medical personnel in the appropriate setting. DOD’s concern has been that allowing contractors to use different processes and criteria might jeopardize these ends. Offerors maintain that those objectives can be met by allowing them more freedom to use innovative approaches, drawing on their private-sector managed care expertise. In comparing DOD’s managed care procurements with private-sector procurements, private corporations interested in contracting for managed care have far less specific requirements and normally only request general information about offerors such as corporate background, financial capability, health care performance, and utilization management/quality assurance strategies. Offerors told us that DOD does not ask for the kind of information on private-sector experience that would allow them to adequately compare performance among offerors. Also, many corporations use managed care consulting firms to help identify their requirements and select awardees. Offerors often cite utilization management as the area in which more relaxed DOD requirements would enable them to implement equally or more effective techniques than DOD requires but with greater cost savings. 
Among the most objectionable requirements is the use of a two-level review process for determining care appropriateness/necessity, a specific company’s utilization management criteria, and reviewers with the same specialty as the providing physician. DOD has maintained that its utilization management requirements are based on its extensive review of the literature and are reasonable, though perhaps not the most cost-effective. Also, DOD has maintained that because the military environment differs from the private sector, it warrants different requirements. Nevertheless, DOD has acknowledged that offerors have some legitimate concerns. In recent discussions, DOD told us that, while it has no plans yet, for the next round of procurements it may begin considering ways of making the requirements less onerous to offerors while ensuring that beneficiaries receive adequate access to care. DOD officials said that they may begin seeking to simplify the requirements by making them less process and more outcome driven, while respecting, to the extent practicable, their overall system goals. Because of procurement delays occurring before contract award, DOD has tried to recover lost time by reducing to 6 months its scheduled 8- to 9-month transition period during which contractors prepare to deliver health care. But by doing so, DOD has introduced significant risk that contractors will not complete the many tasks needed to begin health care delivery on time. We have reported that DOD has experienced serious problems in the past both with fiscal intermediary contractors and the CRI contractor being unable to begin processing claims by the start work date because the 6-month transition period was too short. As a result, beneficiaries faced considerable difficulties getting services and providers getting reimbursement. The managed care transitions are more complex and involved than the prior transitions. Most offerors we contacted told us that 6 months was too short and that about 8 months was needed to accomplish the tasks required to be ready on time. The transition tasks include signing up network providers, establishing service centers, hiring health care finders, preparing information brochures, bringing the claims processing system on line, resolving database problems, enrolling beneficiaries, and many other tasks. Offerors also told us that even a contractor with CRI experience would have difficulty meeting the 6-month transition requirement. DOD contracting officials and evaluators also have expressed the same concerns. While DOD, in reducing the transition periods, is driven to adhere to its individual procurement schedules and thus respond to internal and external pressures to bring services on line, we believe the risk introduced far outweighs the small potential time savings due to shorter transition periods. As demonstrated in the fiscal intermediary and CRI transitions, inadequate transition periods can overly tax contractors to the point of failure and result in substantial additional time and expense to recover. DOD has so far selected evaluation board members in a relatively informal way, either allowing board chairs to do so, on the basis of their knowledge of the individuals, or military services headquarters or lead agents to do so, on the basis of general guidelines. To date, DOD, relying on this less formal appointee approach, has not set forth general qualification requirements for evaluators such as experience or subject area knowledge. 
But, because the tasks they evaluate are so specialized and because the boards have expanded and members are increasingly less familiar to selecting officials, specifying evaluator qualifications—as has been suggested by offerors and board members alike—seems prudent. Some offerors expressed concern to us that DOD evaluators have had little or no experience with private-sector managed care plans and thus have difficulty distinguishing among offerors who can perform effectively in the private sector and those who are less effective in ensuring quality care and controlling costs. Evaluation board team leaders for recent procurements told us that qualification requirements would be helpful to ensure that people with appropriate experience and knowledge can adequately evaluate specific tasks. One board member, as input to DOD’s internal improvement process, stated that some SSEB members seemed to lack (by their own admission) the prerequisite experience and background to serve most effectively as subject matter experts on the SSEB. He went on to state that, given the potential impact of these contracts in dollars and health care service, it seems critical that only experienced evaluators be put in a position to make the essential judgment calls inherent in the technical review process. On more recent procurements, DOD has requested that evaluator nominees submit resumes to assist selection decisions and facilitate their assignment to various tasks. While this is a step in the right direction, it does not ensure that prospective evaluators with appropriate skills are nominated in the first place and are selected on the basis of the requisite qualifications. DOD has improved the procurement process since the protest on the California/Hawaii award to Aetna was sustained, to the extent that offerors can be more assured of equitable and fair treatment. While the dollar value of the contracts will likely cause offerors to protest in the future, DOD improvements have reduced the chance of protests being sustained. Despite improvements in the process, several areas of concern remain, particularly regarding the next round of procurements. The procurement process is extremely costly, complex, and cumbersome for all affected, and DOD acknowledges the need to simplify it. We agree and see an opportunity for DOD to draw upon the administration’s current initiative for simplifying federal procurements as it seeks ways to streamline its processes. Further, because of the costs of participating, the number of offerors in the next procurement round may be limited to only those who received contracts in the first round. We think that DOD should consider alternative procurement approaches to help preserve the competitiveness of the process. Along with these measures, DOD needs to address whether its solicitation requirements can be less prescriptive and still achieve their overall health care goals. Though DOD was driven by internal and external pressures to bring health care services on line, we do not agree with the Department’s decision to reduce transition times to make up for time lost in awarding the contracts. The potential time saved by shortening transition periods, in our view, does not justify the risk of contractors not being able to prepare to deliver services on time. 
Finally, given the increasing size of the evaluation boards, their specialized tasks, and members’ increasing lack of familiarity to selecting officials, we believe that DOD needs to develop qualification requirements for evaluator appointees. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to weigh, in view of the potential effects of such large procurements on competition, alternative award approaches for the next procurement round; determine whether and, if so, how the next round’s solicitation requirements could be simplified, incorporating the use of potentially better, more economical, best-practice managed care techniques while preserving the system’s overall health care goals; adhere to the 8- to 9- month scheduled transition period and discontinue, whenever possible, reducing such periods to make up for delays incurred before contracts are awarded; and establish general qualification requirements for evaluator appointees. In commenting on the draft report, DOD fully agreed with the first three of our recommendations and agreed in part that qualifications for evaluation board appointees need to be established. DOD pointed out that, while it could improve the evaluator selection process, it now tasks lead agents and the Services with nominating qualified individuals and the contracting officer and board chairs with reviewing their resumes. We continue to believe that establishing general qualification requirements would more appropriately equip responsible DOD officials to nominate and select the best qualified evaluators and assign them the most suitable tasks. DOD made other comments and suggested changes that we incorporated in the report as appropriate. DOD’s comments are included as appendix V. As arranged with your staff or offices, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Secretary of Defense; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. If you have any questions concerning the contents of this report, please call me at (202) 512-7101. Other major contributors to this report are Stephen P. Backhus and Daniel M. Brier, Assistant Directors, Donald C. Hahn, Evaluator-in-Charge, and Robert P. Pickering and Cheryl A. Brand, Senior Analysts. We examined in detail the complete California/Hawaii procurement file for the contract that was awarded to Aetna as well as selected portions of more recent procurements’ files. These files were from the California/Hawaii recompetition procurement, the Washington/Oregon procurement, and the region 6 procurement. We also reviewed agency files and discussed with agency officials various aspects of the procurement process. Also, we reviewed pertinent regulations governing the procurement processes in the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement, and the Office of CHAMPUS Acquisition Manual. We held discussions with contract management personnel who conduct the procurements, officials who develop solicitation requirements, staff involved in the evaluations, and agency legal staff who ensure that the procurements are conducted according to applicable laws and regulations and in an equitable manner. 
Our review of procurement documents included (1) documents related to the planning of the CHAMPUS Reform Initiative and managed care support procurements, (2) procurement schedules showing planned and actual dates, (3) RFPs and amendments to the RFPs, (4) questions raised by offerors and agency responses, (5) documents relating to evaluation methodology, (6) evaluation criteria and scoring sheets, (7) reports of discussions with offerors, (8) internal reports, (9) reports of the evaluation boards, (10) selection reports, and (11) preaward survey reports. Because of agency concerns about compromising future procurements, we are not presenting specific information on the evaluation methodology or on the scoring and weighting systems used. Nor are we presenting information on the criteria used in the rating and scoring process. We examined proposals of individual offerors to only a limited extent and are not providing information on these proposals because it is proprietary. We interviewed the Source Selection Evaluation Board, Business Proposal Evaluation Team (BPET), and Source Selection Advisory Council chairmen involved in recent procurements as well as the selecting officials. We also interviewed several team leaders involved in evaluating the technical proposals of the California/Hawaii recompetition procurement and several members of the BPET. In addition, to assess the qualifications of evaluation members, we reviewed their resumes. In conducting our review, we examined GAO bid protest decisions involving these managed care procurements and coordinated our efforts with GAO’s Office of General Counsel, which handles these bid protests. In addition to the protest decisions, we reviewed much of the supporting documentation for decisions, including the offerors’ protests, agency reports, offerors’ comments on the agency reports, videotapes of the protest hearings, and post-hearing comments. To obtain information on their experiences with DOD managed care procurements and their views of the overall procurement process and the solicitation requirements, we interviewed officials from four offerors who had participated in recent procurements. The officials interviewed were from Aetna Government Health Plans, Inc., California Care Health Plan (Blue Cross of California), Foundation Health Federal Services, Inc., and QualMed, Inc. We also interviewed the lead agents and their staffs for regions 9 and 11 to obtain similar information. Our work was conducted at the Office of CHAMPUS, Aurora, Colorado, and at the Office of the Assistant Secretary of Defense (Health Affairs), Washington, D.C. In addition, we visited the offerors at their headquarters offices and the lead agents at their military treatment facilities. We conducted our review between March 1994 and June 1995 in accordance with generally accepted government auditing standards. CHAMPUS provides funding for health care services from civilian providers for uniformed services beneficiaries. CHAMPUS began in 1956 and was expanded in 1966 to include additional classes of beneficiaries and more comprehensive benefits. These beneficiaries eligible for CHAMPUS include dependents of active-duty members, retirees and their dependents, and dependents of deceased members. CHAMPUS has approximately 5.7 million eligible beneficiaries and has traditionally functioned much like a fee-for-service insurance program. 
Beneficiaries are free to select providers and are required to pay deductibles and copayments, but, unlike with most insurance programs, they are not required to pay premiums. CHAMPUS is part of the overall Military Health Services System (MHSS) that serves active- and nonactive-duty members and includes 127 hospitals and over 500 clinics worldwide. CHAMPUS beneficiaries can also obtain medical care services in military medical facilities on a space-available basis. In fiscal year 1995, the MHSS was budgeted at over $15 billion, of which $3.6 billion, or about 24 percent, was budgeted for CHAMPUS. Because of escalating costs, claims paperwork demands, and general beneficiary dissatisfaction, DOD initiated in the late 1980s, with congressional authority, a series of demonstration projects designed to more effectively contain costs and improve services to beneficiaries. One of these, known as the CHAMPUS Reform Initiative (CRI), was designed by DOD in conjunction with a consulting company. Under CRI, a contractor provided both health care and administrative-related services, including claims processing. The CRI project was one of the first to introduce managed care features to the CHAMPUS program. Beneficiaries under CRI were offered three choices—a health maintenance organization-like option called CHAMPUS Prime that required enrollment and offered enhanced benefits and low-cost shares, a preferred provider organization-like option called CHAMPUS Extra that required use of network providers in exchange for lower cost shares, and the standard CHAMPUS option that continued the freedom of choice in selecting providers and higher cost shares and deductibles. Other features of CRI included use of health care finders for referrals and the application of utilization management. The project also contained resource sharing features whereby the contractor, to reduce overall costs, could provide staff or other resources to military treatment facilities to treat beneficiaries in these facilities. Although DOD’s initial intent under CRI was to award three competitively bid contracts covering six states, only one bid—made by Foundation Health Corporation—covering California/Hawaii was received. Because of the lack of competition, DOD ended up awarding a negotiated fixed-price, at-risk contract with price adjustment features to Foundation. Although designated as fixed price, the contract contained provisions for sharing risks between the contractor and the government. Foundation delivered services under this contract between August 1988 and January 1994. Before the contract expired, DOD began a new procurement for the CRI California/Hawaii contract that resulted in the competition’s narrowing down to four bidders. In July 1993, DOD awarded a 5-1/2 year (1 half-year plus 5 option years), $3.5 billion contract to Aetna Government Health Plans, with health care services beginning on February 1, 1994. Because a bid protest was sustained on this procurement, this contract was recompeted, although Aetna was allowed to proceed with its contract until a new contract was awarded. In late 1993, in response to requirements in the DOD Appropriation Act for Fiscal Year 1994, the Department announced plans for implementing a nationwide managed care program for the MHSS that would be completely implemented by May 1997. Under this program, known as TRICARE, the United States is divided into 12 health care regions. 
An administrative organization, the lead agent, is designated for each region and coordinates the health care needs of all military treatment facilities in the region. Under TRICARE, seven managed care support contracts will be awarded covering DOD’s 12 health care regions. DOD estimates that over a 5-year period these contracts will cost about $17 billion. The TRICARE managed care support contracts retain the fixed-price, at-risk, and triple-option health benefit features of CRI as well as many other CRI features. An important change, however, involves including in the contract tasks to be performed by the contractor that are specific to military treatment facilities in the regions, in addition to the standard requirements. Since the announcement of DOD’s plan for implementing managed care contracts nationwide, three contracts have been awarded, as shown in table II.1. [Table II.1 is not reproduced here; the awardees it lists are Foundation Health Federal Services, Inc. (two contracts) and QualMed, Inc. (one contract).] The current schedule for awarding the remaining four contracts appears in table II.2. [Table II.2 is not fully reproduced here; it listed, for each remaining procurement, the region(s), the actual or planned RFP issue date, and the organizations submitting best and final proposals. The legible entries include one group of offerors (Aetna Government Health Plans, Inc.; BCC/PHP Managed Health Company; Foundation Health Federal Services, Inc.; and QualMed, Inc.), a second group (CaliforniaCare Health Plans (Blue Cross of California); Foundation Health Federal Services, Inc.; and QualMed, Inc.), and a procurement covering regions 9, 10, and 12 (recompetition).] The Office of CHAMPUS, an organization within the Office of the Assistant Secretary of Defense (Health Affairs) conducts the managed care support procurements. In conducting these procurements, DOD must follow the requirements in the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement. In addition, the Office of CHAMPUS Acquisition Manual provides further guidance for conducting procurements. The major steps in the procurement process are described in this appendix. The request for proposal (RFP) contains the detailed specifications, instructions to offerors in responding to the RFP, and evaluation factors that DOD will consider in making the award. The RFP requires that offerors submit both a technical and a business (price) proposal, and offerors are told that the technical content will account for 60 percent of the scoring weight and the price, 40 percent. In preparing the technical proposal, offerors are required to address 13 different tasks: (1) health care services; (2) contractor responsibilities for coordination and interface with the lead agent and military treatment facilities; (3) health care providers’ organization, operations, and maintenance; (4) enrollment and beneficiary services; (5) claims processing; (6) program integrity; (7) fiscal management and controls; (8) management; (9) support services; (10) automatic data processing; (11) contingencies for mobilization; (12) start-up and transitions; and (13) resource support program. Experience and performance are other evaluation factors. Offerors must describe the approaches they would take in accomplishing these tasks. While offerors are not told the specific weights assigned the individual tasks, they are told their order of importance.
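The 60/40 split between technical content and price is the one scoring parameter the RFP language above discloses; how DOD converts evaluated prices into a score, and the individual task weights, are procurement sensitive and are not described in this report. As a purely illustrative sketch, and assuming a generic conversion in which the lowest evaluated price earns full price credit, the Python fragment below shows how a 60 percent technical and 40 percent price weighting could roll up into a single ranking. All offeror names, scores, and prices are hypothetical.

# Hypothetical illustration of a 60 percent technical / 40 percent price roll-up.
# The price-to-score conversion (lowest evaluated price earns 100 points) is an
# assumption for illustration, not DOD's actual, undisclosed methodology.

TECHNICAL_WEIGHT = 0.60
PRICE_WEIGHT = 0.40

def price_score(evaluated_price, lowest_price):
    # Assumed conversion: the lowest evaluated price receives full credit;
    # higher prices are scaled down proportionally.
    return 100.0 * lowest_price / evaluated_price

def combined_score(technical, evaluated_price, lowest_price):
    # Weighted combination of the technical score (0-100) and the price score.
    return (TECHNICAL_WEIGHT * technical
            + PRICE_WEIGHT * price_score(evaluated_price, lowest_price))

# Hypothetical proposals: technical score (0-100) and evaluated price ($ billions).
proposals = {
    "Offeror A": (86.5, 3.6),
    "Offeror B": (79.0, 3.3),
    "Offeror C": (91.0, 3.9),
}
lowest = min(price for _, price in proposals.values())

for name, (tech, price) in proposals.items():
    print(f"{name}: combined score = {combined_score(tech, price, lowest):.1f}")

Under these assumed numbers, the larger technical weight means a price advantage can only partially offset a weaker technical proposal, which is the intent of weighting technical content more heavily than price.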
In preparing the business proposal, offerors must provide support for both their administrative and health care prices and justify their health care prices by addressing seven cost factors over which the offerors have some control: (1) HMO option penetration rates (enrollment), (2) utilization management, (3) provider discounts, (4) coordination of benefits/third-party liability, (5) resource sharing savings, (6) resource sharing expenditures, and (7) enrollment fee revenues. Offerors must also provide trend data for costs that the offeror is considered likely to have little or no control over such as price inflation. In evaluating proposals, since these factors are considered uncontrollable, the government substitutes its own estimates for the offerors’ so that all offerors are treated equally. Offerors must also pledge an equity amount to absorb losses if health care costs exceed the amount proposed. In evaluating proposals, DOD determines whether offerors have the financial resources to meet this pledge, and the equity amount is also applied as part of the methodology in evaluating prices. Before the proposals’ due date, offerors are free to submit questions on clarification of requirements or further program information. Offerors can continue to submit questions up until the close of discussions before best and final offers are due. Upon receipt of the offerors’ proposals, a Source Selection Evaluation Board (SSEB) evaluates the technical proposals according to detailed evaluation criteria. The board size depends on the number of offerors and, in recent procurements, has numbered about 80 people. Board members are selected from offices such as the Assistant Secretary of Defense (Health Affairs), the military Surgeons General, the military treatment facilities, and the Office of CHAMPUS. A chairperson heads the board, which is divided into teams to review the various tasks and subtasks. The worksheets used in these evaluations contain both the specifications and the criteria upon which to base a judgment. A Business Proposal Evaluation Team (BPET) evaluates the business proposals. A chairperson also heads this team, which comprises about 10 people, divided between a team that primarily evaluates administrative costs and another that primarily evaluates health service costs. The team evaluating administrative costs is supported by the Defense Contract Audit Agency, which performs a cost analysis of the administrative costs bid. The team evaluating health service costs consists primarily of consultants, some of whom are actuaries. In their evaluation, they use specially developed criteria as well as a government-developed cost estimate. Another consultant ensures the financial viability of the offerors, including whether they have the fiscal capacity to absorb the amount of equity offered, which would be at risk if losses were to be incurred under the contract. A Source Selection Advisory Council (SSAC) is an oversight board that reviews the work of the SSEB and BPET and provides consultation advice to the two teams. The SSAC comprises about six executive-level personnel. DOD does not normally award a contract after the initial evaluations, although nothing precludes an award at that time. Instead, DOD notifies offerors in writing of weaknesses and deficiencies identified in the initial evaluation and prepares questions relating to them. This gives the offerors an opportunity to correct the weaknesses and deficiencies and improve their proposals. 
In addition to the questions provided offerors, DOD holds face-to-face discussions to clarify and resolve any outstanding issues. DOD then requests best and final offers, and offerors submit their revised proposals, including any desired price revisions. Upon receipt of the best and final offers, the SSEB and BPET evaluate revisions to the initial proposals, and the SSAC reviews the work of the two boards. DOD then completes final scoring and prepares reports of the evaluations. DOD can conduct preaward surveys before award if outstanding issues remain to be resolved. This survey can include an on-site visit to an offeror or subcontractor. A senior official, designated as the Source Selection Authority, selects the winning offeror using reports prepared by the SSEB, BPET, and SSAC. The official prepares a written report justifying the final selection. Following selection of the winning offeror, unsuccessful offerors can learn why they were not selected. Offerors are individually told of the deficiencies and weaknesses in their proposals. This can serve as the basis for preparing improved proposals for subsequent procurements. The period between contract award and the start of health care delivery is referred to as the transition period. During this period, the contractor must perform many tasks, including assembling a provider network, establishing service centers, getting the claims processing system operational, and beginning the process of enrolling beneficiaries into the HMO-like option. Throughout the evaluation process, evaluators are requested, as part of the “lessons learned” process, to identify problems or suggest potential changes to improve future procurements. The lessons learned can be as minor as correcting specification references or as major as changing evaluation procedures.
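One step in the price evaluation described earlier in this appendix, the substitution of the government’s estimates for the cost factors that offerors cannot control, lends itself to a short illustrative sketch. The factor names and dollar amounts below are hypothetical, and the sketch is not the BPET’s actual methodology, which is procurement sensitive; it only illustrates how substituting common estimates for uncontrollable factors puts all offerors on the same footing.

# Hypothetical sketch: controllable cost factors are evaluated as the offeror
# proposed them, while uncontrollable factors (e.g., price inflation) are
# replaced with the government's estimates so all offerors are compared alike.

CONTROLLABLE_FACTORS = {"utilization management savings", "provider discounts"}

GOVERNMENT_ESTIMATES = {                 # $ millions; hypothetical values
    "price inflation": 120.0,
    "beneficiary population growth": 45.0,
}

def normalized_adjustment(proposed):
    total = 0.0
    for factor, value in proposed.items():
        if factor in CONTROLLABLE_FACTORS:
            total += value                            # offeror's own estimate stands
        else:
            total += GOVERNMENT_ESTIMATES[factor]     # government estimate substituted
    return total

offeror_bid = {                          # $ millions; savings shown as negative values
    "utilization management savings": -80.0,
    "provider discounts": -60.0,
    "price inflation": 95.0,             # replaced by 120.0 during evaluation
    "beneficiary population growth": 40.0,  # replaced by 45.0 during evaluation
}

print(f"normalized cost adjustment: {normalized_adjustment(offeror_bid):.1f} ($ millions)")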
Pursuant to a congressional request, GAO reviewed defense health care, focusing on: (1) procurement process problems identified by the bid protest experiences; (2) the Department of Defense's (DOD) actions to improve and help ensure the fairness of the procurement process; and (3) what problems and concerns remain and whether further actions are needed. GAO found that: (1) DOD has changed its managed care procurement process to address such past problems as its failure to evaluate bidders' proposed prices according to solicitation criteria, the lack of communication between technical and price evaluators, and its failure to properly evaluate bidders' cost containment approaches; (2) although DOD has revised its evaluation methodology and has added new discussion requirements to improve future procurements and ensure better treatment of bidders, protests are likely to continue, given the vast sums of money at stake and the relatively small expense of protesting; (3) DOD may have difficulty meeting the congressional deadline for awarding all contracts by September 1996, since procurements have been taking twice as long as planned; (4) DOD has tried to make up for procurement delays by reducing its transition period after contract award for contractors to deliver health care, but this action has created major risks; and (5) DOD must establish required qualifications for evaluation board members, since their tasks have become so specialized.
You are an expert at summarizing long articles. Proceed to summarize the following text: The United States has approximately 360 commercial sea and river ports that handle more than $1.3 trillion in cargo annually. A wide variety of goods travels through these ports each day—including automobiles, grain, and millions of cargo containers. While no two ports are exactly alike, many share certain characteristics such as their size, proximity to a metropolitan area, the volume of cargo they process, and connections to complex transportation networks. These characteristics can make them vulnerable to physical security threats. Moreover, entities within the maritime port environment are vulnerable to cyber-based threats because they rely on various types of information and communications technologies to manage the movement of cargo throughout the ports. These technologies include terminal operating systems, which are information systems used to, among other things, control container movements and storage; industrial control systems, which facilitate the movement of goods using conveyor belts or pipelines to structures such as refineries, processing plants, and storage tanks; business operations systems, such as e-mail and file servers, enterprise resources planning systems, networking equipment, phones, and fax machines, which support the business operations of the terminal; and access control and monitoring systems, such as camera surveillance systems and electronically enabled physical access control devices, which support a port’s physical security and protect sensitive areas. All of these systems are potentially vulnerable to cyber-based attacks and other threats, which could disrupt operations at a port. While port owners and operators are responsible for the cybersecurity of their operations, federal agencies have specific roles and responsibilities for supporting these efforts. The National Infrastructure Protection Plan (NIPP) establishes a risk management framework to address the risks posed by cyber, human, and physical elements of critical infrastructure. It details the roles and responsibilities of DHS in protecting the nation’s critical infrastructures; identifies agencies that have lead responsibility for coordinating with federally designated critical infrastructure sectors (maritime is a component of one of these sectors—the transportation sector); and specifies how other federal, state, regional, local, tribal, territorial, and private-sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. The NIPP establishes a framework for operating and sharing information across and between federal and nonfederal stakeholders within each sector. These coordination activities are carried out through sector coordinating councils and government coordinating councils. Further, under the NIPP, each critical infrastructure sector is to develop a sector- specific plan that details the application of the NIPP risk management framework to the sector. As the sector-specific agency for the maritime mode of the transportation sector, the Coast Guard is to coordinate protective programs and resilience strategies for the maritime environment. Further, Executive Order 13636, issued in February 2013, calls for various actions to improve the cybersecurity of critical infrastructure. These include developing a cybersecurity framework; increasing the volume, timeliness, and quality of cyber threat information shared with the U.S. 
private sector; considering prioritized actions within each sector to promote cybersecurity; and identifying critical infrastructure for which a cyber incident could have a catastrophic impact. More recently, the Cybersecurity Enhancement Act of 2014 further refined public-private collaboration on critical infrastructure cybersecurity by authorizing the National Institute of Standards and Technology to facilitate and support the development of a voluntary set of standards, guidelines, methodologies, and procedures to cost-effectively reduce cyber risks to critical infrastructure. In addition to these cyber-related policies and law, there are laws and regulations governing maritime security. One of the primary laws is the Maritime Transportation Security Act of 2002 (MTSA) which, along with its implementing regulations developed by the Coast Guard, requires a wide range of security improvements for the nation’s ports, waterways, and coastal areas. DHS is the lead agency for implementing the act’s provisions, and DHS component agencies, including the Coast Guard and the Federal Emergency Management Agency (FEMA), have specific responsibilities for implementing the act. To carry out its responsibilities for the security of geographic areas around ports, the Coast Guard has designated a captain of the port within each of 43 geographically defined port areas. The captain of the port is responsible for overseeing the development of the security plans within each of these port areas. In addition, maritime security committees, made up of key stakeholders, are to identify critical port infrastructure and risks to the port areas, develop mitigation strategies for these risks, and communicate appropriate security information to port stakeholders. As part of their duties, these committees are to assist the Coast Guard in developing port area maritime security plans. The Coast Guard is to develop a risk-based security assessment during the development of the port area maritime security plans that considers, among other things, radio and telecommunications systems, including computer systems and networks that may, if damaged, pose a risk to people, infrastructure, or operations within the port. In addition, under MTSA, owners and operators of individual port facilities are required to develop facility security plans to prepare certain maritime facilities, such as container terminals and chemical processing plants, for deterring a transportation security incident. The implementing regulations for these facility security plans require written security assessment reports to be included with the plans that, among other things, contain an analysis that considers measures to protect radio and telecommunications equipment, including computer systems and networks. MTSA also codified the Port Security Grant Program, which is to help defray the costs of implementing security measures at domestic ports. Port areas use funding from this program to improve port-wide risk management, enhance maritime domain awareness, and improve port recovery and resilience efforts through developing security plans, purchasing security equipment, and providing security training to employees. FEMA is responsible for administering this program with input from Coast Guard subject matter experts. Like threats affecting other critical infrastructures, threats to the maritime IT infrastructure are evolving and growing and can come from a wide array of sources. 
Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by, among other things, natural disasters, defective computer or network equipment, software coding errors, and careless or poorly trained employees. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled insiders, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives, which can include seeking monetary gain or pursuing a political, economic, or military advantage. For example, adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives—sometimes referred to as “advanced persistent threats”—pose increasing risks. They make use of various techniques— or exploits—that may adversely affect federal information, computers, software, networks, and operations, such as a denial of service, which prevents or impairs the authorized use of networks, systems, or applications. Reported incidents highlight the impact that cyber attacks could have on the maritime environment, and researchers have identified security vulnerabilities in systems aboard cargo vessels, such as global positioning systems and systems for viewing digital nautical charts, as well as on servers running on systems at various ports. In some cases, these vulnerabilities have reportedly allowed hackers to target ships and terminal systems. Such attacks can send ships off course or redirect shipping containers from their intended destinations. For example, according to Europol’s European Cybercrime Center, a cyber incident was reported in 2013 (and corroborated by the FBI) in which malicious software was installed on a computer at a foreign port. The reported goal of the attack was to track the movement of shipping containers for smuggling purposes. A criminal group used hackers to break into the terminal operating system to gain access to security and location information that was leveraged to remove the containers from the port. In June 2014 we reported that DHS and the other stakeholders had taken limited steps with respect to maritime cybersecurity. In particular, risk assessments for the maritime mode did not address cyber-related risks; maritime-related security plans contained limited consideration of cybersecurity; information-sharing mechanisms shared cybersecurity information to varying degrees; and the guidance for the Port Security Grant Program did not take certain steps to ensure that cyber risks were addressed. In its 2012 National Maritime Strategic Risk assessment, which was the most recent available at the time of our 2014 review, the Coast Guard did not address cyber-related risks to the maritime mode. As called for by the NIPP, the Coast Guard completes this assessment on a biennial basis, and it is to provide a description of the types of threats the Coast Guard expects to encounter within its areas of responsibility, such as ensuring the security of port facilities, over the next 5 to 8 years. The assessment is to be informed by numerous inputs, such as historical incident and performance data, the views of subject matter experts, and risk models, including the Maritime Security Risk Analysis Model, which is a tool that assesses risk in terms of threat, vulnerability, and consequences. 
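The statement does not describe the model’s internals, but risk tools of this kind are commonly expressed as a function of the three elements just named. The sketch below uses the generic formulation risk = threat × vulnerability × consequence with hypothetical scenario values; it is an illustration of the concept rather than the Coast Guard’s actual model, and it shows why a scenario set with no cyber entries leaves cyber risk effectively unscored.

# Generic illustration (not the Coast Guard's model): each scenario is scored
# as risk = threat x vulnerability x consequence. All values are hypothetical.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    threat: float         # relative likelihood an attack is attempted (0-1)
    vulnerability: float  # probability the attempt succeeds (0-1)
    consequence: float    # expected impact if it succeeds ($ millions)

    def risk(self) -> float:
        return self.threat * self.vulnerability * self.consequence

scenarios = [
    Scenario("small-boat attack on a terminal", 0.05, 0.30, 400.0),
    Scenario("malware in a terminal operating system", 0.20, 0.60, 250.0),
    Scenario("spoofed access-control credentials", 0.15, 0.40, 90.0),
]

# If the two cyber scenarios above were omitted, as cyber inputs were in 2012,
# the resulting ranking would contain no cyber-related entries at all.
for s in sorted(scenarios, key=Scenario.risk, reverse=True):
    print(f"{s.name}: risk score = {s.risk():.1f}")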
However, we found that while the 2012 assessment contained information regarding threats, vulnerabilities, and the mitigation of potential risks in the maritime environment, none of the information addressed cyber-related risks or provided a thorough assessment of cyber-related threats, vulnerabilities, and potential consequences. Coast Guard officials attributed this gap to limited efforts to develop inputs related to cyber threats to inform the risk assessment. For example, the Maritime Security Risk Analysis Model did not contain information related to cyber threats. The officials noted that they planned to address this deficiency in the next iteration of the assessment, which was to be completed by September 2014, but did not provide details on how cybersecurity would be specifically addressed. We therefore recommended that DHS direct the Coast Guard to ensure that the next iteration of the maritime risk assessment include cyber-related threats, vulnerabilities, and potential consequences. DHS concurred with our recommendation, and the September 2014 version of the National Maritime Strategic Risk Assessment identifies cyber attacks as a threat vector for the maritime environment and assigns some impact values to these threats. However, the assessment does not identify vulnerabilities of cyber-related assets. Without fully addressing threats, vulnerabilities, and consequences of cyber incidents in its assessment, the Coast Guard and its sector partners will continue to be hindered in their ability to appropriately plan and allocate resources for protecting maritime-related critical infrastructure. As we reported in June 2014, maritime security plans required by MTSA did not fully address cyber-related threats, vulnerabilities, and other considerations. Specifically, three area maritime security plans we reviewed from three high-risk port areas contained very limited, if any, information about cyber threats and mitigation activities. For example, the three plans included information about the types of information and communications technology systems that would be used to communicate security information to prevent, manage, and respond to a transportation security incident; the types of information considered to be sensitive security information; and how to securely handle such information. They did not, however, identify or address any other potential cyber-related threats directed at or vulnerabilities in these systems or include cybersecurity measures that port-area stakeholders should take to prevent, manage, and respond to cyber-related threats and vulnerabilities. Similarly, nine facility security plans from the nonfederal organizations we met with during our 2014 review generally had very limited cybersecurity information. For example, two of the plans had generic references to potential cyber threats, but did not have any specific information on assets that were potentially vulnerable or associated mitigation strategies. Officials representing the Coast Guard and nonfederal entities acknowledged that their facility security plans at the time generally did not contain cybersecurity information. Coast Guard officials and other stakeholders stated that the area and facility-level security plans did not adequately address cybersecurity because the guidance for developing the plans did not require a cyber component. Officials further stated that guidance for the next iterations of the plans, which were to be developed in 2014, addressed cybersecurity.
However, in the absence of a maritime risk assessment that addressed cyber risk, we questioned whether the revised plans would appropriately address the cyber-related threats and vulnerabilities affecting the maritime environment. Accordingly, we recommended that DHS direct the Coast Guard to use the results of the next maritime risk assessment to inform guidance for incorporating cybersecurity considerations for port area and facility security plans. While DHS concurred with this recommendation, as noted above, the revised maritime risk assessment does not address vulnerabilities of systems supporting maritime port operations, and thus is limited as a tool for informing maritime cybersecurity planning. Further, it is unclear to what extent the updated port area and facility plans include cyber risks because the Coast Guard has not yet provided us with updated plans. Consistent with the private-public partnership model outlined in the NIPP, the Coast Guard helped establish various collaborative bodies for sharing security-related information in the maritime environment. For example, the Maritime Modal Government Coordinating Council was established to enable interagency coordination on maritime security issues, and members included representatives from DHS, as well as the Departments of Commerce, Defense, Justice, and Transportation. Meetings of this council discussed implications for the maritime mode of the President’s executive order on improving critical infrastructure cybersecurity, among other topics. In addition, the Maritime Modal Sector Coordinating Council, consisting of owners, operators, and associations from within the sector, was established in 2007 to enable coordination and information sharing. However, this council disbanded in March 2011 and was no longer active when we conducted our 2014 review. Coast Guard officials stated that maritime stakeholders had viewed the sector coordinating council as duplicative of other bodies, such as area maritime security committees, and thus there was little interest in reconstituting the council. In our June 2014 report, we noted that in the absence of a sector coordinating council, the maritime mode lacked a body to facilitate national-level information sharing and coordination of security-related information. By contrast, maritime security committees are focused on specific geographic areas. We therefore recommended that DHS direct the Coast Guard to work with maritime stakeholders to determine if the sector coordinating council should be reestablished. DHS concurred with this recommendation, but has yet to take action on this. The absence of a national-level sector coordinating council increases the risk that critical infrastructure owners and operators will be unable to effectively share information concerning cyber threats and strategies to mitigate risks arising from them. In 2013 and 2014 FEMA identified enhancing cybersecurity capabilities as a funding priority for its Port Security Grant Program and provided guidance to grant applicants regarding the types of cybersecurity-related proposals eligible for funding. However, in our June 2014 report we noted that the agency’s national review panel had not consulted with cybersecurity-related subject matter experts to inform its review of cyber-related grant proposals. This was partly because FEMA had downsized the expert panel that reviewed grants.
In addition, because the Coast Guard’s maritime risk assessment did not include cyber-related threats, grant applicants and reviewers were not able to use the results of such an assessment to inform grant proposals, project review, and risk-based funding decisions. Accordingly, we recommended that DHS direct FEMA to (1) develop procedures for grant proposal reviewers, at both the national and field level, to consult with cybersecurity subject matter experts from the Coast Guard when making funding decisions and (2) use information on cyber-related threats, vulnerabilities, and consequences identified in the revised maritime risk assessment to inform funding guidance for grant applicants and reviewers. Regarding the first recommendation, FEMA officials told us that since our 2014 review, they have consulted with the Coast Guard’s Cyber Command on high-dollar value cyber projects and that Cyber Command officials sat on the review panel for one day to review several other cyber projects. FEMA officials also provided examples of recent field review guidance sent to the captains of the port, including instructions to contact Coast Guard officials if they have any questions about the review process. However, FEMA did not provide written procedures at either the national level or the port area level for ensuring that grant reviews are informed by the appropriate level of cybersecurity expertise. FEMA officials stated the fiscal year 2016 Port Security Grant Program guidance will include specific instructions for both the field review and national review as part of the cyber project review. With respect to the second recommendation, since the Coast Guard’s 2014 maritime risk assessment does not include information about cyber vulnerabilities, as discussed above, the risk assessment would be of limited value to FEMA in informing its guidance for grant applicants and reviewers. As a result, we continue to be concerned that port security grants may not be allocated to projects that will best contribute to the cybersecurity of the maritime environment. In summary, protecting the nation’s ports from cyber-based threats is of increasing importance, not only because of the prevalence of such threats, but because of the ports’ role as conduits of over a trillion dollars in cargo each year. Ports provide a tempting target for criminals seeking monetary gain, and successful attacks could potentially wreak havoc on the national economy. The increasing dependence of port activities on computerized information and communications systems makes them vulnerable to many of the same threats facing other cyber-reliant critical infrastructures, and federal agencies play a key role by working with port facility owners and operators to secure the maritime environment. While DHS, through the Coast Guard and FEMA, has taken steps to address cyber threats in this environment, they have been limited and more remains to be done to ensure that federal and nonfederal stakeholders are working together effectively to mitigate cyber-based threats to the ports. Until DHS fully implements our recommendations, the nation’s maritime ports will remain susceptible to cyber risks. Chairman Miller, Ranking Member Vela, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have at this time. If you or your staff have any questions about this testimony, please contact Gregory C. Wilshusen, Director, Information Security Issues at (202) 512-6244 or wilshuseng@gao.gov.
GAO staff who made key contributions to this testimony are Michael W. Gilmore, Assistant Director; Bradley W. Becker; Jennifer L. Bryant; Kush K. Malhotra; and Lee McCracken. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's maritime ports handle more than $1.3 trillion in cargo each year; a disruption at one of these ports could have a significant economic impact. Increasingly, port operations rely on computerized information and communications technologies, which can be vulnerable to cyber-based attacks. Federal entities, including DHS's Coast Guard and FEMA, have responsibilities for protecting ports against cyber-related threats. GAO has designated the protection of federal information systems as a government-wide high-risk area since 1997, and in 2003 expanded this to include systems supporting the nation's critical infrastructure. This statement addresses (1) cyber-related threats facing the maritime port environment and (2) steps DHS has taken to address cybersecurity in that environment. In preparing this statement, GAO relied on work supporting its June 2014 report on cybersecurity at ports (GAO-14-459). Similar to other critical infrastructures, the nation's ports face an evolving array of cyber-based threats. These can come from insiders, criminals, terrorists, or other hostile sources and may employ a variety of techniques or exploits, such as denial-of-service attacks and malicious software. By exploiting vulnerabilities in information and communications technologies supporting port operations, cyber-attacks can potentially disrupt the flow of commerce, endanger public safety, and facilitate the theft of valuable cargo. In its June 2014 report, GAO determined that the Department of Homeland Security (DHS) and other stakeholders had taken limited steps to address cybersecurity in the maritime environment. Specifically: DHS's Coast Guard had not included cyber-related risks in its biennial assessment of risks to the maritime environment, as called for by federal policy. Specifically, the inputs into the 2012 risk assessment did not include cyber-related threats and vulnerabilities. Officials stated that they planned to address this gap in the 2014 revision of the assessment. However, when GAO recently reviewed the updated risk assessment, it noted that the assessment did not identify vulnerabilities of cyber-related assets, although it did identify some cyber threats and their potential impacts. The Coast Guard also did not address cyber-related risks in its guidance for developing port area and port facility security plans. As a result, port and facility security plans that GAO reviewed generally did not include cyber threats or vulnerabilities. While Coast Guard officials noted that they planned to update the security plan guidance to include cyber-related elements, without a comprehensive risk assessment for the maritime environment, the plans may not address all relevant cyber threats and vulnerabilities. The Coast Guard had helped to establish information-sharing mechanisms called for by federal policy, including a sector coordinating council, made up of private-sector stakeholders, and a government coordinating council, with representation from relevant federal agencies. However, these bodies shared cybersecurity-related information to a limited extent, and the sector coordinating council was disbanded in 2011. Thus, maritime stakeholders lacked a national-level forum for information sharing and coordination. DHS's Federal Emergency Management Agency (FEMA) identified enhancing cybersecurity capabilities as a priority for its port security grant program, which is to defray the costs of implementing security measures.
However, FEMA's grant review process was not informed by Coast Guard cybersecurity subject matter expertise or a comprehensive assessment of cyber-related risks for the port environment. Consequently, there was an increased risk that grants were not allocated to projects that would most effectively enhance security at the nation's ports. GAO concluded that until DHS and other stakeholders take additional steps to address cybersecurity in the maritime environment—particularly by conducting a comprehensive risk assessment that includes cyber threats, vulnerabilities, and potential impacts—their efforts to help secure the maritime environment may be hindered. This in turn could increase the risk of a cyber-based disruption with potentially serious consequences. In its June 2014 report on port cybersecurity, GAO recommended that the Coast Guard include cyber-risks in its updated risk assessment for the maritime environment, address cyber-risks in its guidance for port security plans, and consider reestablishing the sector coordinating council. GAO also recommended that FEMA ensure funding decisions for its port security grant program are informed by subject matter expertise and a comprehensive risk assessment. DHS has partially addressed two of these recommendations since GAO's report was issued.
This section of the report describes the paper- and electronic-based check collection processes, presents statistics on the use of electronic and nonelectronic payments and types of check processing, and describes the Federal Reserve's role in check collection. Interbank checks are cleared and settled through an elaborate check-collection process that includes presentment and final settlement. Check presentment occurs when the checks are delivered or their images are transmitted to the paying banks for payment, and the paying banks must decide whether to honor or return the checks (see fig. 1). Settlement of checks occurs when the collecting banks are credited and the paying banks are debited, usually through accounts held at either the Federal Reserve or correspondent banks. In the paper-based check collection process, banks of first deposit generally sort deposited checks by destination and dispatch them for collection. Banks of first deposit physically can collect a paper check through several methods: Direct presentment of the paper check to the paying bank; Exchange of the paper check at a clearing house in which the bank of first deposit and the paying bank are members; Collection of the paper check through an intermediary, such as a correspondent bank or a Federal Reserve Bank; or Some combination of the above methods. When a paying bank decides not to pay a check, the bank typically returns the dishonored check to the bank of first deposit. Under the Uniform Commercial Code, the paying bank generally has until midnight of the day following presentment (the "midnight deadline") to return dishonored checks or send notices of dishonor. The paying bank may return a dishonored check, commonly referred to as a return item, directly to the bank of first deposit, through a clearing house association, if applicable, or through a returning bank (a bank handling a returned check), including the Federal Reserve. Regulation CC was promulgated by the Federal Reserve Board in 1988 to implement the Expedited Funds Availability Act of 1987 (EFAA), which establishes the maximum periods of time that banks can hold funds deposited into accounts before those funds must be made available for withdrawal. Among other things, the EFAA and its implementing Regulation CC generally require banks to make funds from local checks available by the second business day after the day of deposit; funds from nonlocal checks must be available by the fifth business day after the day of deposit. At each step, the check must be processed physically and then shipped to its destination by air or ground transportation. Some have suggested that truncating paper checks, or stopping them before they reach the paying bank, could result in lower costs to process checks and benefits to both the banking industry and the public. Under Regulation CC, the term "truncate" means to remove an original check from the collection or return process. In its place, the recipient receives either a substitute check or, by agreement, information relating to the original check (including data taken from the magnetic ink character recognition line of the original check or an electronic image of the original check), with or without the subsequent delivery of the original check (see fig. 2). Essentially, check imaging is a process through which a paper check is scanned and a digital image is taken of the front and back of the paper check.
The paper check may then at some point be destroyed, and the images may be stored in an archive maintained by the bank for retrieval if needed. When a paper check is imaged depends on the structure of a bank's back office operations. Some banks have the capability to image a paper check at their branches, while others transport the paper to centralized locations where the paper is imaged. Once the images are taken, an image cash letter (ICL) is assembled and sent to the paying bank directly or to an intermediary (such as the Federal Reserve, a correspondent bank, or an image exchange processor) for ultimate presentment to the paying bank (see fig. 3). Since Check 21 was enacted, imaging technology has been further refined so that it is possible for a bank to image a paper check at its branches or automated teller machines (ATM)—commonly referred to as branch or ATM capture. In addition, some banks are beginning to offer a service to their customers called remote deposit capture, in which merchants can scan the paper checks they receive and electronically deposit those images at the bank. As discussed in the introduction to this report, electronic check processing was hampered by certain legal impediments that Check 21 addressed. Moreover, as we reported in 1998, perceptions about consumer preferences for receiving canceled checks also deterred electronic check processing. Because, under Check 21, checks drawn on any particular bank can be truncated by any bank across the country, banks cannot return the original canceled paper checks to their customers once they are imaged. At the time of our 1998 report, Federal Reserve officials and bank officials with whom we spoke expressed a belief that many consumers wanted their canceled checks returned. The popularity of the paper check as a retail payment instrument in the United States is waning. The Federal Reserve has estimated that the number of checks used in the United States peaked during the mid-1990s at around 50 billion checks per year. In its 2007 study, the Federal Reserve highlighted the decline in check usage as a retail payment instrument. It reported that both the number of checks written and checks paid declined from 2003 through 2006. In 2006, 33.1 billion checks were written, compared with 37.6 billion checks in 2003, and the number of checks paid decreased from 37.3 billion to 30.6 billion over the same period. The number of checks written differs from checks paid because paper checks that have been converted into automated clearing house (ACH) payments were included in the figure for checks written. Additionally, the Federal Reserve concluded that the share of retail payments made electronically was growing, while the share of check payments of total noncash payments was declining. Electronic payments, including debit and credit cards, ACH payments (including check conversions), and electronic benefit transfers (EBT), amounted to two-thirds of the total number of noncash payments, which in 2006 totaled 93.3 billion. The share of check payments declined from 46 percent in 2003 to 33 percent in 2006 (see fig. 4). While check use has declined, check processing increasingly has become electronic. As shown in figure 5, from June 2006 through June 2008, the number of imaged checks deposited by collecting banks and received by paying banks grew steadily. In June 2006, banks deposited 206 million checks as images; by June 2008, they deposited 1.1 billion checks as images.
Similarly, the number of checks received as images by the paying banks has grown. In June 2006, paying banks received 89 million items; by June 2008, they received almost 852 million items. However, the number of substitute checks has not declined, but has increased from 117 million in June 2006 to 283 million in June 2008. These checks represent paper that must be presented physically to paying banks through the collection system. The Federal Reserve operates a comprehensive, nationwide system for clearing and settling checks drawn on banks located throughout the United States. Its check processing offices accept paper check deposits and transport the paper checks to the paying banks. Since the effective date of Check 21, the Federal Reserve also sends and receives check images between banks. The Federal Reserve offers imaged check products—commonly referred to as the Check 21 products (Fed Forward, Fed Receipt, and Fed Return)—for a fee to banks that use its check collection services. According to the Federal Reserve Board's 2007 Annual Report, of the approximately 10 billion checks (about one-third of the total 30.6 billion paid checks) processed through the Federal Reserve in 2007, 42.2 percent were deposited as images and 24.6 percent were received using Check 21 products. Further, in the month of July 2008, the proportion of checks deposited and presented as images using the Federal Reserve's Check 21 products increased to 77.8 percent and 54.4 percent, respectively. As a result of the declining check volumes, the Federal Reserve developed a long-term plan for restructuring its check processing operations. In 2003, the Federal Reserve had 45 check offices. Since then, the Federal Reserve has closed a number of offices or gradually eliminated their check processing operations. In June 2007, the Federal Reserve announced that its check services system would be consolidated into four regional check processing sites. As of September 30, 2008, the Federal Reserve had 15 check offices and was working toward the objective of maintaining four offices at Atlanta, Cleveland, Dallas, and Philadelphia by the end of the first quarter of 2010. Given the significant declines in paper check deposit volumes, the Federal Reserve's Retail Payments Office believes that the Federal Reserve likely will accelerate the consolidation schedule even further, reducing its check processing offices to perhaps one office by mid-2010. Check truncation has not yet resulted in overall gains in economic efficiency for the Federal Reserve or for the banks we surveyed, but Federal Reserve and bank officials expect efficiencies in the future. The expectation for electronic processing of checks was that it would lead to gains in economic efficiency—that is, removing paper from the payment stream would lead to lower costs. Our analysis of Federal Reserve cost accounting data suggests that its costs may have increased since the passage of Check 21, which may reflect concurrent maintenance of its paper processing infrastructure, investments in equipment and software for electronic check processing, and incurred costs associated with closing check processing sites. Estimates varied on whether costs were lower for private banks as a result of the check truncation that Check 21 facilitated, reflecting differences in the ways in which different banks handle checks and payments and differences among cost accounting systems.
For example, several of the 10 largest banks noted that maintaining a dual paper-electronic infrastructure to date had prevented them from achieving overall lower costs, although they had seen reduced transportation and labor costs. Check imaging and the use of substitute checks appear to have had a neutral impact on banks' fraud losses. We found, and the Federal Reserve's budget documents report, that check truncation has not decreased Federal Reserve costs, although it contributed to decreased labor hours and transportation costs in Federal Reserve check services. To distinguish the effects of check truncation from other factors influencing the Federal Reserve's total costs for check clearing services, we modified econometric cost functions that Federal Reserve economists have used to assess the effects of check volumes on total costs. In particular, we sought to distinguish the effect of the increased use of check truncation following passage of Check 21 on total costs from the concurrent effects of the decrease in the number of checks written in the United States, changes in the volume of checks processed by the Federal Reserve, the Federal Reserve's consolidation of its check services, and costs of labor, software, and other expenses associated with the check processing services. With this consolidation of check offices, the Federal Reserve has incurred an estimated $115 million in costs from 2003 through 2007, including severance and other payments, which would increase total check services costs. However, the Federal Reserve did recover all costs for its check services from 2005 through 2007. Consistent with our results, the Federal Reserve's annual budget reports from 2006 through 2008 reported that the Federal Reserve's budget for check services experienced cost overruns. Most recently, the 2008 annual budget review reported that the expense overrun was due mainly to greater systemwide costs in preparation for additional restructuring of check services (costs included $34.0 million for accrual of severance, equipment impairments, and other expenses). The 2007 annual budget review noted that total expenses for check services were to increase by $11.0 million, reflecting higher costs for Check 21-related supplies and equipment, as well as additional resources necessary to facilitate further consolidation into five regional check-adjustments sites. The 2006 annual budget review similarly stated: "Total check service expenses were budgeted to increase by $5.7 million, or 0.9 percent from the 2005 estimate. The increase reflects one-time costs to prepare further consolidations of check operations, as well as other initiatives underway to improve the efficiency of check operations, including investments in Check 21 technology to accommodate increased volumes." The Planning and Control System (PACS) is the Federal Reserve's cost accounting system for recording expenses, which includes the costs of its check operations. We analyzed PACS data on check processing to determine whether electronic check processing had an effect on total processing costs. Our analysis builds on previous research by economists in the Federal Reserve. The analysis includes estimation of econometric cost functions using quarterly data from the first quarter of 1994 through the fourth quarter of 2007. We chose 1994 as the beginning point for the analysis based on conversations with Federal Reserve officials about the data and in order to provide adequate coverage for the period before and after enactment of Check 21.
These cost functions estimate the effects that different explanatory variables may have on total Federal Reserve costs for check services. Explanatory variables include the total volume of checks processed, the introduction of electronic processing or the volume of checks processed electronically, the number of return items, the number of Federal Reserve check processing offices, whether Check 21 was in effect, and wage and price indexes. The cost functions permit isolation of the effect of Check 21 from the effects of other variables on the Federal Reserve’s total costs for check services. The results do not demonstrate any gains in economic efficiency as measured by lower costs in the Federal Reserve’s check operations for the period since the passage of Check 21 through 2007. In particular, the variable that would measure a change in total costs following the effective date of Check 21 did not have a statistically significant effect on total costs. See appendix II for a more detailed discussion of the estimated cost functions. In part, the results reflect costs associated with the concurrent closing of the Federal Reserve’s check processing sites. While these closings should reduce costs in the long run, restructuring expenses incurred as part of the closings (such as severance pay for workers) represent up-front costs. The need to maintain dual infrastructures for paper and electronic check services also may explain the results. While Check 21 removed a barrier to electronic processing by creating the substitute check, Check 21 did not require that paper be removed from the process. So, the Federal Reserve continues to process paper checks and must maintain the infrastructure to process paper checks as it invests in new equipment to electronically process checks. Further, the creation of the substitute check also required investment in new equipment to print those instruments. For instance, a Federal Reserve Retail Payment Office official noted that the high-speed printing machines for substitute checks cost approximately $200,000 each and the Atlanta processing site had purchased about 12 of these machines. Although the move to electronic check services apparently has not led yet to overall cost savings, the Federal Reserve has seen decreases in transportation costs and work hours. With reduced paper volumes accompanying check truncation, the Federal Reserve’s transportation costs for check services decreased approximately 11 percent from the fourth quarter of 2001 through the fourth quarter of 2007 (see fig. 6). The Federal Reserve also has seen a decrease in the number of work hours for check services. Total work hours dropped from 2.6 million in the fourth quarter of 2001 to 1.3 million in the fourth quarter of 2007, a decrease of approximately 48 percent (see fig. 7). Since the transition to imaging has been gradual throughout the banking industry, the 10 largest U.S. banks still are maintaining paper-based processing systems. As previously noted, Check 21 did not require banks to take any action other than the acceptance of the substitute check. The 10 largest banks in the United States, based on deposit size, generally have large national branch networks and process large volumes of checks; consequently, they have a financial incentive to reduce the amount of paper they have to sort and transport. 
In 2007, these banks individually had at least 350 million paper checks deposited by their customers, and some of them had considerably higher deposits, up to approximately 5 to 7 billion checks. But the 10 banks have achieved various levels of electronic processing. Two of the 10 banks have not converted their check processing systems to imaging, but plan to do so by early 2009, and seven banks have migrated to check imaging to some extent, but with imaging volumes at various levels. As of 2007, on the basis of our data collection instrument, the share of checks that the seven banks sent as electronic images ranged from almost 4 percent to 60 percent of their overall check deposits, although imaged volumes have been growing for some of the seven banks. However, the seven imaging banks are maintaining dual processing systems to collect on checks deposited at their institutions. If a paying bank cannot receive an image, the collecting bank or an intermediary must either print a substitute check from the image or present the original paper check. Officials from four banks provided us with information on how the continued use of paper presentment has affected their transition to check imaging and their level of cost savings. Federal Reserve officials noted that the willingness of private banks to invest in the equipment needed to process checks electronically demonstrated those banks' expectation of lower costs. One bank official told us that the bank still has to print substitute checks for presentment to the small institutions that cannot receive images, which adds to the bank's costs. Another bank noted that for banks that would prefer to receive only paper, it will deposit the image with either the Federal Reserve or another intermediary that then will print the substitute check to present for payment. An official representing this bank stated that the bank has to incur the additional cost of printing a substitute check or, if it goes through an intermediary, to pay the intermediary's prices. The same bank official added that maintaining paper operations has delayed the ultimate potential savings from electronic check processing because the bank had to keep in place its transportation network to continue delivering paper checks. A third bank official reported to us that fees paid to clear checks would be reduced as more and more banks converted to imaging. Finally, a bank official from the fourth bank advised us that mid-size and regional banks were behind in their conversion to imaging because they are too large to outsource their check business, but not large enough to have a financial incentive to invest in check imaging technology. Thus, they continued to use local clearinghouses where they could exchange their checks at very low costs. This official noted that these banks need a reasonable business case for investing in check imaging. The declining volumes of paper checks also may be inhibiting the migration of some banks to check imaging. As previously noted, from 2003 through 2006, the number of checks paid had declined from about 37 billion to over 30 billion checks. According to one bank trade association, some banks are still undecided about converting to imaging because they recognize that check volume is declining and wonder why they should invest in check processing technology. During our interviews, some of the seven imaging banks raised the issue of declining check volumes as an additional complication preventing some banks from converting to check imaging.
Officials from the Federal Reserve acknowledged that, while the volume of checks is declining, paper checks would continue to be used long enough to warrant banks' investments in the technology for a more efficient check processing method. In both the paper-based and the image-based check processing systems, the bank of first deposit bears most of the cost of check collection; thus, it has the most financial incentive to convert to an image-based system. In addition, under EFAA, the bank of first deposit is required to release funds to the depositor within specified time periods; thus, it has an additional incentive for speeding up processing. The paying bank has the least market incentive to migrate to imaging because it does not incur the costs for collection, such as transportation and clearing fees. Officials representing some of the four banks with the highest volumes of check image deposits and receipts raised concerns with us that some banks are refusing to migrate to the new imaging technology and that some action may be needed to encourage them to do so. One official told us that paying banks should be paying more of the cost of check processing so that they would have a financial incentive to receive images. The official specifically stated that a group of banks has refused to implement the technology and accept images. Another bank official said that approximately 5 to 7 percent of banks have refused to convert to imaging and may need regulatory pressure to adopt the technology. Under a paper-based check system, paper checks have to be sorted and transported at every step until they are presented to paying banks; as a result, transportation and labor are among the banks' highest costs. From our analysis of responses to our data collection instrument, officials from the largest banks told us that labor was their largest category of expenditures related to check processing, followed by transportation. However, none of the seven banks that process checks electronically expect transportation to be a large expenditure category for future processing operations if imaging technology is fully implemented. According to our bank interviews, air transportation networks of some of the largest U.S. banks have been reduced. Four banks (those with the highest volumes of check image deposits and receipts) have reduced intrabank and interbank transportation routes for checks, particularly air routes. By the end of 2009, two of the four will have eliminated their air transportation networks entirely. However, three of the four banks have not reduced costs for couriers and local transportation to the same extent as for air transportation because they still transport paper to central processing offices or to local clearinghouses. Two bank officials we interviewed told us that as more paper checks are imaged at the branch level, the ground transportation costs of banks should be reduced. One bank official advised us that the earlier the bank can transmit the check information to its processing system and capture the checks as images, the lower the bank's costs. The official added that the bank is working toward implementing branch "capture" (that is, conversion to an image) because the institution achieves better float management and eliminates courier transportation from its cost equation.
Another bank official told us that because his bank's transportation costs (for paper checks going from the branches to the central processing office) would not be reduced until the branches could capture check images, the bank had developed a pilot program for capture in a few branches. Although imaging was expected to result in savings in labor and transportation, the costs associated with installing and maintaining imaging equipment and the need to continue to maintain paper processing and clearing capabilities have prevented the realization of cost savings. According to a third bank, it is unclear when it will recover its significant investment in imaging equipment, image archives, and image exchange enhancements, if ever, due in part to the absence of universal adoption of check imaging. In contrast, we were told that transportation costs for banks that have not migrated to electronic processing may increase because, as the overall volume of paper checks declines (due to check imaging and consumer preference), transporting the remaining checks will become more expensive on a per-check basis. According to Federal Reserve officials, when fewer banks require the services of a particular transportation network, per-check transportation costs will increase for those banks still using the services because the network is transporting a smaller number of checks. Costs for the last bank on a specific route will be especially high. According to one Federal Reserve official, in the future overnight mail may be the only practical option for these banks. In congressional testimony, the Director of the Federal Reserve Board's Division of Reserve Bank Operations and Payment Systems stated, "As banks improve their technological capabilities, they can reduce their reliance on air and ground transportation, especially shared transportation arrangements. The banks that remain tied to paper checks will continue to bear the costs of those arrangements." Furthermore, bank officials told us that they had additional technology costs when they converted to a check imaging system. To exchange checks electronically with other banks, banks needed to adapt their systems both to send and receive images. The technologies required for electronic check processing include hardware and software to image checks, archive images, and transmit image cash letters for collection. From the analysis of responses to our data collection instrument, six banks projected that the technology costs would continue to be in the "great" or "greatest" range for the foreseeable future. On the basis of our interviews, the two largest imaging banks have recovered or will recover the investments they made for check imaging by 2009. An official representing one of these banks stated that the bank recovered its investment in imaging mostly through savings in labor and transportation. Moreover, the bank had less equipment and lower maintenance costs on the remaining equipment, and it needed less back office space because of electronic processing. The banks that have not recovered their investments still were investing in image archive and image exchange enhancements. Similar to the Federal Reserve, banks have to deal with substitute checks and, thus, may be required to invest in the printing of substitute checks. From the analysis of responses to our data collection instrument, officials representing banks that have deposited images categorized expenditures for the printing of substitute checks in the "some" to "very great" range.
In a follow-up interview, one bank official told us that the bank decided to outsource the printing rather than make the investment, since substitute checks were a temporary measure and would not be used once all institutions were image-enabled. Thus, this investment did not make sense for the bank. Another bank official acknowledged that substitute check printing has cost the bank hundreds of thousands of dollars to implement. Smaller banks also have been migrating to electronic check processing. But according to our interviews with three smaller banks (in this case, one bank and two credit unions), they have migrated all of their volumes to electronic processing rather than operating two processing systems, as the largest banks have been doing. In addition, the three smaller banks told us that they typically will use a third-party processor, an image exchange processor like Endpoint Exchange, the Federal Reserve, or another intermediary, such as a correspondent bank. For example, a credit union deposited and received images through the Federal Reserve Banks, while a medium-size bank, with assets of $4.4 billion, deposited and received images through an image processor and correspondent. Officials representing the smaller banks told us that it may be easier for small banks to completely migrate to imaging because their check volumes are minuscule in comparison to the volumes of the largest banks and their back offices generally are less complicated than those of the largest banks. The bank with $4.4 billion in assets received approximately 15 million checks for deposit in 2007, compared with the 10 largest banks, among which the bank with the lowest volume of check deposits had 350 million checks deposited. Moreover, generally when these institutions migrate to check imaging, they acquire the imaging services of their intermediary or processor rather than creating their own. In our interviews, representatives of the smaller banks described how check imaging had affected their operations and costs. The bank with assets of $4.4 billion reduced its costs by reducing its transportation network. According to a bank official, the bank also expects to secure cost savings from its local courier routes in the future. But the bank had to invest in software to transfer check images to its correspondent bank. An official from a small credit union told us that check imaging allowed it to reduce its labor costs by half, after spending almost $6,000 for technology. Another credit union told us that it was able to eliminate three full-time equivalent positions because check processing and related operations (such as researching customer issues on payments) became more efficient. According to an official at the credit union, while the institution made some investments in technology and software, it had recovered the investment costs because of the staff reductions. Based on a recent American Bankers Association (ABA) survey of its members about fraud in deposit accounts, the analysis of responses to our data collection instrument, and our interviews with banks, we found that the use of substitute checks and check imaging has had a neutral effect on fraud losses. In 2007, the ABA reported that in its survey of members, more than 92 percent of the bank respondents answered that they had not incurred any losses from substitute checks in 2006.
Of the 8 percent of banks that responded that they had incurred both fraud and non-fraud losses from substitute checks, more than 80 percent also responded that these losses did not occur because the instruments were substitute checks instead of original checks. From the analysis of responses to our data collection instrument, the six largest banks that have migrated to electronic check processing noted that check imaging and the use of substitute checks had not affected the prevalence of losses from bad checks and that imaging has had a neutral or minimal effect on check fraud. Officials representing two of these banks explained in subsequent interviews that in the post-Check 21 world, since checks are being processed faster, banks can catch a fraudulent item sooner. A third official told us that he had seen a slight decline in fraud losses since Check 21. Finally, from the analysis of the responses to our data collection instrument, four of the largest banks noted that they had not taken additional actions to alleviate the potential threat of losses from images of bad checks. On the basis of our structured bank consumer interviews, we found only a small percentage of consumers who preferred to receive canceled checks with their checking account statement. Of the bank consumers we interviewed, 12 (or about 11 percent) wanted their canceled checks returned, while 37 (or about 35 percent) preferred to use online banking capabilities to review their check payment activity. In general, consumers expressed a variety of preferences for how banks should provide them with the most complete information about their check payments activity. Also, most of the consumers were not concerned significantly about being able to demonstrate proof of payment using a substitute check or check image rather than a canceled check. Few of the consumers reported that they suffered errors from the check truncation process. In addition to conducting consumer interviews, we reviewed consumer complaint data provided by federal banking regulators and found relatively few consumer complaints relating to Check 21. We found that a small percentage of bank consumers in our structured interviews preferred receiving canceled checks, while the remaining consumers preferred reviewing their check payments activity online or in a less paper-intensive format, such as image statements. As we reported in an earlier report, perceptions about consumer preferences for the receipt of their canceled checks deterred the adoption of electronic check processing. Based on the bank consumers we interviewed, it appears that their preference for canceled checks is diminishing. In our interviews, consumers expressed a variety of preferences for how banks should provide them with the most complete information about their check payments activity (see fig. 8). In particular, 12 of the 107 consumers, or about 11 percent, told us that they preferred receiving their canceled checks with their checking account statement. Some of these consumers believed that canceled checks were better for recordkeeping and more secure than electronic images in terms of protecting their privacy. Others in this group stated that they wanted to be able to review their handwriting and other details of the canceled paper check to ensure that the checks were not counterfeit or the signatures forged. However, most bank consumers we interviewed accepted the use of online banking to review their check payments activity.
Specifically, 37 of the 107 consumers, or about 35 percent, told us that they preferred reviewing check information and images online. Several consumers stated that they did not need the “extra paper” from canceled checks and image statements and that online reviewing was more secure than receiving canceled checks. Some consumers stated that they enjoyed the convenience of reviewing their check payments activity online at any time. Twenty-eight of the 107 consumers, or 26 percent, preferred a combination of the various methods (check images, online review, paper checks, and substitute checks). Most bank consumers reported that they were not concerned significantly about demonstrating proof of payment despite the changes to their checking accounts resulting from check truncation. For example, a consumer might pay a debt using a check, but the creditor might not properly record the payment, and then ask the consumer to demonstrate proof that he or she paid. Under the check truncation process, the consumer most likely would have access only to a substitute check or an image of the canceled check and not the original, canceled check. In our structured interviews, we asked consumers about their experience with demonstrating proof of payment. We found that 33 of the 108 consumers, or about 31 percent, had never been required to demonstrate proof of payment using canceled checks, substitute checks, or an image statement. We found that 58 of the 108 consumers, or about 54 percent, had used a canceled check to demonstrate proof of payment. We also found that 33 of the 108, or about 31 percent, had used a substitute check or image statement to demonstrate proof of payment. Most of these consumers reported that they had no difficulty using a substitute check or image statement, but some consumers reported that creditors would not accept an image showing only the front of the check so the consumer had to get copies of the front and back of the check from the bank. We then asked consumers whether they were concerned about having to demonstrate proof of payment using a substitute check or image statement rather than a canceled check. We found that 53 of the consumers, or about 49 percent, were “slightly” or “not at all” concerned about their ability to demonstrate proof of payment using a substitute check or image statement (see fig. 9). In particular, many of these consumers were confident that a substitute check or image statement contained all of the information necessary to demonstrate proof of payment. However, 35 of the consumers, or 32 percent, were “extremely” or “very” concerned about using a substitute check or image statement. Many of these consumers were concerned that having an image of only the front of the check might not be sufficient, particularly if they had experienced such difficulty in the past. Few of the bank consumers we interviewed reported that they suffered errors from the check truncation process. We asked consumers whether they had experienced errors such as double-posting of an item, a forged signature on a check, a counterfeit check, or some other error involving canceled checks, substitute checks, and image statements. The consumers reported more errors involving canceled checks than substitute checks or image statements. Specifically, 28 of the 108 consumers, or about 26 percent, reported an error involving a canceled check and using it to resolve the error. 
In contrast, only one consumer we interviewed reported suffering an error related to double-posting of a debit and using a substitute check to resolve the error. Also, 7 of the 74 consumers who reported that they received image statements, or about 9 percent, reported errors involving an image statement and using it to resolve errors they experienced. See figure 10 for the distribution of reported errors involving canceled checks and image statements. Based on interviews with trade association and service vendor officials, we found that some banks have been correcting errors associated with double-posting of a check before consumers experience them. They told us that double-posting initially was a significant problem for banks as they adopted check truncation technology. However, they also noted that many banks have now incorporated protection in their computer system to identify duplicates before they reach the consumer, so that many consumers never see them when they review their bank statements. We found that a small percentage of consumers complained to the federal banking regulators about matters relating to Check 21. In its April 2007 report, the Federal Reserve Board found that less than 1 percent of all complaints received by federal banking regulators related to Check 21. The results of our review of consumer complaint data on Check 21 corroborated the Federal Reserve Board’s conclusion. Specifically, we reviewed consumer complaint data from the four federal banking regulators from October 28, 2004, through March 31, 2008, and found 172 complaints were submitted about Check 21. In comparison, in each year from 2005 through 2007, the regulators received approximately 35,000 consumer complaints overall. Of the 172 complaints relating to Check 21, we found that 78, or about 45 percent, were from consumers who wanted to continue receiving canceled checks. The federal banking regulators responded to such complaints by noting that banks have no legal requirement to return canceled checks to consumers and that the return of canceled checks was dependent on the contractual agreement between consumers and their banks. However, in these instances, the data showed that the interested banks generally agreed to send canceled checks to consumers whenever possible. In addition, another 30 of the 172 complaints, or about 17 percent, were from consumers concerned about the quality or clarity of image statements. Some of the banks we interviewed also mentioned image quality as a prominent consumer complaint, but we learned that they continue to seek a solution to image quality problems. To the extent that banks have implemented electronic check processing, bank consumers have realized both benefits and costs relating to faster processing and access to information about their checking accounts. Faster check processing has helped some banks extend the cut-off time for same-day credit on deposits, which can result in faster availability of deposited funds. In addition, bank industry officials and some of the consumers we interviewed believe it is beneficial to receive simpler checking account statements with check images rather than canceled checks. Also, bank industry officials cited benefits to consumers from immediate access to information about checking account activity and improved customer service. 
In addition, consumers can benefit specifically from a provision of Check 21 because they have the right to expedited recredit of their checking accounts if banks make certain errors associated with substitute checks. However, on the basis of our consumer and bank interviews, the extent to which consumers have benefited from expedited recredit is unclear. We also found that some consumers may incur fees related to receiving canceled checks and check images with their checking account statements. Based on our review of available data from 2001 through 2006, it appears that fees for canceled checks have increased and fees for check images have remained relatively flat. In addition, the amount of the fees can vary depending on the type of checking account the consumer maintains. We found that banks may have extended the cut-off time for accepting deposits for credit on the same business day, due to the check truncation process and other check-system improvements. Generally, banks had established a cut-off hour of 2:00 p.m. or later for receipt of deposits at their main or branch offices and a cut-off of 12:00 p.m. or later for deposits made at ATMs and other off-premise facilities. These cut-off times provided the banks with necessary time for handling checks and transporting them overnight to paying banks. The check truncation process and check imaging provide collecting banks with additional time to present checks to paying banks. As a result, banks may be able to establish a later cut-off hour, which would give consumers more time to deposit funds at the bank for same-day credit. Bank officials told us that they have started to adjust their cut-off times in some geographic areas in response to the growth of check truncation. Of the seven largest U.S. banks that have started to migrate to check imaging, five told us that they have extended some of their deposit cut-off times at certain branches. For instance, one bank on average extended its cut-off time by 2 hours in the Northeast, and another bank had plans in place to make a similar 2-hour extension in selected markets. A third bank told us that it has extended the cut-off time for accepting deposits for credit on the same business day at certain ATMs to 8:00 p.m. in several major cities such as Atlanta, Chicago, Los Angeles, and New York. Although some consumers may have additional time for making deposits, they may not be able to withdraw their funds any sooner because the funds availability schedules of Regulation CC have not been amended following enactment of Check 21. The Federal Reserve Board recently concluded that much broader adoption of new technologies and processes by the banking industry must occur before check return times can decline appreciably and thereby permit a modification of the funds availability deadlines. The Federal Reserve Board found that the banks of first deposit learn of the nonpayment of checks faster than they did when EFAA was enacted, but banks still do not receive "most" local or nonlocal checks before they must make funds available for withdrawal. However, the Federal Reserve's decision to consolidate its check-processing regions has had a direct effect on consumers in terms of the availability of their deposited funds under Regulation CC. Specifically, the consolidations have increased the proportion of local checks and thereby reduced the maximum permissible hold period from 5 business days to 2 business days for many checks.
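To illustrate the schedules involved, the following is a minimal sketch of the maximum hold periods described above. It assumes a simplified business-day calendar (weekends only, no banking holidays) and ignores next-day items and the various exceptions Regulation CC permits; the function names and example dates are illustrative and are not drawn from the regulation itself.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance a date by a number of business days (weekends skipped; holidays ignored)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 through Friday=4
            days -= 1
    return current

def latest_availability(deposit_day: date, is_local: bool) -> date:
    """Latest day funds must be available under the general schedules described above:
    the second business day after deposit for local checks and the fifth business day
    for nonlocal checks. Simplified: ignores next-day items and exception holds."""
    return add_business_days(deposit_day, 2 if is_local else 5)

deposit = date(2008, 9, 8)  # a Monday
print(latest_availability(deposit, is_local=False))  # 2008-09-15 (fifth business day)
print(latest_availability(deposit, is_local=True))   # 2008-09-10 (second business day)
```

Under this simplification, reclassifying a nonlocal check as local, as the consolidation of check-processing regions effectively does for many checks, moves the latest required availability date from the fifth to the second business day after deposit, even though, as discussed below, many banks already make funds available sooner than the schedules require.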
As previously noted, the Federal Reserve's check-processing regions are being consolidated into four regions by the first quarter of 2010. Because the processing regions are larger (and will become even more so), the number of local checks has been increasing. In addition, based on the Federal Reserve Board's study and our own research, it appears that banks are making depositor funds available earlier than the EFAA-established funds-availability schedules require. Specifically, the Federal Reserve's Check 21 study found that banks make about 90 percent of all consumer deposits of local and nonlocal checks available more promptly than required by EFAA. Moreover, it found that banks make funds available from the majority of consumer check deposits within 1 business day. We reviewed the customer account agreements for 5 of the 10 largest U.S. banks and found that the general policy for each bank is to make funds available to consumers on the business day after the day of deposit. Bank industry officials and some consumers we interviewed noted that consumers may realize other benefits relating to access to information about check payments. For example, bank consumers may receive simpler checking account statements using image technology. So-called "image statements" include a sheet of paper with multiple pictures or images of checks that were written by the consumer and processed since the last statement. In our interviews with 108 bank consumers, 75 consumers, or about 69 percent, stated that they received image statements. When asked about their preferred method of receiving information about check payments, 11 of the 108 consumers interviewed, or about 10 percent, stated that they preferred receiving image statements over canceled checks or online review of check payments activity. Some of the 11 consumers told us that they preferred receiving image statements because, while they wanted a paper record of their check payments activity, they preferred not to handle and store canceled checks. Bank consumers who prefer to manage their checking account electronically also might realize benefits from immediate access to information about check payments. With the check imaging process and online access to their checking accounts, consumers can review check payments and images of their paid checks as soon as they are posted to the account and may recognize a problem sooner. With paper check processing, consumers must wait until the checking account statement arrives in the mail to review their check payments activity. Also, improved access to information can be beneficial to consumers when they need to work with the bank to resolve a problem. One of the expected consumer benefits of Check 21 is the right to expedited recredit, but the extent to which consumers have benefited is unclear. The expedited recredit provision is considered a benefit to consumers because other banking laws governing checks do not prescribe specific amounts or time frames by which banks must recredit a customer's account.
The right to expedited recredit exists if the consumer asserts in good faith that the bank charged the consumer's account for a substitute check provided to the consumer and that either the check was not properly charged to the consumer's account or the consumer has a warranty claim pertaining to the substitute check. The bank must recredit the customer's account unless it has provided the customer the original check or a copy of the original check that accurately represents all information on the original check and demonstrated to the consumer that the substitute check was properly charged to the consumer's account. On the basis of our consumer and bank interviews, it appears that a small number of bank consumers have filed expedited recredit claims. In our interviews with 108 consumers, 9, or about 8 percent, stated that they had received substitute checks with their main checking account statement, and none had exercised the right to expedited recredit. On the basis of the data provided to us by the 10 largest banks through the data collection instrument (which are not representative of the entire industry), we found that 3 banks received a small number of claims related to expedited recredit in 2007. Specifically, one bank reported that it fielded fewer than 1,000 claims; one received fewer than 10 claims; and the third bank reported that it received 1 claim. In an interview, a representative of another bank told us that the bank had not received any claims. Six other banks did not report any information on the number of claims received. Some bank consumers can incur fees for receiving canceled checks and image statements, and the amount can depend on the type of checking account the consumer maintains. We reviewed data regarding bank fees for canceled checks and image statements acquired from Informa Research Services in conjunction with a report on bank fees. The data indicated that the average amount of fees for obtaining canceled checks generally increased from 2001 through 2006, and the average amount of fees for obtaining image statements remained relatively flat. For example, as shown in figure 11, the average check enclosure fee more than doubled from $1.42 to $3.11. During the same period, the average check imaging fee rose from $0.40 to $0.49. The Informa data also indicated that banks may charge different amounts for check enclosures and check imaging depending on the type of checking account. Specifically, the Informa data indicated that primarily non-interest, free checking accounts had the highest fees for check enclosures and check imaging. The lowest check enclosure and check imaging fees were found primarily with senior checking accounts. For example, in 2006 the average check enclosure fees for a non-interest, free checking account and a senior checking account were $3.75 and $2.45, respectively, compared to $3.11—the average check enclosure fee of all accounts Informa surveyed. Furthermore, the average check imaging fee for a non-interest, free checking account in 2006 was $0.84, and the average check imaging fee for a senior checking account was $0.18, compared to $0.49—the average check imaging fee of all accounts Informa surveyed. A relatively small number of the bank consumers we interviewed reported that their bank charged a fee for obtaining canceled checks or image statements, and some of the banks we interviewed reported that they charged a fee for providing canceled checks.
Specifically, 23 bank consumers, or about 21 percent of the consumers we interviewed, told us that their bank charged a fee for obtaining canceled checks. Two consumers stated that they switched to online review of their check payments activity to avoid paying a fee for receiving canceled checks. Also, as we reported above, 12 of the 108 bank consumers we interviewed preferred receiving canceled checks to review their check payments activity. Moreover, 18 bank consumers, or about 17 percent, reported that their bank charged a fee for obtaining image statements. Two of the banks we interviewed charged a fee if consumers wanted to receive canceled checks. For example, one bank stated that its customers paid $2 for receiving canceled checks if they also paid a monthly service fee, but other bank officials we interviewed stated that their banks did not charge a fee for image statements. In addition, faster check processing may cause consumers to lose “float.” Float is the time between the payment transaction and the debiting of funds from a bank consumer’s account. The check truncation process may result in checks clearing a consumer’s account more quickly than under traditional check processing. However, deposited funds may not be available to consumers more quickly because, as noted above, Regulation CC’s funds availability deadlines have not changed. According to our recent report on bank fees, consumer groups and bank representatives believe that the potential exists for increased incidences of overdrafts if funds were debited from a consumer’s account faster than deposits were made available for withdrawal. However, we identified little research on the extent to which check truncation has affected occurrences of overdrafts and nonsufficient funds fees. We provided a copy of a draft of this report to the Federal Reserve Board, which provided us with written comments that are reprinted in appendix III. The Federal Reserve Board agreed with our overall conclusion that, over the past four years, the banking industry has made substantial progress toward establishing an end-to-end electronic check-processing environment. In commenting on this report, the Federal Reserve Board noted that the Federal Reserve Banks expect that by year-end 2009, more than 90 percent of their check deposits and presentments will be electronic. They also commented that the ongoing transformation to an electronic check-processing environment has not been without cost. As noted in our report, the Federal Reserve Banks have reduced their transportation costs and work hours associated with their check services. And, according to the Federal Reserve Board, they earned a net income of $326 million for providing check services from 2005 through 2007. The Federal Reserve Board concurred with a number of consumer benefits identified in the report: faster funds availability on check deposits due to later deposit deadlines, quicker access to account information, and improved customer service. In addition, they provided us with technical comments, which we incorporated as appropriate. We also sent a draft of this report to the Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and Office of Thrift Supervision. Only the Office of the Comptroller of the Currency provided us with technical comments, which we incorporated as appropriate. 
We provided sections of the draft of this report to bank officials for their technical review and several of them provided us technical comments, which we incorporated as appropriate. We are providing copies of this report to other interested Congressional committees. We are also providing copies of this report to the Chairman, Board of Governors of the Federal Reserve System; Chairman, Federal Deposit Insurance Corporation; Comptroller of the Currency, Office of the Comptroller of the Currency; Director, Office of Thrift Supervision; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-8678 or jonesy@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The Check Clearing for the 21st Century Act of 2003 (Check 21) mandated that GAO evaluate the implementation and administration of Check 21. The report objectives are to: (1) determine the gains in economic efficiency from check truncation and evaluate the costs and benefits to banks and the Federal Reserve System (Federal Reserve) from check truncation, (2) assess consumer acceptance of the check truncation process resulting from Check 21, and (3) evaluate the costs and benefits to consumers from check truncation. To estimate the gains in economic efficiency from check truncation and evaluate the costs and benefits to banks from check truncation, we separately analyzed costs for the check operations of the Federal Reserve and for a selected group of banks. We used data from the Federal Reserve cost accounting system, known as the Planning and Control System or PACS, for the period beginning 10 years prior to the effective date of Check 21 (1994) through 2007. We modeled the Federal Reserve’s total check processing costs as different functions of variables, such as the volume of checks processed, the volume of returned checks, the number of Federal Reserve check processing offices, and the general indexes on wage and price. The specified cost functions allowed us to use standard econometric methods for estimating the effects of the variables on the Federal Reserve’s total check processing costs for 1994 through 2007. Because data on prices of input factors associated with Federal Reserve’s check processing operations are not available, we also used in our estimation data from the Department of Commerce’s Bureau of Economic Analysis (BEA) and the Department of Labor’s Bureau of Labor Statistics (BLS) as alternative measurements for the prices of these input factors. For example, we used average hourly earning for all private sectors from BLS as an alternative measurement for the Federal Reserve’s labor cost, BEA’s price deflator for equipment and software by nonresidential producers as an alternative measurement for communications equipment and transit cost, and BEA’s Gross Domestic Product price deflator as an alternative measurement for costs of all other input factors. We assessed the quality of all the above data and found them to be sufficiently reliable for our purposes. We also discussed Federal Reserve check processing costs and our econometric cost model with staff at the Federal Reserve. 
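A minimal sketch of the type of estimation described above, written in Python; this is not GAO's actual code, and the file name and column names (total_cost, items_presented, return_items, offices, wage_index, equip_deflator, gdp_deflator, year, quarter) are hypothetical stand-ins for a merged quarterly dataset built from PACS and the BEA/BLS series.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly file combining PACS cost/volume data with BEA/BLS price proxies.
df = pd.read_csv("fed_check_costs_quarterly.csv")

# Check 21 took effect in October 2004, so the indicator is 1 from 2004 Q4 onward.
df["check21"] = ((df["year"] > 2004) | ((df["year"] == 2004) & (df["quarter"] >= 4))).astype(int)

# Log-transform the cost, volume, and price-proxy variables for a log-log cost function.
for col in ["total_cost", "items_presented", "return_items", "offices",
            "wage_index", "equip_deflator", "gdp_deflator"]:
    df["ln_" + col] = np.log(df[col])

model = smf.ols(
    "ln_total_cost ~ ln_items_presented + ln_return_items + ln_offices + check21"
    " + ln_wage_index + ln_equip_deflator + ln_gdp_deflator",
    data=df,
).fit()
print(model.summary())  # the coefficient on ln_items_presented is the cost elasticity of volume

In a log-log specification like this, each slope coefficient is read as an elasticity, which is how the later finding that a 1 percent increase in total presentment is associated with a 1.34 percent increase in total cost should be interpreted.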
See appendix II for a detailed discussion of our econometric cost functions. While the Federal Reserve has consistent cost accounting data, cost accounting varies throughout the banking industry, preventing a similar analysis for private-sector costs. To evaluate the costs and benefits to banks from check truncation, we focused our data collection and analysis on the 10 largest banks in the United States, based on deposit size as of March 25, 2008. The check volume at the 10 largest U.S. banks represents a significant segment of the check paid volume. In 2007, these banks presented almost 13 billion checks for collection out of approximately 30 billion checks, which were paid in 2006. Thus, we determined that these banks should have a financial incentive to reduce the amount of paper that has to be sorted and transported. We created a data collection instrument to obtain qualitative cost information about the following issues: (1) the extent to which the banks deposited and received checks as images; (2) the primary costs related to paper check processing; (3) the extent of the investment that banks made to exchange check images; (4) the level of cost savings banks achieved, if any, including changes in labor and transportation costs through the use of image technology; and (5) the impact of check imaging and the use of substitute checks on the prevalence of bank losses from fraudulent checks. Officials from the Electronic Check Clearing House Organization, commonly known as ECCHO, also reviewed the data collection instrument. We sent it to the 10 banks and received a response from 9. At an early stage of our engagement, we also interviewed an official representing the bank that did not provide a response. We conducted follow-up interviews with a number of the banks requesting clarification of their responses. We also sent the data collection instrument to 12 smaller institutions, which included credit unions, to understand the small bank experience with check imaging. These banks’ assets ranged from less than $500 million to $5 billion and were selected from ECCHO’s list of participating members. In addition, our selection criteria included whether these smaller institutions were located in metropolitan or nonmetropolitan areas. We received completed forms from five of these institutions, but two had not migrated any of their volume to check imaging. We conducted subsequent interviews with the three institutions that had. We made several attempts to contact the nonrespondents through e-mail messages and follow-up telephone calls. In addition, we interviewed officials from a corporate credit union and a banker’s bank. To assess consumer acceptance of the check truncation process resulting from Check 21, we conducted in-depth structured interviews with a total of 108 adult consumers in three locations (Atlanta, Boston, and Chicago) in May 2008. We contracted with NuStats, Inc., a private research and consulting firm, to recruit a sample of consumers who generally represented a range of demographics within the U.S. population in terms of age, education level, and income. However, the consumers recruited for the interviews did not form a random, statistically representative sample of the U.S. population; therefore, we could not generalize the results of the interviews to the relevant total population. Additionally, the self-reported data we obtained from consumers are based on their opinions and memories, which may be subject to error and may not predict their future behavior. 
Consumers had to speak English and meet certain other conditions: having primary responsibility in the household for balancing the financial account that allows paper check writing; having received canceled original checks in paper form with the checking account statement at some point since 2000; and not having participated in more than one focus group or similar in-person study in the 12 months before the interview. We achieved our sample recruitment goals for all demographics, with the exception of the age category “65 plus” and the education category “some high school or less.” In addition, our sample comprised 64 women and 43 men. We considered that the impact of not achieving these goals on our work was minimal. See table 1 for further demographic information on the consumers we interviewed. During these interviews, we obtained information about the experience of consumers with, and their opinions about, changes to their checking accounts resulting from the check truncation process. Our interviews included a number of standardized questions, and more tailored follow-up questions as necessary to more fully understand their answers. All consumers were asked about their current experience with their checking accounts and preferred method of making retail payments. The interview focused on consumer experience with canceled checks, substitute checks and check images, and the possible changes to their checking accounts since Check 21. More specifically, the structured interview of the 108 consumers included questions on the following issues: (1) bank fees charged to them to receive canceled checks, substitute checks or image statements; (2) instances and subsequent resolution of errors involving their checking accounts; (3) their preferred method of receiving information from their bank about check payments activity (such as receiving their canceled checks, reviewing information online, or reviewing an image statement); (4) instances in which they had to demonstrate proof of payment using a canceled check or a check image and their resolutions; (5) their level of concern about using a check image as a proof of payment; and (6) whether their bank had extended its cut-off time for accepting deposits and the consumer’s opinion about the merits of such an action. In addition, we asked nine questions about the consumers’ experience submitting complaints to banks and federal banking regulators. This report does not contain all the results from the consumers’ interviews. We reproduced the text from our structured interview instrument and tabulated the results from the questions in Questions for Consumers about Check 21 Act (GAO-09-09SP). To evaluate the benefits and costs to consumers from check truncation, we interviewed staff from the federal banking regulators—the Board of Governors of the Federal Reserve, the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision—and collected consumer complaints about the implementation of Check 21 that were submitted to these agencies from October 28, 2004, through March 31, 2008. Our analysis of the consumer complaint data helped us identify the issues that we pursued in our structured interviews of 108 consumers. 
While the regulators’ consumer complaint data may be indicative of the relative levels of different types of complaints, we did not rely solely on these data because these voluntary reporting systems rely on complainants to self-select themselves; therefore, the data may not be representative of the experiences of the general public. We also interviewed representatives from consumer advocacy groups, including Consumers Union, the Consumer Federation of America, and the U.S. Public Interest Research Group. Furthermore, we interviewed officials from the American Bankers Association and third-party processors. The data collection instrument discussed above also included questions about the potential benefits and costs of Check 21 for consumers. For example, we asked the banks for information about (1) their policies on returning canceled checks before and after Check 21; (2) the fees they charged to consumers for the return of canceled checks and image statements; (3) their assistance to customers in showing proof of payment using a canceled check, a substitute check, or a check copy; (4) the instances of expedited claims they received on substitute checks and their resolution; and (5) the complaints they have received about matters relating to Check 21 and whether they had changed their cut-off times for deposits at automated teller machines or branches in the last 2 years. In addition, we analyzed the conclusions and the methodology applied in the Federal Reserve Board’s Report to the Congress on the Check Clearing for the 21st Century Act of 2003, published in April 2007, to determine whether we could use the results in our report. The study constituted the Federal Reserve Board’s assessment of the banking industry’s implementation of Check 21 to date, as well as the continued appropriateness of the funds availability requirements of Regulation CC. We interviewed staff from the Federal Reserve Board about the methodology and conclusions in the report and we examined the design, implementation, and analysis of the survey instrument used for the study. We considered the overall strengths and weaknesses of the Federal Reserve’s data collection program, as well as specific questionnaire items relating to Regulation CC. On the basis of our review, we concluded that we could use the results in this report. To determine whether consumers may incur fees for receiving canceled checks and check images since the implementation of Check 21, we reviewed and analyzed data purchased from Informa Research Services (Informa) that included summary-level fee data from 2001 through 2006. The data included information on check enclosure and imaging fees. Informa collected its data by gathering the proprietary fee statements of banks, as well as making anonymous in-branch, telephone, and Web site inquiries for a variety of bank fees. It also received the information directly from its contacts at the banks. The data are not statistically representative of the entire population of depository institutions in the country because the company collects fee data for particular institutions in specific geographical markets so that these institutions can compare their fees against their competitors. That is, surveyed institutions are self-selected into the sample or are selected at the request of subscribers. To the extent that institutions selected in this manner differ from those which are not, results of the survey would not accurately reflect the industry as a whole. 
Informa collects data on more than 1,500 institutions, including a mix of banks, thrifts, credit unions, and Internet-only banks. The institutions from which it collects data tend to be large ones that have a large percentage of the deposits in a particular market. Additionally, the company has access to individuals and information from the 100 largest commercial banks. The summary-level data Informa provided us for each data element included the average amount, the standard deviation, the minimum and maximum values, and the number of institutions for which data were available to calculate the averages. They also provided these summary-level data by institution type (banks and thrifts combined, and credit unions) and size (as shown in table 2). In addition, Informa provided us with data for nine specific geographic areas: California, Eastern United States, Florida, Michigan, Midwestern United States, New York, Southern United States, Texas, and Western United States. We interviewed representatives from Informa to gain an understanding of their methodology for collecting the data and the processes they had in place to ensure the integrity of the data. Reasonableness checks conducted on the data in 2007 identified any missing, erroneous, or outlying data, and Informa Research Services representatives corrected any mistakes that were found. Also, in 2007, we compared the average fee amounts that Informa had calculated for selected fees for 2000, 2001, and 2002 with the Federal Reserve’s “Annual Report to the Congress on Retail Fees and Services of Depository Institutions.” The averages were found to be comparable to those derived by the Federal Reserve. While these tests did not specifically include check enclosure and check image fees, they did confirm our assessment of the Informa data system. Because the assessment conducted for our January 2008 report encompassed the checking fee data we used, we determined that the Informa Research Services data were sufficiently reliable for our current report. We conducted this performance audit from September 2007 to October 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Check Clearing for the 21st Century Act of 2003 (Check 21) was intended to make the check payment system more efficient and less costly by facilitating wider use of electronic check processing without requiring that any bank change its current check collection practices. Prior to Check 21, a bank was required to present an original paper check to the paying bank for payment unless the paying bank agreed to accept presentment in some other form. This required the collecting bank to enter into agreements with all or nearly all of the banks to which it presented checks. Because of these impediments, banks were deterred from making the necessary electronic check processing investments. Check 21 addressed these impediments by authorizing a new paper negotiable instrument (a substitute check), which is the legal equivalent of the original check. 
Other than accepting the substitute check, the act does not require banks to adopt electronic check processing, but it enables banks that want to truncate or remove the original paper checks from the check-collection system to do so more easily. Check 21 facilitates electronic check processing by allowing banks to use imaging technology for collection and create substitute checks from those images for delivery to banks that do not accept checks electronically. To assess the implications for economic efficiency in the Federal Reserve System’s (Federal Reserve) check processing since Check 21 took effect in October 2004, we conducted a standard econometric analysis of the Federal Reserve’s quarterly accounting cost and volume data for the period from 1994 through 2007. This approach allowed us to model total check operating costs as a function of the total check presentment volume and the timing of Check 21, while separating cost effects from other relevant factors such as check return volume, number of check clearing offices, and labor wages. For this report, we refer to banks, thrifts, and credit unions collectively as banks. Many microeconomic textbooks have detailed discussions of cost functions. For example, see Hal R. Varian, Microeconomic Analysis, 3rd edition (New York, N.Y.: W.W. Norton & Company, 1993), chapter 5. The total check operating cost at time t (C) depends on the number of checks (items) processed during that period (N) and the number of return items (R). Total operating cost is expected to have a positive relationship with both the total number of items processed and the number of return items; that is, the coefficients on N and R in equation (1) are expected to be positive. The sign of the coefficient of O, the number of check-processing offices, in equation (1) is ambiguous; it may be positive in the case of a cost savings or negative in the case of an increase in total costs. The coefficient of primary interest is that of Dc21, the Check 21 indicator, in the estimation. Consistent with microeconomic theory, we expect that an increase in input prices (p) will lead to an increase in total cost. For example, higher labor wage rates are expected to lead to higher total cost, seen as positive coefficients for input prices in the estimation. Based on econometric studies, including some that specifically considered economies of scale for check processing, we modified the basic approach of equation (1) to control for quarterly fluctuations and trends over time, and to consider the potential effects of Check 21 on the presence of scale economies in check clearing operations. Some of these studies include Ernst R. Berndt, The Practice of Econometrics: Classic and Contemporary (Mass: Addison-Wesley Publishing, 1996), chapter 3; Robert M. Adams, Paul W. Bauer, and Robin C. Sickles, Federal Reserve Bank of Cleveland, “Scope and Scale Economies in Federal Reserve Payment Processing,” Working Paper 02-13 (November 2002); David B. Humphrey, “Scale Economies at Automated Clearing House,” Journal of Bank Research (Summer 1981), 71-81; and Paul W. Bauer and Dianna Hancock, “Scale Economies and Technological Change in the Federal Reserve ACH Payment Processing,” Federal Reserve Bank of Cleveland Economic Review (1995) vol. 31, no. 3, 14-29. We estimated equation (2) with quarterly data from the Federal Reserve’s Planning and Control System (PACS) for the period from 1994 through 2007. Table 3 shows the summary statistics for selected variables. 
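A minimal sketch of log-linear specifications consistent with the variable definitions above, using our own notation rather than the exact published equations (1) and (2):

\ln C_t = \alpha_0 + \alpha_1 \ln N_t + \alpha_2 \ln R_t + \alpha_3 \ln O_t + \alpha_4 D^{c21}_t + \sum_{k=1}^{K} \beta_k \ln p_{kt} + \varepsilon_t \quad (1)

\ln C_t = \alpha_0 + \alpha_1 \ln N_t + \gamma \left( D^{c21}_t \times \ln N_t \right) + \alpha_2 \ln R_t + \alpha_3 \ln O_t + \alpha_4 D^{c21}_t + \sum_{k=1}^{K} \beta_k \ln p_{kt} + \text{quarterly and trend controls} + \varepsilon_t \quad (2)

Read this way, the pre-Check 21 cost elasticity with respect to presentment volume is \alpha_1 and the post-Check 21 elasticity is \alpha_1 + \gamma, which is how the interaction estimate of -0.25 reported below would be interpreted; the constraint from economic theory mentioned below is, under this reading, plausibly linear homogeneity in input prices, \sum_{k} \beta_k = 1.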
We estimated the logarithm of total check processing cost against the logarithms of total presentment items—image, paper, legacy, and substitute—and other related variables. Table 4 presents the results. The basic specification in table 4, which does not account for a possible different cost structure in check processing, yields mostly statistically insignificant coefficients. However, the coefficient for the total number of items presented is significant and positive, implying that a 1 percent increase in total presentment will result in a 1.34 percent increase in total cost. However, the coefficient for the Check 21 dummy variable (Check21), while negative, is not statistically significant. This result does not provide any support for the hypothesis that the introduction of Check 21 led to a decrease in Federal Reserve costs, although it is not possible to determine the extent to which this may be driven by the concurrent consolidation of Federal Reserve check services sites. Table 4 also shows the results of the estimation incorporating a structural break in the cost function for periods before and after the act as described in equation (2). Though insignificant, the coefficient of total presentment is positive and less than 1, and the coefficient of the interacted variable of total presentment and Check 21 dummy is negative (-0.25). If significant, the sum of the two coefficients would imply that the cost structure for the check operation in the post-Check 21 period would be different from the pre-Check 21 period. However, the relatively short time series data for the post-Check 21 period increase the standard errors for all the coefficients of the interacted variables. Also, although insignificant, the coefficient of the Check 21 dummy is positive, implying that the total cost, on average, is lower in periods before Check 21 than after. In addition to the estimation results shown in table 4, we estimated alternative functional forms used in other similar studies for the relationships in equation (2). Because these functional forms generally require constructing a substantial number of interacted variables, the subsequent multicollinearity and the limited data available make the results subject to high estimation errors and thus difficult to draw clear inferences from. We also tested the effects on the estimates of imposing a constraint suggested by economic theory. The standard errors for most of the coefficient estimates decrease, suggesting a decrease in multicollinearity, but the results are otherwise similar to the results without the constraint in table 4. (To impose this constraint, we made some adjustments to the total costs and input prices; see William H. Greene, Econometric Analysis (Prentice Hall, N.J.: 1993), 503-507.) Given the changes in technology embodied in electronic presentment and check truncation, these results are likely to change with additional quarters of data and the expected continuing increase in electronic presentment as a share of the Federal Reserve’s check processing. Also, as previously mentioned, the Federal Reserve’s ongoing effort to close check clearing office facilities has resulted in one-time consolidation and reorganization charges. These charges are included in the total operating costs, and although we try to control for their effect by including the number of offices variable, it is plausible that the positive sign of the Check 21 dummy in our estimations may be a result of these charges included in the total costs. 
Similarly, our analysis implicitly assumes that the Federal Reserve’s consolidation decisions are independent of the volume of checks that it processes. However, the data are not sufficient to explicitly model a relationship between the volume of checks and expectations about future volumes. We appreciate the opportunity to comment on the GAO’s report titled Check 21 Act: Most Consumers Have Accepted and Banks Are Progressing Towards Full Adoption of Check Truncation. We agree with the GAO’s overall conclusion that, over the past four years, the banking industry has made substantial progress towards establishing an end-to-end electronic check-processing environment. Today, more than three-quarters of checks deposited with the Federal Reserve Banks for collection are deposited electronically, and more than half are presented electronically. The Federal Reserve Banks expect that by year-end 2009, more than 90 percent of their check deposits and presentments will be electronic. This ongoing transformation to an end-to-end electronic check-processing environment has not been without cost. The banking industry and the Federal Reserve Banks have made significant technological investments to facilitate an electronic check-clearing system and have incurred incremental transition costs associated with processing both paper and electronic checks. The Federal Reserve Banks’ investments, however, have enabled them to significantly reduce their transportation costs and paper check-processing infrastructure. These cost reductions have been critical to the Reserve Banks’ ability to recover all of their actual and imputed costs of providing check services from 2005 through 2007 and earn a net income of $326 million. As the Reserve Banks consolidate their check operations into fewer check-processing regions, many checks that were previously classified as nonlocal checks subject to a five-day maximum permissible hold are now classified as local checks subject to a maximum two-day hold period. It is likely that within the next several years, all checks will be classified as local, subject to the shorter permissible hold period. Again, we appreciate the opportunity to review and comment on the GAO’s report and the efforts and professionalism of the GAO’s team in conducting this study. The following individuals made key contributions to this report: Debra R. Johnson, Assistant Director; Joanna Chan; Philip Curtin; Nancy Eibeck; Terence Lam; James McDermott; Carl Ramirez; Barbara Roesmann; and Paul Thompson.
Although check volume has declined, checks still represent a significant volume of payments that need to be processed, cleared, and settled. The Check Clearing for the 21st Century Act of 2003 (Check 21) was intended to make check collection more efficient and less costly by facilitating wider use of electronic check processing. It authorized a new legal instrument--the substitute check--a paper copy of an image of the front and back of the original check. Check 21 facilitated electronic check processing by allowing banks to use electronic imaging technology for collection and create substitute checks from those images for delivery to banks that do not accept checks electronically. Check 21 mandated that GAO evaluate the implementation and administration of the act. The report objectives are to (1) determine the gains in economic efficiency from check truncation and evaluate the benefits and costs to the Federal Reserve System (Federal Reserve) and financial institutions; (2) assess consumer acceptance of the check truncation process resulting from Check 21; and (3) evaluate the benefits and costs to bank consumers from check truncation. GAO analyzed costs for the check operations of the Federal Reserve and a group of banks, interviewed consumers about their acceptance of electronic check processing and its costs and benefits, and analyzed survey data on bank fees. The Federal Reserve agreed with the overall findings of the report. Check truncation has not yet resulted in overall gains in economic efficiency for the Federal Reserve or for a sample of banks, although Federal Reserve and bank officials expect efficiencies in the future. GAO's analysis of the Federal Reserve's cost accounting data suggests that its costs for check clearing may have increased since Check 21, which may reflect that the Federal Reserve must still process paper checks while it invests in equipment and software for electronic processing and incurs costs associated with closing a number of check offices. However, GAO found that the Federal Reserve's work hours and transportation costs associated with check services declined from the fourth quarter of 2001 through the fourth quarter of 2007. Several of the 10 largest U.S. banks reported to GAO that maintenance of both paper and image-based check processing systems prevented them from achieving overall lower costs, although they had reduced transportation and labor costs since Check 21 was enacted. Check imaging and the use of substitute checks appear to have had a neutral or minimal effect on bank fraud losses. Most bank consumers seem to have accepted changes to their checking accounts from check truncation. In interviews with bank consumers, the majority of them accepted not receiving their canceled checks and being able to access information about their checking account activity online. Several reported that they did not need the "extra paper" from canceled checks and that image statements and online review were more secure than receiving canceled checks. Eleven percent of the 108 consumers still preferred to receive canceled checks. Most consumers reported that they were not significantly concerned about their ability to demonstrate proof of payment using a substitute check or check image rather than a canceled check, and few reported that they suffered errors from the check truncation process. Also, GAO found that the federal banking regulators reported few consumer complaints relating to Check 21. 
To the extent that banks have employed check truncation, bank consumers have realized benefits and costs relating to faster processing and access to account information. GAO found that some banks have extended the hours for accepting deposits for credit on the same business day, which can result in faster availability of deposited funds for consumers. Based on consumer interviews, consumers have benefited from receiving simpler imaged account statements and immediate access to information about check payments. Check 21's expedited recredit (prompt investigation of claims that substitute checks were improperly charged to accounts and recrediting of the amount in question) also is considered a consumer benefit. However, based on our consumer and bank interviews, it appears that a small number of consumers have filed expedited recredit claims. Based on analysis of survey data on bank fees, GAO found some consumers may incur fees related to receiving canceled checks and images. Since 2004, fees for canceled checks appear to have increased, while fees for images appear to have remained relatively flat.
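As a rough cross-check of the fee trends described above, the averages quoted from the Informa data in the body of the report imply the following (our arithmetic, not figures stated in the report):

\frac{\$3.11}{\$1.42} \approx 2.19 \quad \text{(check enclosure fee, more than doubled from 2001 to 2006)}

\frac{\$0.49}{\$0.40} \approx 1.23 \quad \text{(check imaging fee, up roughly \$0.09 over the same period)}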
You are an expert at summarizing long articles. Proceed to summarize the following text: The safe travel of U.S. airline passengers is a joint responsibility of FAA and the airlines in accordance with the Federal Aviation Act of 1958, as amended, and the Department of Transportation Act, as amended. To carry out its responsibilities under these acts, FAA supports research and development; certifies that new technologies and procedures are safe; undertakes rule-makings, which when finalized form the basis of federal aviation regulations; issues other guidance, such as Advisory Circulars; and oversees the industry’s compliance with standards that aircraft manufacturers and airlines must meet to build and operate commercial aircraft. Aircraft manufacturers are responsible for designing aircraft that meet FAA’s safety standards, and air carriers are responsible for operating and maintaining their aircraft in accordance with the standards for safety and maintenance established in FAA’s regulations. FAA, in turn, certifies aircraft designs and monitors the industry’s compliance with the regulations. FAA’s general process for issuing a regulation, or rule, includes several steps. When the regulation would require the implementation of a technology or operation, FAA first certifies that the technology or operation is safe. Then, FAA publishes a notice of proposed rule-making in the Federal Register, which sets forth the terms of the rule and establishes a period for the public to comment on it. Next, FAA reviews the comments by incorporating changes into the rule that it believes are warranted, and, in some instances, it repeats these steps one or more times. Finally, FAA publishes a final rule in the Federal Register. The final rule includes the date when it will go into effect and a time line for compliance. Within FAA, the Aircraft Certification Service is responsible for certifying that technologies are safe, including improvements to cabin occupant safety and health, generally through the issuance of new regulations, a finding certifying an equivalent level of safety, or a special condition when no rule covers the new technology. The Certification Service is also responsible for taking enforcement action to ensure the continued safety of aircraft by prescribing standards for aircraft manufacturers governing the design, production, and airworthiness of aeronautical products, such as cabin interiors. The Flight Standards Service is primarily responsible for certifying an airline’s operations (assessing the airline’s ability to carry out its operations and maintain the airworthiness of the aircraft) and for monitoring the operations and maintenance of the airline’s fleet. FAA conducts research on cabin occupant safety and health issues in two research facilities, the Mike Monroney Aeronautical Center/Civil Aerospace Medical Institute in Oklahoma City, Oklahoma, and the William J. Hughes Technical Center in Atlantic City, New Jersey. The institute focuses on the impact of flight operations on human health, while the technical center focuses on improvements in aircraft design, operation, and maintenance and inspection to prevent accidents and improve survivability. For the institute or the technical center to conduct research on a project, an internal FAA requester must sponsor the project. For example, FAA’s Office of Regulation and Certification sponsors much of the two facilities’ work in support of FAA’s rule-making activities. 
FAA also cooperates on cabin safety research with the National Aeronautics and Space Administration (NASA), academic institutions, and private research organizations. Until recently, NASA conducted research on airplane crashworthiness at its Langley Research Center in Hampton, Virginia. However, because of internal budget reallocations and a decision to devote more of its funds to aviation security, NASA terminated the Langley Center’s research on the crashworthiness of commercial aircraft in 2002. NASA continues to conduct fire-related research on cabin safety issues at its Glenn Research Center in Cleveland, Ohio. NTSB has the authority to investigate civil aviation accidents and collects data on the causes of injuries and death for the victims of commercial airliner accidents. According to NTSB, the majority of fatalities in commercial airliner accidents are attributable to crash impact forces and the effects of fire and smoke. Specifically, 306 (66 percent) of the 465 fatalities in partially survivable U.S. aviation accidents from 1983 through 2000 died from impact forces, 131 (28 percent) died from fire and smoke, and 28 (6 percent) died from other causes. Surviving an airplane crash depends on a number of factors. The space surrounding a passenger must remain large enough to prevent the passenger from being crushed. The force of impact must also be reduced to levels that the passenger can withstand, either by spreading the impact over a larger part of the body or by increasing the duration of the impact through an energy-absorbing seat or fuselage. The passenger must be restrained in a seat to avoid striking the interior of the airplane, and the seat must not become detached from the floor. Objects within the airplane, such as debris, overhead luggage bins, luggage, and galley equipment, must not strike the passenger. A fire in the cabin must be prevented, or, if one does start, it must burn slowly enough and produce low enough levels of toxic gases to allow the passenger to escape from the airplane. If there is a fire, the passenger must not have sustained injuries that prevent him or her from escaping quickly. Finally, if the passenger escapes serious injury from impact and fire, he or she must have access to exit doors and slides or other means of evacuation. Over the past several decades, FAA has taken a number of regulatory actions designed to improve the safety and health of airline passengers and flight attendants by (1) minimizing injuries from the impact of a crash, (2) preventing fire or mitigating its effects, (3) improving the chances and speed of evacuation, or (4) improving the safety and health of cabin occupants. (See app. III for more information on the regulatory actions FAA has taken to improve cabin occupant safety and health.) Specifically, we identified 18 completed regulatory actions that FAA has taken since 1984. In addition to these past actions, FAA and others in the aviation community are pursuing advancements in these four areas to improve cabin occupant safety and health in the future. We identified and reviewed 28 such advancements—5 to reduce the impact of a crash on occupants, 8 to prevent or mitigate fire and its effects, 10 to facilitate evacuation from aircraft, and 5 to address general cabin occupant safety and health issues. Minimizing Injuries from the Impact of a Crash: The primary cause of injury and death for cabin occupants in an airliner accident is the impact of the crash itself. 
We identified two key regulatory actions that FAA has taken to better protect passengers from impact forces. For example, in 1988, FAA required stronger passenger seats for newly manufactured commercial airplanes to improve protection in survivable crashes. These new seats are capable, for example, of withstanding an impact force that is approximately 16 times a passenger’s body weight (16g), rather than 9 times (9g), and must be tested dynamically (in multiple directions to simulate crash conditions), rather than statically (e.g., drop testing to assess the damage from the force of the weight alone without motion). In addition, in 1992, FAA issued a requirement for corrective action (airworthiness directive) for designs found not to meet the existing rules for overhead storage bins on certain Boeing aircraft, to improve their crashworthiness after bin failures were observed in the 1989 crash of an airliner in Kegworth, England, and a 1991 crash near Stockholm, Sweden. We also identified five key advancements that are being pursued to provide cabin occupants with greater impact protection in the future. These advancements are either under development or currently available. Examples include the following: Lap seat belts with inflatable air bags: Lap seat belts that contain inflatable air bags have been developed by private companies and are currently available to provide passengers with added protection during a crash. About 1,000 of these lap seat belts have been installed on commercial airplanes, primarily in the seats facing wall dividers (bulkheads) to prevent passengers from sustaining head injuries during a crash. (See fig. 1.) Improved seating systems: Seat safety depends on several interrelated systems operating properly, and, therefore, an airline seat is most accurately discussed as a system. New seating system designs are being developed by manufacturers to incorporate new safety and aesthetic designs as well as meet FAA’s 16g seat regulations to better protect passengers from impact forces. These seating systems would help to ensure that the seats themselves perform as expected (i.e., they stay attached to the floor tracks); the space between the seats remains adequate in a crash; and the equipment in the seating area, such as phones and video screens, does not increase the impact hazard. Child safety seats: Child safety seats could provide small children with additional protection in the event of an airliner crash. NTSB and others have recommended their use, and FAA has been involved in this issue for at least 15 years. While it has used its rule-making process to consider requiring their use, FAA decided not to require child safety restraints because its analysis found that if passengers were required to pay full fare for children under the age of 2, some parents would choose to travel by automobile and, statistically, the chances would increase that both the children and the adults would be killed. FAA is continuing to consider a child safety seat requirement. Appendix IV contains additional information on the impact advancements we have identified. Fire prevention and mitigation efforts have given passengers additional time to evacuate an airliner following a crash or cabin fire. FAA has taken seven key regulatory actions to improve fire detection, eliminate potential fire hazards, prevent the spread of fires, and better extinguish them. 
For example, to help prevent the spread of fire and give passengers more time to escape, FAA upgraded fire safety standards to require that seat cushions have fire-blocking layers, which resulted in airlines retrofitting 650,000 seats over a 3-year period. The agency also set new low heat/smoke standards for materials used for large interior surfaces (e.g., sidewalls, ceilings, and overhead bins), which FAA officials told us resulted in a significant improvement in postcrash fire survivability. FAA also required smoke detectors to be placed in lavatories and automatic fire extinguishers in lavatory waste receptacles in 1986 and 1987, respectively. In addition, the agency required airlines to retrofit their fleets with fire detection and suppression systems in cargo compartments, which according to FAA, applied to over 3,700 aircraft at a cost to airlines of $300 million. To better extinguish fires when they do start, FAA also required, in 1985, that commercial airliners carry two Halon fire extinguishers in addition to other required extinguishers because of Halon’s superior fire suppression capabilities. We also identified 8 key advancements that are currently available and awaiting implementation or are under development to provide additional fire protection for cabin occupants in the future. Examples include the following: Reduced flammability of insulation materials: To eliminate a potential fire hazard, in May 2000, FAA required that air carriers replace insulation blankets covered with a type of insulation known as metalized Mylar® on specific aircraft by 2005, after it was found that the material had ignited and contributed to the crash of Swiss Air Flight 111. Over 700 aircraft were affected by this requirement. In addition, FAA issued a rule in July 2003 requiring that large commercial airplanes manufactured after September 2, 2005, be equipped with thermal acoustic insulation designed to an upgraded fire test standard that will reduce the incidence and intensity of in-flight fires. In addition, after September 2, 2007, newly manufactured aircraft must be equipped with thermal acoustic materials designed to meet a new standard for burn- through resistance, providing passengers more time to escape during a postcrash fire. Reduced fuel tank flammability: Flammable vapors in aircraft fuel tanks can ignite. However, currently available technology can greatly reduce this hazard by “blanketing” the fuel tank with nonexplosive nitrogen-enriched air to suppress (“inert”) the potential for explosion of the tank. The U.S. military has used this technology on selected aircraft for 20 years, but U.S. commercial airlines have not adopted the technology because of its cost and weight. FAA officials told us that the military’s technology was also unreliable and designed to meet military rather than civilian airplane design requirements. FAA fire safety experts have developed a lighter-weight inerting system for center fuel tanks, which is simpler than the military system and potentially more reliable. Reliability of this technology is a major concern for the aviation industry. According to FAA officials, Boeing and Airbus began flight testing this technology in July 2003 and August 2003, respectively. In addition, the Air Transport Association (ATA) noted that inerting is only one prospective component of an ongoing major program for fuel tank safety, and that it has yet to be justified as feasible and cost-effective. 
Sensor technology: Sensors are currently being developed to better detect overheated or burning materials. According to FAA and the National Institute of Standards and Technology, many current smoke and fire detectors are not reliable. For example, a recent FAA study reported at least one false alarm per week in cargo compartment fire detection systems. The new detectors are being developed by Airbus and others in private industry to reduce the number of false alarms. In addition, FAA is developing standards that would be used to approve new, reduced false alarm sensors. NASA is also developing new sensors and detectors. Water mist for extinguishing fires: Technology has been under development for over two decades to dispense water mist during a fire to protect passengers from heat and smoke and prevent the spread of fire in the cabin. The most significant development effort has been made by a European public-private consortium, FIREDETEX, with over 5 million euros of European Community funding and a total project cost of over 10 million euros (over 10 million U.S. dollars). The development of this system was prompted, in part, by the need to replace Halon, when it was determined that this main firefighting agent used in fire extinguishers aboard commercial airliners depletes ozone in the atmosphere. Appendix V contains additional information on advancements that address fire prevention and mitigation. Enabling passengers to evacuate more quickly during an emergency has saved lives. Over the past two decades, FAA has completed regulatory action on the following six key requirements to help speed evacuations: Improve access to certain emergency exits, such as those generally smaller exits above the wing, by providing an unobstructed passageway to the exit. Install public address systems that are independently powered and can be used for at least 10 minutes. Help to ensure that passengers in the seats next to emergency exits are physically and mentally able to operate the exit doors and assist other passengers in emergency evacuations. Limit the distance between emergency exits to 60 feet. Install emergency lighting systems that visually identify the emergency escape path and each exit. Install fire-resistant emergency evacuation slides. We also identified 10 advancements that are either currently available but awaiting implementation or require additional research that could lead to improved aircraft evacuation, including the following: Improved passenger safety briefings: Information is available to the airlines on how to develop more appealing safety briefings and safety briefing cards so that passengers would be more likely to pay attention to the briefings and be better prepared to evacuate successfully during an emergency. Research has found that passengers often ignore the oral briefings and do not familiarize themselves with the safety briefing cards. FAA has requested that air carriers explore different ways to present safety information to passengers, but FAA regulates only the content of briefings. The presentation style of safety briefings is left up to air carriers. Over-wing exit doors: Exit doors located over the wings of some commercial airliners have been redesigned to “swing out” and away from the aircraft so that cabin occupants can exit more easily during an emergency. Currently, the over-wing exit doors on most U.S. commercial airliners are “self help” doors and must be lifted and stowed by a passenger, which can impede evacuation. (See fig. 2.) 
The redesigned doors are now used on new-generation B-737 aircraft operated by one U.S. and most European airlines. FAA does not currently require the use of over-wing exit doors that swing out because the exit doors that are removed manually meet the agency’s safety standards. However, FAA is working with the Europeans to develop common requirements for the use of this type of exit door. Audio attraction signals: The United Kingdom’s Civil Aviation Authority and the manufacturer are testing audio attraction signals to determine their usefulness to passengers in locating exit doors during an evacuation. These signals would be mounted near exits and activated during an emergency. The signals would help the passengers find the nearest exit even if lighting and exit signs were obscured by smoke. Appendix VI contains additional information on advancements to improve aircraft emergency evacuations. Passengers and flight attendants can face a range of safety and health effects while aboard commercial airliners. We identified three key actions taken by FAA to help maintain the safety and health of passengers and the cabin crew during normal flight operations. For example, to prevent passengers from being injured during turbulent conditions, FAA initiated the Turbulence Happens campaign in 2000 to increase public awareness of the importance of wearing seatbelts. The agency has advised the airlines to warn passengers to fasten their seatbelts when turbulence is expected, and the airlines generally advise or require passengers to keep their seat belts fastened while seated to help avoid injuries from unexpected turbulence. FAA has also required the airlines to equip their fleets with emergency medical kits since 1986. In addition, Congress banned smoking on most domestic flights in 1990. We also identified five advancements that are either currently available but awaiting implementation or require additional research that could lead to an improvement in the health of passengers and flight attendants in the future. Automatic external defibrillators: Automatic external defibrillators are currently available for use on some commercial airliners if a passenger or crew member requires resuscitation. In 1998, the Congress directed FAA to assess the need for the defibrillators on commercial airliners. On the basis of its findings, the agency issued a rule requiring that U.S. airlines equip their aircraft with automatic external defibrillators by 2004. According to ATA, most airlines have already done so. Enhanced emergency medical kits: In 1998, the Congress directed FAA to collect data for 1 year on the types of in-flight medical emergencies that occurred to determine if existing medical kits should be upgraded. On the basis of the data collected, FAA issued a rule that required the contents of existing emergency medical kits to be expanded to deal with a broader range of emergencies. U.S. commercial airliners are required to carry these enhanced emergency medical kits by 2004. Most U.S. airlines have already completed this upgrade, according to ATA. Advance warning of turbulence: New airborne weather radar and other technologies are currently being developed and evaluated to improve the detection of turbulence and increase the time available to cabin occupants to avert potential injuries. FAA’s July 2003 draft strategic plan established a performance target of reducing injuries to cabin occupants caused by turbulence. 
To achieve this objective, FAA plans to continue evaluating new airborne weather radars and other technologies that broadly address weather issues, including turbulence. In addition, the draft strategic plan set a performance target of reducing serious injuries caused by turbulence by 33 percent by fiscal year 2008--using the average for fiscal years 2000 through 2002 of 15 injuries per year as the baseline and reducing this average to no more than 10 per year. Improve awareness of radiation exposure: Flight attendants and passengers who fly frequently can be exposed to higher levels of radiation on a cumulative basis than the general public. High levels of radiation have been linked to an increased risk of cancer and potential harm to fetuses. To help passengers and crew members estimate their past and future radiation exposure levels, FAA developed a computer model, which is publicly available on its Web site http://www.jag.cami.jccbi.gov/cariprofile.asp. However, the extent to which flight attendants and frequent flyers are aware of cosmic radiation’s risks and make use of FAA’s computer model is unclear. Agency officials told us that the agency plans to install a counter capability on its Civil Aerospace Medical Institute Web site to track the number of visits to its aircrew and passenger health and safety Web pages. FAA also plans to issue an Advisory Circular by early next year, which incorporates the findings of a just completed FAA report, “What Aircrews Should Know About Their Occupational Exposure to Ionizing Radiation.” This Advisory Circular will include recommended actions for aircrews and information on solar flare event notification of aircrews. In contrast, airlines in Europe abide by more stringent requirements for helping to ensure that cabin and flight crew members do not receive excessive doses of radiation from performing their flight duties during a given year. For example, in May 1996, the European Union issued a directive for workers, including air carrier crew members (cabin and flight crews) and the general public, on basic safety and health protections against dangers arising from ionizing radiation. This directive set dose limits and required air carriers to (1) assess and monitor the exposure of all crew members to avoid exceeding exposure limits, (2) work with those individuals at risk of high exposure levels to adjust their work or flight schedules to reduce those levels, and (3) inform crew members of the health risks that their work involves from exposure to radiation. It also required airlines to work with female crew members, when they announce a pregnancy, to avoid exposing the fetus to harmful levels of radiation. This directive was binding for all European Union member states and became effective in May 2000. Improved awareness of potential health effects related to flying: Air travel may exacerbate some medical conditions. Of particular concern is a condition known as Deep Vein Thrombosis (DVT), or travelers’ thrombosis, in which blood clots can develop in the deep veins of the legs from extended periods of inactivity. In a small percentage of cases, the clots can break free and travel to the lungs, with potentially fatal results. Although steps can be taken to avoid or mitigate some travel-related health effects, no formal awareness campaigns have been initiated by FAA to help ensure that this information reaches physicians and the traveling public. 
The Aerospace Medical Association's Web site http://www.asma.org/publication.html includes guidance for physicians to use in advising passengers with preexisting medical conditions on the potential risks of flying, as well as information for passengers with such conditions to use in assessing their own potential risks. See appendix VII for additional information on health-related advances.

The advancements being pursued to improve the safety and health of cabin occupants vary in their readiness for deployment. For example, of the 28 advancements we reviewed, 14 are mature and currently available. Two of these, preparation for in-flight medical emergencies and the use of new insulation, were addressed through regulations. These regulations require airlines to install additional emergency medical equipment (automatic external defibrillators and enhanced emergency medical kits) by 2004, replace flammable insulation covering (metalized Mylar®) on specific aircraft by 2005, and manufacture new large commercial airliners that use a new type of insulation meeting more stringent flammability test standards after September 2, 2005. Another advancement, retrofitting the existing fleet with stronger 16g seats, is currently in the rule-making process. The remaining 11 advancements are available, but are not required by FAA. For example, some airlines have elected to use inflatable lap seat belts and exit doors over the wings that swing out instead of requiring manual removal, and others are using photo-luminescent floor lighting in lieu of or in combination with traditional electrical lighting. Some of these advancements are commercially available to the flying public, including smoke hoods and child safety seats certified for use on commercial airliners. The remaining 14 advancements are in various stages of research, engineering, and development in the United States, Canada, or Europe.

Several factors have slowed the implementation of airliner cabin occupant safety and health advancements in the United States. When advancements are available for commercial use but not yet implemented or installed, their use may be slowed by the time it takes (1) for FAA to complete the rule-making process, which may be required for an advancement to be approved for use but may take many years; (2) for U.S. and foreign aviation authorities to resolve differences between their respective cabin occupant safety and health requirements; and (3) for the airlines to adopt or install advancements after FAA has approved their use, including the time required to schedule an advancement's installation to coincide with major maintenance cycles and thereby minimize the costs associated with taking an airplane out of service. When advancements are not ready for commercial use because they need further research to develop their technologies or reduce their costs, their implementation may be slowed by FAA's multistep process for identifying advancements and allocating its limited resources to research on potential advancements. FAA's multistep process is hampered by a lack of autopsy and survivor information from past accidents and by not having cost and effectiveness data as part of the decision process. As a result, FAA may not be identifying and funding the most critical or cost-effective research projects. Once an advancement has been developed, FAA may require its use, but significant time may be required before the rule-making process is complete.
One factor that contributes to the length of this process is a requirement for cost-benefit analyses to be completed. Time is particularly important when safety is at stake or when the pace of technological development exceeds the pace of rule-making. As a result, some rules may need to be developed quickly to address safety issues or to guide the use of new technologies. However, rules must also be carefully considered before being finalized because they can have a significant impact on individuals, industries, the economy, and the environment. External pressures, such as political pressure generated by highly publicized accidents, recommendations by NTSB, and congressional mandates, as well as internal pressures, such as changes in management's emphasis, continue to add to and shift the agency's priorities. The rule-making process can be long and complicated and has delayed the implementation of some technological and operational safety improvements, as we reported in July 2001. In that report, we reviewed 76 significant rules in FAA's workload for fiscal years 1995 through 2000; 10 of the 76 were directly related to improving the safety and health of cabin occupants. Table 3 details the status or disposition of these 10 rules. The shortest rule-making action took 1 year, 11 months (for child restraint systems), and the longest took 10 years, 1 month (for the type and number of emergency exits). However, one proposed rule was still pending after 15 years, while three others were terminated or withdrawn after 9 years or more. Of the 76 significant rules we reviewed, FAA completed the rule-making process for 29 of them between fiscal year 1995 and fiscal year 2000, taking a median time of about 2 ½ years to proceed from formal initiation of the rule-making process through publication of the final rule; for 6 of these 29 rules, however, FAA took 10 years or more to complete that process.

FAA and its international counterparts, such as the European Joint Aviation Authorities (JAA), impose a number of requirements to improve safety. At times, these requirements differ, and efforts are needed to reach agreement on procedures and equipment across country borders. In the absence of such agreements, the airlines generally must adopt measures to implement whichever requirement is more stringent. In 1992, FAA and JAA began harmonizing their requirements for (1) the design, manufacture, operation, and maintenance of civil aircraft and related product parts; (2) noise and emissions from aircraft; and (3) flight crew licensing. Harmonizing the U.S. Federal Aviation Regulations with the European Joint Aviation Regulations is viewed by FAA as its most comprehensive long-term rule-making effort and is considered critical to ensuring common safety standards and minimizing the economic burden on the aviation industry that can result from redundant inspection, evaluation, and testing requirements. According to both FAA and JAA, the process they have used to date to harmonize their requirements for commercial aircraft has not effectively prioritized their joint recommendations, and it has led to many recommendations going unpublished for years. This includes a backlog of over 130 new rule-making efforts.
The slowness of this process led the United States and Europe to develop a new rule-making process to prioritize safety initiatives, focus the aviation industry's and their own limited resources, and establish limitations on rule-making capabilities. Accordingly, in March 2003, FAA and JAA developed a draft joint "priority" rule-making list; collected and considered industry input; and coordinated with FAA's, JAA's, and Transport Canada Civil Aviation's management. This effort has resulted in a rule-making list of 26 priority projects. In June 2003, at the 20th Annual JAA/FAA International Conference, FAA, JAA, and Transport Canada Civil Aviation discussed the need to, among other things, support the joint priority rule-making list and to establish a cycle for updating it, to keep it current and to provide for "pop-up," or unexpected, rule-making needs. FAA and JAA discussed the need to prioritize rule-making efforts to efficiently achieve aviation safety goals, to work from a limited agreed-upon list for future rule-making activities, and to have FAA and the European Aviation Safety Agency, which is gradually replacing JAA, continue with this approach.

In the area of cabin occupant safety and health, some requirements have been harmonized, while others have not. For example, in 1996, JAA changed its rule on floor lighting to allow reflective, glow-in-the-dark material to be used rather than mandating the electrically powered lighting that FAA required. The agency subsequently permitted the use of this material for floor lighting. In addition, FAA finalized a rule in July 2003 to require a new type of insulation designed to delay fire burning through the fuselage into the cabin during an accident. JAA favors a performance-based standard that would specify a minimum delay in burn-through time, but allow the use of different technologies to achieve the standard. FAA officials said that the agency would consider other technologies besides insulation to achieve burn-through protection but that it would be the responsibility of the applicant to demonstrate that the technology provided performance equivalent to that stipulated in the insulation rule. JAA officials told us that these are examples of the types of issues that must be resolved when they work to harmonize their requirements with FAA's. These officials added that this process is typically very time consuming and has allowed for harmonizing about five rules per year.

After an advancement has been developed, shown to be beneficial, certified, and required by FAA, the airlines or manufacturers need time to implement or install the advancement. FAA generally gives the airlines or manufacturers a window of time to comply with its rules. For example, FAA gave air carriers 5 years to replace metalized Mylar® insulation on specific aircraft with a less flammable insulation type, and FAA's proposed rule-making on 16g seats would give the airlines 14 years to install these seats in all existing commercial airliners. ATA officials told us that this would require replacement of 496,000 seats. The airline industry's recent financial hardships may also delay the adoption of advancements. Recently, two major U.S. carriers filed for bankruptcy, and events such as the war in Iraq have reduced passenger demand and airline revenues below levels already diminished by the events of September 11, 2001, and the economic downturn. Current U.S. demand for air travel remains below fiscal year 2000 levels.
As a result, airlines may ask for exemptions from some requirements or extensions of time to install advancements. While implementing new safety and health advancements can be costly for the airlines, making these changes could improve the public's confidence in the overall safety of air travel. In addition, some aviation experts in Europe told us that health-related cabin improvements, particularly improvements in air quality, are of high interest to Europeans and would likely be used in the near future by some European air carriers to set themselves apart from their competitors.

For fiscal year 2003, FAA and NASA allocated about $16.2 million to cabin occupant safety and health research. FAA's share of this research represented $13.1 million, or about 9 percent of the agency's Research, Engineering, and Development budget of $148 million for fiscal year 2003. Given the level of funding allocated to this research effort, it is important to ensure that the best research projects are selected. However, FAA's processes for setting research priorities and selecting projects for further research are hampered by data limitations. In particular, FAA lacks certain autopsy and survivor information from aircraft crashes that could help it identify and target research to the most important causes of death and injury in an airliner crash. In addition, for the proposed research projects, the agency does not (1) develop comparable cost data for potential advancements or (2) assess their potential effectiveness in minimizing injuries or saving lives. Such cost and effectiveness data would provide a valuable supplement to FAA's current process for setting research priorities and selecting projects for funding.

Both FAA and NASA conduct research on aircraft cabin occupant safety and health issues. The Civil Aeromedical Institute (CAMI) and the Hughes Technical Center are FAA's primary facilities for conducting research in this area. In addition, two facilities at NASA, the Langley and Glenn research centers, have also conducted research in this area. As figure 3 shows, federal funding for this research since fiscal year 2000 reached a high in fiscal year 2002, at about $17 million, and fell to about $16.2 million in fiscal year 2003. The administration's proposal for fiscal year 2004 calls for a further reduction to $15.9 million. This funding covers the expenses of researchers at these facilities and of the contracts they may have with others to conduct research. In addition, NASA recently decided to end its crash research at Langley and to close a drop test facility that it operates in Hampton, Virginia. In fiscal year 2003, FAA and NASA both supported research projects in areas including aircraft impact, fire, evacuation, and health. As figure 4 shows, most of the funding for cabin occupant safety and health research has gone to fire-related projects.

To establish research priorities and select projects to fund, FAA uses a multistep process. First, within each budget cycle, a number of Technical Community Representative Group subcommittees from within FAA generate research ideas. Various subcommittees have responsibility for identifying potential safety and health projects, including subcommittees on crash dynamics, fire safety, structural integrity, passenger evacuation, aeromedical, and fuel safety. Each subcommittee proposes research projects to review committees, which prioritize the projects.
The projects are considered and weighted according to the extent to which they address (1) accident prevention, (2) accident survival, (3) external requests for research, (4) internal requests for research, and (5) technology research needs. In addition, the cost of the proposed research is considered before arriving at a final list of projects. The prioritized list is then considered by the Program Planning Team, which reviews the projects from a policy perspective. Although the primary causes of death and injury in commercial airliner crashes are known to be impact, fire, and impediments to evacuation, FAA does not have as detailed an understanding as it would like of the critical factors affecting survival in a crash. According to FAA officials, obtaining a more detailed understanding of these factors would assist them in setting research priorities and in evaluating the relative importance of competing research proposals. To obtain a more detailed understanding of the critical factors affecting survival, FAA believes that it needs additional information from passenger autopsies and from passengers who survived. With this information, FAA could then regulate safety more effectively, airplane and equipment designers could build safer aircraft, including cabin interiors, and more passengers could survive future accidents as equipment became safer. While FAA has independent authority to investigate commercial airliner crashes, NTSB generally controls access to the accident investigation site in pursuit of its primary mission of determining the cause of the crash. When NTSB concludes its investigation, it returns the airplane to its owner and keeps the records of the investigation, including the autopsy reports and the information from survivors that NTSB obtains from medical authorities and through interviews or questionnaires. NTSB makes summary information on the crashes publicly available on its Web site, but according to the FAA researchers, this information is not detailed enough for their needs. For example, the researchers would like to develop a complete autopsy database that would allow them to look for common trends in accidents, among other things. In addition, the researchers would like to know where survivors sat on the airplane, what routes they took to exit, what problems they encountered, and what injuries they sustained. This information would help the researchers analyze factors that might have an impact on survival. According to the NTSB’s Chief of the Survival Factors Division in the Office of Aviation Safety, NTSB provides information on the causes of death and a description of injuries in the information they make publicly available. In addition, although medical records and autopsy reports are not made public, interviews with and questionnaires from survivors are available from the public docket. NTSB’s Medical Officer was unaware of any formal requests from the FAA for the NTSB to provide them with copies of this type of information, although the FAA had previously been invited to review such information at NTSB headquarters. He added that the Board would likely consider a formal request from FAA for copies of autopsy reports and certain survivor records, but that it was also likely that the FAA would have to assure NTSB that the information would be appropriately safeguarded. According to FAA officials, close cooperation between the NTSB and the FAA is needed for continued progress in aviation safety. 
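The weighting of candidate research projects described earlier in this discussion lends itself to a simple scoring illustration. The sketch below is purely hypothetical: the weights, the 0-10 criterion scores, the project names, and the costs are invented placeholders, not FAA data, and the report does not specify how FAA actually combines the five criteria it names.

```python
# Illustrative sketch of a weighted-scoring approach to ranking research
# proposals against the five criteria named in the report. All weights,
# scores, project names, and costs below are hypothetical placeholders.

# Hypothetical weights for the five criteria (they sum to 1.0).
WEIGHTS = {
    "accident_prevention": 0.30,
    "accident_survival":   0.30,
    "external_requests":   0.15,
    "internal_requests":   0.10,
    "technology_needs":    0.15,
}

# Candidate projects scored 0-10 on each criterion, with a rough proposed
# cost in millions of dollars (all values invented for illustration).
proposals = [
    {"name": "Project A",
     "scores": {"accident_prevention": 8, "accident_survival": 6,
                "external_requests": 4, "internal_requests": 5,
                "technology_needs": 7},
     "cost_millions": 2.0},
    {"name": "Project B",
     "scores": {"accident_prevention": 5, "accident_survival": 9,
                "external_requests": 7, "internal_requests": 3,
                "technology_needs": 4},
     "cost_millions": 3.5},
]

def weighted_score(proposal):
    """Combine the criterion scores using the hypothetical weights."""
    return sum(WEIGHTS[c] * proposal["scores"][c] for c in WEIGHTS)

# Rank by weighted score; treat lower cost as a secondary consideration,
# reflecting the report's statement that cost is considered before the
# final list of projects is set.
ranked = sorted(proposals, key=lambda p: (-weighted_score(p), p["cost_millions"]))
for p in ranked:
    print(f'{p["name"]}: score {weighted_score(p):.2f}, cost ${p["cost_millions"]}M')
```

A real prioritization would draw its scores from the subcommittees' assessments and treat cost in whatever manner FAA's review committees and Program Planning Team choose.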
Besides lacking detailed information on the causes of death and injury, FAA does not develop comparable cost data for implementing each advancement, nor does it assess the potential effectiveness of each advancement in reducing injuries and saving lives. Specifically, FAA does not conduct cost-benefit analyses as part of its multistep process for setting research priorities. Making cost estimates of competing advancements would allow direct comparisons across alternatives, which, when combined with comparable estimates of effectiveness, would provide valuable supplemental information to decision makers when setting research priorities. FAA considers its current process to be appropriate and sufficient. In commenting on a draft of this report, FAA noted that it is very difficult to develop realistic cost data for advancements during the earliest stages of research. The agency cautioned that if too much emphasis is placed on cost/benefit analyses, potentially valuable research may not be undertaken. Recognizing that it is less difficult to develop cost and effectiveness information as research progresses, we are recommending that FAA develop and use cost and effectiveness analyses to supplement its current process. At later stages in the development process, we found that this information can be developed fairly easily through cost and effectiveness analyses using currently available data. For example, we performed an analysis of the cost to implement inflatable lap seat belts using a cost analysis methodology we developed (see app. VIII). This analysis allowed us to estimate how much this advancement would cost per airplane and per passenger trip. Such cost analyses could be combined with similar analyses of effectiveness to identify the most cost-effective projects, based on their potential to minimize injuries and reduce fatalities. Potential sources of effectiveness data include FAA, academia, industry, and other aviation authorities.

Although FAA and the aviation community are pursuing a number of advancements to enhance commercial airliners' cabin occupant safety and health, several factors have slowed their implementation. For example, for advancements that are currently available but are not yet implemented or installed, progress is slowed by the length of time it takes for FAA to complete its rule-making process, for the U.S. and foreign countries to agree on the same requirements, and for the airlines to actually install the advancements after FAA has required them. In addition, FAA's multistep process for identifying potential cabin occupant safety and health research projects and allocating its limited research funding is hampered by the lack of autopsy and survivor information from airliner crashes and by the lack of cost and effectiveness analysis. Given the level of funding allocated to cabin occupant safety and health research, it is important for FAA to ensure that this funding is targeting the advancements that address the most critical needs and show the most promise for improving the safety and health of cabin occupants. However, because FAA lacks detailed autopsy and survivor information, it is hampered in its ability to identify the principal causes of death and survival in commercial airliner crashes. Without an agreement with the National Transportation Safety Board (NTSB) to receive detailed autopsy and survivor information, FAA lacks information that could be helpful in understanding the factors that contribute to surviving a crash.
Furthermore, because FAA does not develop comparable estimates of cost and effectiveness of competing research projects, it cannot ensure that it is funding those technologies with the most promise of saving lives and reducing injuries. Such cost and effectiveness data would provide a valuable supplement to FAA's current process for setting research priorities and selecting projects for funding. To facilitate FAA's development of comparable cost data across advancements, we developed a cost analysis methodology that could be combined with a similar analysis of effectiveness to identify the most cost-effective projects. Using comparable cost and effectiveness data across the range of advancements would position the agency to choose more effectively between competing advancements, taking into account estimates of the number of injuries and fatalities that each advancement might prevent for the dollars invested. In turn, FAA would have more assurance that the level of funding allocated to this effort maximizes the safety and health of the traveling public and the cabin crew members who serve them.

To provide FAA decision makers with additional data for use in setting priorities for research on cabin occupant safety and health and in selecting competing research projects for funding, we recommend that the Secretary of Transportation direct the FAA Administrator to (1) initiate discussions with the National Transportation Safety Board in an effort to obtain the autopsy and survivor information needed to more fully understand the factors affecting survival in a commercial airliner crash and (2) supplement its current process by developing and using comparable estimates of cost and effectiveness for each cabin occupant safety and health advancement under consideration for research funding.

Agency Comments and Our Evaluation: We provided copies of a draft of this report to the Department of Transportation for its review and comment. FAA generally agreed with the report's contents and its recommendations. The agency provided us with oral comments, primarily technical clarifications, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees; the Secretary of Transportation; the Administrator, FAA; and the Chairman, NTSB. We will also make copies available to others upon request. In addition, this report is available at no charge on GAO's Web site at http://www.gao.gov.

As requested by the Ranking Democratic Member, House Committee on Transportation and Infrastructure, we addressed the following questions: (1) What regulatory actions has the Federal Aviation Administration (FAA) taken, and what key advancements are available or being developed by FAA and others to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins? (2) What factors, if any, slow the implementation of advancements in cabin occupant safety and health? In addition, as requested, we identified some factors affecting efforts by Canada and Europe to improve cabin occupant safety and health. The scope of our report includes the cabins of large commercial aircraft (those that carry 30 or more passengers) operated by U.S.
domestic commercial airlines and addresses the safety and health of passengers and flight attendants from the time they board the airliner until they disembark under normal operational conditions or emergency situations. This report identifies cabin occupant safety and health advancements (technological or operational improvements) that could be implemented, primarily through FAA’s rule-making process. Such improvements include technological changes designed to increase the overall safety of commercial aviation as well as changes to enhance operational safety. The report does not include information on the flight decks of large commercial airliners or safety and health issues affecting flight deck crews (pilots and flight engineers), because they face some issues not faced by cabin occupants. It also does not address general aviation and corporate aircraft or aviation security issues, such as hijackings, sabotage, or terrorist activities. To identify regulatory actions that FAA has taken to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins, we interviewed and collected documentation from U.S. federal agency officials on major safety and health efforts completed by FAA. The information we obtained included key dates and efforts related to cabin occupant safety and health, such as rule-makings, airworthiness directives, and Advisory Circulars. To identify key advancements that are available or are being developed by FAA and others to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins, we consulted experts (1) to help ensure that we had included the advancements holding the most promise for improving safety and health; and (2) to help us structure an evaluation of selected advancements (i.e., confirm that we had included the critical benefits and drawbacks of the potential advancements) and develop a descriptive analysis for them, where appropriate, including their benefits, costs, technology readiness levels, and regulatory status. In addition, we interviewed and obtained documentation from federal agency officials and other aviation safety experts at the Federal Aviation Administration (including its headquarters in Washington, D.C.; Transport Airplane Directorate in Renton, Washington; William J. Hughes Technical Center in Atlantic City, New Jersey; and Mike Monroney Aeronautical Center/Civil Aerospace Medical Institute in Oklahoma City, Oklahoma); National Transportation Safety Board; National Aeronautics and Space Administration (NASA); Air Transport Association; Regional Airline Association; International Air Transport Association; Aerospace Industries Association; Aerospace Medical Association; Flight Safety Foundation, Association of Flight Attendants; Boeing Commercial Airplane Group; Airbus; Cranfield University, United Kingdom; University of Greenwich, United Kingdom; National Aerospace Laboratory, Netherlands; Joint Aviation Authorities, Netherlands; Civil Aviation, Netherlands; Civil Aviation Authority, United Kingdom; RGW Cherry and Associates; Air Accidents Investigations Branch, United Kingdom; Syndicat National du Personnel Navigant Commercial (French cabin crew union) and ITF Cabin Crew Committee, France; BEA (comparable to the U.S. NTSB), France; and the Direction Générale de l’Aviation Civile (DGAC), FAA’s French counterpart. 
To describe the status of key advancements that are available or under development, we used NASA's technology readiness levels (TRL). These levels form a system for ranking the maturity of particular technologies and are as follows:

TRL 1: Basic principles observed and reported
TRL 2: Technology concept and/or application formulated
TRL 3: Analytical and experimental critical function and/or characteristic proof of concept
TRL 4: Component validation in laboratory environment
TRL 5: Component and/or breadboard validation in relevant environment
TRL 6: System or subsystem model or prototype demonstrated in a relevant environment
TRL 7: System prototype demonstrated in a space environment
TRL 8: Actual system completed and "flight qualified" through test and demonstration
TRL 9: Actual system "flight proven" through successful mission operations

To determine what factors, if any, slow the implementation of advancements in cabin occupant safety and health, we reviewed the relevant literature and interviewed and analyzed documentation from the U.S. federal officials cited above for the 18 key regulatory actions FAA has taken since 1984 to improve the safety and health of cabin occupants. We used this same approach to assess the regulatory status of the 28 advancements we reviewed that are either currently available, but not yet implemented or installed, or require further research to demonstrate their effectiveness or lower their costs. In identifying 28 advancements, GAO is not suggesting that these are the only advancements being pursued; rather, these advancements have been recognized by aviation safety experts we contacted as offering promise for improving the safety and health of cabin occupants. To determine how long it generally takes for FAA to issue new rules, in addition to speaking with FAA officials, we relied on past GAO work and updated it, as necessary. In order to examine the effect of FAA and European efforts to harmonize their aviation safety requirements, we interviewed and analyzed documentation from aviation safety officials and other experts in the United States, Canada, and Europe. Furthermore, to examine the factors affecting airlines' ability to implement or install advancements after FAA requires them, we interviewed and analyzed documentation from aircraft manufacturers, ATA, and FAA officials. In addition, to determine what factors slow implementation, we examined FAA's processes for selecting research projects to improve cabin occupant safety and health. In examining whether FAA has sufficient data upon which to base its research priorities, we interviewed FAA and National Transportation Safety Board (NTSB) officials about autopsy and survivor information from commercial airliner accidents. We also examined the use of cost and effectiveness data in FAA's research selection process for cabin occupant safety and health projects. To facilitate FAA's development of such cost estimates, we developed a cost analysis methodology to illustrate how the agency could do this. Specifically, we developed a cost analysis for inflatable lap belts to show how data on key cost variables could be obtained from a variety of sources. We selected lap belts because they were being used in limited situations and appeared to offer some measure of improved safety. Information on installation price, annual maintenance and refurbishment costs, and added weight of these belts was obtained from belt manufacturers.
We obtained information from FAA and the Department of Transportation's (DOT) Bureau of Transportation Statistics on a number of cost variables, including historical jet fuel prices, the impact on jet fuel consumption of carrying additional weight, the average number of hours flown per year, the average number of seats per airplane, the number of airplanes in the U.S. fleet, and the number of passenger tickets issued per year. To account for variation in the values of these cost variables, we performed a Monte Carlo simulation. In this simulation, values were randomly drawn 10,000 times from probability distributions characterizing possible values for the number of seat belts per airplane, seat installation price, jet fuel price, number of passenger tickets, number of airplanes, and hours flown. This simulation resulted in forecasts of the life-cycle cost per airplane, the annualized cost per airplane, and the cost per ticket. There is uncertainty in estimating the number of lives potentially saved and their value because accidents occur infrequently and unpredictably. Such estimates could be higher or lower, depending on the number and severity of accidents during a given analysis period and the value placed on a human life.

To identify factors affecting efforts by Canada and Europe to improve cabin occupant safety and health, we interviewed and collected documentation from aviation safety experts in the United States, Canada, and Europe. We provided segments of a draft of this report to selected external experts to help ensure its accuracy and completeness. These included the Air Transport Association, National Transportation Safety Board, Boeing, Airbus, and aviation authorities in the United Kingdom, France, Canada, and the European Union. We incorporated their comments, as appropriate. The European Union did not provide comments. We conducted our review from January 2002 through September 2003 in accordance with generally accepted government auditing standards.
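The simulation itself is not reproduced in this report. As a rough, purely illustrative sketch of how such a Monte Carlo cost analysis could be structured, the Python below draws each cost variable from a placeholder distribution and summarizes the resulting forecasts; every distribution, range, and constant shown is a hypothetical assumption for illustration only, not a figure from GAO's or FAA's analysis.

```python
# Illustrative Monte Carlo sketch of an inflatable-lap-belt cost analysis.
# All distributions and constants are hypothetical placeholders, not data
# from this report, GAO's model, or FAA.
import random

N_DRAWS = 10_000        # number of simulation draws
SERVICE_YEARS = 15      # assumed analysis period for a belt installation

def one_draw():
    # Draw each cost variable from a placeholder triangular distribution
    # (low, high, mode). Real distributions would come from the sources
    # described above (belt manufacturers, FAA, and DOT statistics).
    belts_per_airplane = random.triangular(100, 200, 140)      # belts (seats) per airplane
    install_price = random.triangular(100, 300, 200)           # dollars per belt installed
    maintenance_per_belt = random.triangular(2, 10, 5)         # dollars per belt per year
    fuel_price = random.triangular(0.70, 1.30, 0.90)           # dollars per gallon of jet fuel
    added_fuel_per_hour = random.triangular(0.05, 0.25, 0.10)  # extra gallons/hour from belt weight
    hours_flown_per_year = random.triangular(2000, 3500, 2800) # block hours per airplane per year
    airplanes_in_fleet = random.triangular(4000, 6000, 5000)   # large airplanes in the U.S. fleet
    tickets_per_year = random.triangular(5e8, 7e8, 6e8)        # passenger tickets issued per year

    install_cost = belts_per_airplane * install_price
    annual_operating_cost = (belts_per_airplane * maintenance_per_belt
                             + added_fuel_per_hour * hours_flown_per_year * fuel_price)
    life_cycle_cost = install_cost + annual_operating_cost * SERVICE_YEARS
    annualized_cost = life_cycle_cost / SERVICE_YEARS
    cost_per_ticket = annualized_cost * airplanes_in_fleet / tickets_per_year
    return life_cycle_cost, annualized_cost, cost_per_ticket

draws = [one_draw() for _ in range(N_DRAWS)]
labels = ["life-cycle cost per airplane", "annualized cost per airplane", "cost per ticket"]
for i, label in enumerate(labels):
    values = sorted(d[i] for d in draws)
    mean = sum(values) / len(values)
    p05, p95 = values[int(0.05 * N_DRAWS)], values[int(0.95 * N_DRAWS)]
    print(f"{label}: mean ${mean:,.2f} (5th-95th percentile ${p05:,.2f} to ${p95:,.2f})")
```

In practice, the assumed distributions would be replaced with the manufacturer and DOT data described above, and the same structure could be paired with an effectiveness estimate (for example, expected injuries averted) to compare advancements on a cost-effectiveness basis.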
The United States, Canada, and members of the European Community are parties to the International Civil Aviation Organization (ICAO), established under the Chicago Convention of 1944, which sets minimum standards and recommended practices for civil aviation. In turn, individual nations implement aviation standards, including those for aviation safety. While ICAO's standards and practices are intended to keep aircraft, crews, and passengers safe, some also address environmental conditions in aircraft cabins that could affect the health of passengers and crews. For example, ICAO has standards for preventing the spread of disease and for spraying aircraft cabins with pesticides to remove disease-carrying insects.

In Canada, FAA's counterpart for aviation regulations and oversight is Transport Canada Civil Aviation, which sets standards and regulations for the safe manufacture, operation, and maintenance of aircraft in Canada. In addition, Transport Canada Civil Aviation administers, enforces, and promotes the Aviation Occupational Health and Safety Program to help ensure the safety and health of crewmembers on board aircraft. The department also sets the training and licensing standards for aviation professionals in Canada, including air traffic controllers, pilots, and aircraft maintenance engineers. Transport Canada Civil Aviation has more than 800 inspectors working with Canadian airline operators, aircraft manufacturers, airport operators, and air navigation service providers to maintain the safety of Canada's aviation system. These inspectors monitor, inspect, and audit Canadian aviation companies to verify their compliance with Transport Canada's aviation regulations and standards for pilot licensing, aircraft certification, and aircraft operation. To assess and recommend potential changes to Canada's aviation regulations and standards, the Canadian Aviation Regulation Advisory Council was established. This Council is a joint initiative between government and the aviation community. The Council supports regulatory meetings and technical working groups, which members of the aviation community can attend. A number of nongovernmental organizations (including airline operators, aviation labor organizations, manufacturers, industry associations, and groups representing the public) are members.

The Transportation Safety Board (TSB) of Canada is similar to NTSB in the United States. TSB is a federal agency that operates independently of Transport Canada Civil Aviation. Its mandate is to advance safety in the areas of marine, pipeline, rail, and aviation transportation by conducting independent investigations, including public inquiries when necessary, into selected transportation occurrences in order to make findings as to their causes and contributing factors; identifying safety deficiencies, as evidenced by transportation occurrences; making recommendations designed to reduce or eliminate any such safety deficiencies; and reporting publicly on its investigations and findings. Under its mandate to conduct investigations, TSB conducts safety-issue-related investigations and studies. It also maintains a mandatory incident-reporting system for all modes of transportation. TSB and Transport Canada Civil Aviation use the statistics derived from this information to track potential safety concerns in Canada's transportation system. TSB investigates aircraft accidents that occur in Canada or involve aircraft built there. Like NTSB, the Transportation Safety Board can recommend air safety improvements to Transport Canada Civil Aviation.

Europe supplements the ICAO framework with the European Civil Aviation Conference, an informal forum through which 38 European countries formulate policy on civil aviation issues, including safety, but do not explicitly address passenger health issues. In addition, the European Union issues legislation concerning aviation safety, certification, and licensing requirements but has not adopted legislation specifically related to passenger health. One European directive requires that all member states assess and limit crewmembers' exposure to radiation from their flight duties and provide them with information on the effects of such radiation exposure. The European Commission is also providing flight crewmembers and other mobile workers with free health assessments prior to employment, with follow-up health assessments at regular intervals. Another European supplement to the ICAO framework is the Joint Aviation Authorities (JAA), which represents the civil aviation regulatory authorities of a number of European states that have agreed to cooperate in developing and implementing common safety regulatory standards and procedures. JAA uses staff of these authorities to carry out its responsibilities for making, standardizing, and harmonizing aviation rules, including those for aviation safety, and for consolidating common standards among member countries.
In addition, JAA is to cooperate with other regional organizations or national European state authorities to reach at least JAA’s safety level and to foster the worldwide implementation of harmonized safety standards and requirements through the conclusion of international arrangements. Membership in JAA is open to members of the European Civil Aviation Conference, which currently consists of 41 member countries. Currently, 37 countries are members or candidate members of JAA. JAA is funded by national contributions; income from the sale of publications and training; and income from other sources, such as user charges and European Union grants. National contributions are based on indexes related to the size of each country’s aviation industry. The “largest” countries (France, Germany, and the United Kingdom) each pay around 16 percent and the smallest around 0.6 percent of the total contribution income. For 2003, JAA’s total budget was about 6.6 million euros. In early 1998, JAA launched the Safety Strategy Initiative to develop a focused safety agenda to support the “continuous improvement of its effective safety system” and further reduce the annual number of accidents and fatalities regardless of the growth of air traffic. Two approaches are being used to develop the agenda: The “historic approach” is based on analyses of past accidents and has led to the identification of seven initial focus areas—controlled flight into terrain, approach and landing, loss of control, design related, weather, occupant safety and survivability, and runway safety. The “predictive approach” or “future hazards approach” is based on an identification of changes in the aviation system. JAA is cooperating in this effort with FAA and other regulatory bodies to develop a worldwide safety agenda and avoid duplication of effort. FAA has taken the lead in the historic approach, and JAA has taken the lead in the future hazards approach. JAA officials told us that they use a consensus-based process to develop rules for aviation safety, including cabin occupant safety and health-related issues. Reaching consensus among member states is time consuming, but the officials said the time invested was worthwhile. Besides making aviation-related decisions, JAA identifies and resolves differences in word meanings and subtleties across languages—an effort that is critical to reaching consensus. JAA does not have regulatory rule-making authority. Once the member states are in agreement, each member state’s legislative authority must adopt the new requirements. Harmonizing new requirements with U.S. and other international aviation authorities further adds to the time required to implement new requirements. According to JAA officials, they use expert judgment to identify and prioritize research and development efforts for aviation safety, including airliner cabin occupant safety and health issues, but JAA plans to move toward a more data-driven approach. While JAA has no funding of its own for research and development, it recommends research priorities to its member states. However, JAA officials told us that member states’ research and development efforts are often driven by recent airliner accidents in the member states, rather than by JAA’s priorities. The planned shift from expert judgment to a more data-driven approach will require more coordination of aviation research and development across Europe. 
For example, in January 2001, a stakeholder group formed by the European Commissioner for Research issued a planning document entitled European Aeronautics: A Vision for 2020, which, among other things, characterized European aeronautics as a cross-border industry, whose research strategy is shaped within national borders, leading to fragmentation rather than coherence. The document called for better decision-making and more efficient and effective research by the European Union, its member states, and aeronautics stakeholders. JAA officials concurred with this characterization of European aviation research and development.

Changes lie ahead for JAA and aviation safety in Europe. The European Union recently created a European Aviation Safety Agency, which will gradually assume responsibility for rule-making, certification, and standardization of the application of rules by the national aviation authorities. This organization will eventually absorb all of JAA's functions and activities. The full transition from JAA to the safety agency will take several years; per the regulation, the European Aviation Safety Agency must begin operations by September 28, 2003, and transition to full operations by March 2007.

The key regulatory actions that FAA has taken to improve the safety and health of cabin occupants, together with the purpose and status of each, are summarized below.

Improved seats: FAA required that airplane seats be subjected to more rigorous testing than was previously required. The tests subject airplane seats to the forward, downward, and other directional movements that can occur in an accident, and likely injuries under various conditions are estimated by using instrumented crash test dummies. The purpose was to improve the crashworthiness of airplane seats and their ability to prevent or reduce the severity of head, back, and femur injuries. This rule was published on May 17, 1988, and became effective June 16, 1988. However, only the newest generation of airplanes is required to have fully tested and certificated 16g seats. FAA proposed a retrofit rule on October 4, 2002, to phase in 16g seats fleetwide within 14 years after adoption of the final rule.

Overhead bins: FAA issued an airworthiness directive requiring corrective action for overhead bin designs found not to meet the existing rules. The purpose was to improve the crashworthiness of some bins after failures were observed in a 1989 crash in Kegworth, England. The airworthiness directive to improve bin connectors became effective November 20, 1992, and applied to Boeing 737 and 757 aircraft.

Cabin interior materials: In 1986, FAA upgraded the fire safety standards for cabin interior materials in transport airplanes, establishing a new test method to determine the heat release from materials exposed to radiant heat, and set allowable criteria for heat release rates. The purpose was to give airliner cabin occupants more time to evacuate a burning airplane by limiting heat releases and smoke emissions when cabin interior materials are exposed to fire. FAA required that all commercial aircraft produced after August 20, 1988, have panels that exhibit reduced heat releases and smoke emissions to delay the onset of flashover. Although there was no retrofit of the existing fleet, FAA is requiring that these improved materials be used whenever the cabin is substantially refurbished.

Seat cushions: In 1984, FAA issued a regulation that enhanced flammability requirements for seat cushions. The purpose was to retard burning of cabin materials to increase evacuation time. This rule required compliance by November 26, 1987.

Cabin fire extinguishing: A related rule, intended to extinguish in-flight fires, became effective April 29, 1985, and required compliance by April 29, 1986.

Lavatory smoke detectors: In March 1985, FAA issued a rule requiring air carriers to install smoke detectors in lavatories within 18 months. The purpose was to identify and extinguish in-flight fires. This rule became effective on April 29, 1985, and required compliance by October 29, 1986.

Lavatory automatic fire extinguishers: In March 1985, FAA required air carriers to install automatic fire extinguishers in the waste paper bins in all aircraft lavatories. The purpose was to extinguish and prevent in-flight fires. This rule became effective on April 29, 1985, and required compliance by April 29, 1987.

Cargo compartment liners: In 1986, FAA upgraded the airworthiness standards for ceiling and sidewall liner panels used in cargo compartments of transport category airplanes. The purpose was to improve fire safety in the cargo and baggage compartment of certain transport airplanes. This rule required compliance on March 20, 1998.

Cargo compartment fire detection and suppression: In 1998, FAA required air carriers to retrofit fire detection and suppression systems in certain cargo compartments; this rule applied to over 3,400 airplanes in service and all newly manufactured airplanes. The purpose was to improve fire safety in the cargo and baggage compartment of certain transport airplanes. This rule became effective March 19, 1998, requiring compliance on March 20, 2001.

Access to Type III exits: This rule requires improved access to the Type III emergency exits (typically smaller, overwing exits) by providing an unobstructed passageway to the exit. Transport aircraft with 60 or more passenger seats were required to comply with the new standards. The purpose was to help ensure that passengers have an unobstructed passageway to exits during an emergency. This rule became effective June 3, 1992, requiring changes to be made by December 3, 1992.

Public address system: independent power source: This rule requires that the public address system be independently powered for at least 10 minutes and that at least 5 minutes of that time be during announcements. The purpose was to eliminate reliance on engine or auxiliary-power-unit operation for emergency announcements. This rule became effective November 27, 1989, for air carrier and air taxi airplanes manufactured on or after November 27, 1990.

Exit row seating: This rule requires that persons seated next to emergency exits be physically and mentally capable of operating the exit and assisting other passengers in emergency evacuations. The purpose was to improve passenger evacuation in an emergency. This rule became effective April 5, 1990, requiring compliance by October 5, 1990.

Emergency exit spacing: This rule was issued to limit the distance between adjacent emergency exits on transport airplanes to 60 feet. The purpose was to improve passenger evacuation in an emergency. This rule became effective July 24, 1989, imposing requirements on airplanes manufactured after October 16, 1987.

Floor proximity emergency escape path marking: Airplane emergency lighting systems must visually identify the emergency escape path and identify each exit from the escape path. The purpose was to improve passenger evacuation when smoke obscures overhead lighting. This rule became effective November 26, 1984, requiring implementation for large transport airplanes by November 26, 1986.

Emergency evacuation slides: Emergency evacuation slides manufactured after December 3, 1984, must be fire resistant and comply with new radiant heat testing procedures. The purpose was to improve passenger evacuation. This technical standard became effective for all evacuation slides manufactured after December 3, 1984.

Emergency medical kits: In 1986, FAA issued a rule requiring commercial airlines to carry emergency medical kits. The purpose was to improve air carriers' preparation for in-flight emergencies. This rule became effective August 1, 1986, requiring compliance as of that date.

Seat belt use advisory: In June 1995, following two serious events involving turbulence, FAA issued a public advisory to airlines urging the use of seat belts at all times when passengers are seated but concluded that existing rules did not require strengthening. The purpose was to prevent passenger injuries from turbulence by increasing public awareness of the importance of wearing seatbelts. Information is currently posted on FAA's Web site, and in May 2000, FAA instituted the Turbulence Happens public awareness campaign.

Technical note: Class C category cargo compartments are required to have built-in extinguishing systems to control fire in lieu of crewmember accessibility. Class D category cargo compartments are required to completely contain a fire without endangering the safety of the airplane occupants.

This appendix presents information on the background and status of potential advancements in impact safety that we identified, including the following: retrofitting all commercial aircraft with more advanced seats, improving the ability of airplane floors to hold seats in an accident, preventing overhead luggage bins from becoming detached or opening, requiring child safety restraints for children under 40 pounds, and installing lap belts with self-contained inflatable air bags.

In commercial transport airplanes, the ability of a seat to protect a passenger from the forces of impact in an accident depends on reducing the forces of impact to levels that a person can withstand, either by spreading the impact over a larger part of the person's body or by decreasing the duration of the impact through the use of energy-absorbing seats, an energy-absorbing fuselage and floors, or restraints such as seat belts or inflatable seat belt air bags adapted from automobile technology. In a 1996 study by R.G.W. Cherry & Associates, enhancing occupant restraint was ranked as the second most important of 33 potential ways to improve air crash survivability. Boeing officials noted that the industry generally agrees with this view but that FAA and the industry are at odds over the means of implementing these changes. According to an aviation safety expert, seats and restraints should be considered as a system that involves the seats themselves, seat restraints such as seat belts, seat connections to the floor, the spacing between seats, and furnishings in the cabin area that occupants could strike in an accident. To protect the occupant, a seat must not only absorb energy well but also stay attached to the floor of the aircraft. In other words, the "tie-down" chain must remain intact. Although aircraft seat systems are designed to withstand about 9 to 16 times the force of gravity, the limits of human tolerance to impact substantially exceed the aircraft and seat design limits. A number of seat and restraint devices have been shown in testing to improve survivability in aviation accidents. Several options are to retrofit the entire current fleet with fully tested 16g seats, use rearward-facing seats, require three-point auto-style seat belts with shoulder harnesses, and install auto-style air bags.

FAA regulations require seats for newly certified airplane designs to pass more extensive tests than were previously required to protect occupants from impact forces of up to 16 times the force of normal gravity in the forward direction; seat certification standards include specific requirements to protect against head, spine, and leg injuries (see fig. 5). FAA first required 16g seats and tests for newly designed, certificated airplanes in 1988; new versions of existing designs were not required to carry 16g seats.
Since 1988, however, in anticipation of a fleetwide retrofit rule, manufacturers have increasingly equipped new airplanes with "16g-compatible" seats that have some of the characteristics of fully certified 16g seats. Certifying a narrow-body airplane type to full 16g seat certification standards can cost $250,000. In 1998, FAA estimated that 16g seats would avoid between about 210 and 410 fatalities and between 220 and 240 serious injuries over the 20-year period from 1999 through 2018. A 2000 study funded by FAA and the British Civil Aviation Authority estimated that if 16g seats had been installed in all airplanes that crashed from 1984 through 1998, between 23 and 51 fewer U.S. fatalities and between 18 and 54 fewer U.S. serious injuries would have occurred over the period. A number of accidents analyzed in that study showed no benefit from 16g seats because it was assumed that 16g seats would have detached from the floor, offering no additional benefits compared with older seats. Worldwide, the study estimated, about 333 fewer fatalities and 354 fewer serious injuries would have occurred during the period had the improved seats been installed. Moreover, if fire risks had been reduced, the estimated benefits of 16g seats might have increased dramatically, as more occupants who were assumed to survive the impact but die in the ensuing fire would then have survived both the impact and fire.

Seats that meet the 16g certification requirements are currently available and have been required on newly certificated aircraft designs since 1988. However, newly manufactured airplanes of older certification, such as Boeing 737s, 757s, or 767s, were not required to be equipped with 16g certified seats. Recently, FAA has negotiated with manufacturers to install full 16g seats on new versions of older designs, such as all newly produced 737s. In October 2002, FAA published a new proposal to create a timetable for all airplanes to carry fully certified 16g seats within 14 years. The comment period for the currently proposed rule ended in March 2003. Under this proposal, airframe manufacturers would have 4 years to begin installing 16g seats in newly manufactured aircraft only, and all airplanes would have to be equipped with full 16g seats within 14 years or when scheduled for normal seat replacement. FAA estimated that upgrading passenger and flight attendant seats to meet full 16g requirements would avert approximately 114 fatalities and 133 serious injuries over 20 years following the effective date of the rule. This includes 36 deaths that would be prevented by improvements to flight attendant seats that would permit attendants to survive the impact and to assist more passengers in an evacuation. FAA estimated the costs to avert 114 fatalities and 133 serious injuries at $245 million in present-value terms, or $519 million in overall costs, which, according to FAA's analysis, would approximate the monetary benefits from the seats. FAA estimated that about 7.5 percent of airplane seats would have to be replaced before they would ordinarily be scheduled for replacement. FAA's October 2002 proposal divides seats into three classes according to their approximate performance level. Although FAA does not know how many seats of each type are in service, it estimates that about 44 percent of commercial-service aircraft are equipped with full 16g seats, 55 percent have 16g-compatible seats, and about 1 percent have 9g seats.
The 16g-compatible or partial 16g seats span a wide range of capabilities; some are nearly identical to full 16g seats but have been labeled as 16g-compatible to avoid more costly certification, and other partial 16g seats offer only minor improvements over the older generation of 9g seats. To determine whether these seats have the same performance characteristics as full 16g seats, it may be sufficient, in some cases, to review the company's certification paperwork; in other cases, however, full crash testing of actual 16g seats may be necessary to determine the level of protection provided. FAA is currently considering the comments it received on its October 2002 proposal. Industry comments raised concerns about general costs, the costs of retrofitting flight attendant seats, and the possibility that older airplanes designed for 9g seats might require structural changes to accommodate full 16g seats. One comment expressed the desire to give some credit for, and to "grandfather" in, at least some partial 16g seats.

In an accident, a passenger's chances of survival depend on how well the passenger cabin maintains "living space" and how well the passenger is "tied down" within that space. Many experts and reports have noted floor retention (the ability of the aircraft cabin floor to remain intact and hold the passenger's seat and restraint system during a crash) as critical to increasing the passenger's chances of survival. Floor design concepts developed during the late 1940s and 1950s form the basis for the cabin floors found in today's modern airplanes. Accident investigations have documented failures of the floor system in crashes. New 16g seat requirements were developed in the 1980s. The 16g seats were intended to be retrofitted on aircraft with traditional 9g floors and were designed to maximize the capabilities of existing floor strength. While 16g seats might be strong, they could also be inflexible and thus fail if the floor deformed in a crash. Under the current 16g requirement, the seats must remain attached to a deformed seat track and floor structure representative of that used in the airplane. To meet these requirements, the seat was expected to permanently deform to absorb and limit impact forces even if the 16g test conditions were exceeded during a crash.

A major accident related to floor deformation occurred at Kegworth, England, in 1989. A Boeing 737-400 airplane flew into an embankment on approach to landing. In total, only 21 of the 52 triple seats, all "16g-compatible," remained fully attached to the cabin floor; 14 of those that remained attached were in the area where the wing passes through the cabin, an area that is stronger than other areas because it supports the wing. In this section of the airplane, the occupants generally survived, even though they were exposed to an estimated peak level of 26gs. The front part of the airplane was destroyed, including the floor; most of these seats separated from the airplane, killing or seriously injuring the occupants. An FAA expert noted that the impact was too severe for the airplane to maintain its structural integrity and that 16g seats were not designed for an accident of that severity. The British Air Accidents Investigation Branch noted that fewer injuries occurred in the accident than would probably have been the case with earlier-generation seats. However, the Branch also noted that "relatively minor engineering changes could significantly improve the resilience and toughness of cabin floors . . .
and take fuller advantage of the improved passenger seats.” The Branch reported that where failures occurred, it was generally the seat track along the floor that failed, and not the seat, and that the rear attachments generally remained engaged with the floor, “at least partially due to the articulated joint built into the rear attachment, an innovation largely stemming from the FAA dynamic test requirements.” The Branch concluded that “seats designed to these dynamic requirements will certainly increase survivability” but “do not necessarily represent an optimum for the long term . . . if matched with cabin floors of improved strength and toughness.”

Several reports have recommended structural improvements to floors. A case study of 11 major accidents for which detailed information was available found floor issues to be a major cause of injury or fatalities in 4 accidents and a minor cause in 1 accident. Another study estimated the past benefits of 16g seats in U.S. accidents between 1984 and 1998 and found no hypothetical benefit from 16g seats in a number of accidents because the floor was extensively disrupted during impact. In other words, unless the accidents had been less severe or the floor and seat tracks had been improved beyond the 9g standard on both new and old jets, newer 16g seats would not have offered additional benefits compared with the older seats that were actually on the airplane during the accidents under study.

A research program on seat and floor strength was recently conducted by the French civil aviation authority, the Direction Générale de l’Aviation Civile. Initial findings of the research on seat-floor attachments have not shown dramatic results; the tests showed no rupture or plastic deformation of any cabin floor parts during a 16g test. However, French officials noted that they plan to perform additional tests with more rigid seats. Because many factors are involved, it is difficult to isolate the interrelated issues and interactions between seats and floors. A possible area for future research, according to French officials, is to examine dynamic floor warping during a crash to improve impact performance.

FAA officials said they have no plans to change floor strength requirements. FAA regulations require floors to meet impact forces likely to occur in “emergency landing conditions,” or generally about 9gs of longitudinal static force. According to several experts, stronger floors could improve the performance of 16g seats. In addition, further improvement in seats beyond the 16g standard would likely require improved floors.

In an airplane crash, overhead luggage bins in the cabin sometimes detach from their mountings along the ceiling and sidewalls and can fall completely or allow pieces of luggage to fall on passengers’ heads (see fig. 6). While only a few cases have been reported in which the impact from dislodged overhead bins was the direct cause of a crash fatality or injury, a study for the British Civil Aviation Authority that attempted to identify the specific characteristics of each fatality in 42 fatal accidents estimated that the integrity of overhead bin stowage was the 17th most important of 32 factors used to predict passenger survivability. Maintaining the integrity of bins may also help speed evacuation after a crash. Safer bins have been designed since bin problems were observed in a Boeing 737 accident in Kegworth, England, in 1989, when nearly all the bins failed and fell on passengers. FAA tested bins in response to that accident.
The Kegworth bins were certified to the current FAA 9g longitudinal static loading standards, among others. When FAA subsequently conducted longitudinal dynamic loading tests on the types of Boeing bins involved, the bins failed. Several FAA experts said that the overhead bins on 737s had a design flaw. FAA then issued an airworthiness directive that called for modifying all bins on Boeing 737 and 757 aircraft. The connectors for the bins were strengthened in accordance with the airworthiness directive, and the new bins passed FAA’s tests. The British Air Accidents Investigation Branch recommended in 1990 that the performance of both bins and latches be tested more rigorously, including the performance of bins “when subjected to dynamic crash pulses substantially beyond the static load factors currently required.” NTSB has made similar recommendations.

Turbulence reportedly injures at least 15 U.S. cabin occupants a year, and possibly over 100. Most of these injuries are to flight attendants who are unrestrained. Some injuries are caused by luggage falling from bins that open in severe turbulence. Estimates of total U.S. airline injuries from bin-related falling luggage range from 1,200 to 4,500 annually, most of which occur during cruising rather than during boarding or disembarking.

The study for the British Civil Aviation Authority noted above found that as many as 70 percent of impact-related accidents involve overhead bins that become detached. However, according to the report, bin detachment does not appear to be a major factor in occupants’ survival, and data are insufficient to support a specific determination about the mechanism of failure.

FAA has conducted several longitudinal and drop tests since the Kegworth accident, including drops of airplane fuselage sections with overhead storage bins installed. A 1993 dynamic vertical drop test showed varying bin performance problems at about 36gs of downward force. An FAA longitudinal test in 1999 tested two types of bins at 6g, at the 9g FAA certification requirement, and at the 16g level; in the 16g longitudinal test, one of the two bins broke free from its support mountings. In addition to the requirement that they withstand forward (longitudinal) loads of slightly more than 9gs, luggage bins must meet other directional loading requirements. Bin standards are part of the general certification requirements for all onboard objects of mass.

FAA officials said that overhead bins no longer present a problem, appear to function as designed, and meet standards. An FAA official told us that problems such as those identified at Kegworth have not appeared in later crashes. Another FAA official said that while Boeing has had some record of bin problems, the problems are occasional and quickly rectified through design changes. Boeing officials told us that the evidence that bins currently have latch problems is anecdotal. Suggestions for making bins safer in an accident include adding features to absorb impact forces and keep bins attached and closed during structural deformation; using dynamic 16g longitudinal impact testing standards similar to those for seats; and storing baggage in alternative compartments in the main cabin, elsewhere in the aircraft, or under seats raised for that purpose.

Using a correctly designed child safety seat that is strapped into an airplane seat offers protection to a child in an accident or turbulence (see fig. 6).
By contrast, according to many experts, holding a child under two years old on an adult’s lap, which is permitted, is unsafe both for the child and for other occupants who could be struck by the child in an accident. Requiring child safety seats for infants and small children on airplanes is one of NTSB’s “most wanted” transportation safety improvements. The British Air Accidents Investigation Branch made similar recommendations, as did a 1997 White House Commission report on aviation.

An FAA analysis of survivable accidents from 1978 through 1994 found that 9 deaths, 4 major injuries, and 8 minor injuries to children occurred. The analysis also found that the use of child safety seats would have prevented 5 deaths, all the major injuries, and 4 to 6 of the minor injuries. Child safety advocates have pointed to several survivable accidents in which children died—a 1994 Charlotte, North Carolina, crash; a 1990 Cove Neck, New York, accident; and a 1987 Denver, Colorado, accident—as evidence of the need for regulation.

A 1992 FAA rule required airlines to allow child restraint systems, but FAA has opposed mandatory child safety seats on the basis of studies showing that requiring adults to pay for children’s seats would induce more car travel, which the studies said was more dangerous for children than airplane travel. One study published in 1995 by DOT estimated that if families were charged full fares for children’s seats, 20 percent would choose other modes of transportation, resulting in a net increase of 82 deaths among children and adults over 10 years. If child safety seats are required, airlines may require adults wishing to use them to purchase an extra seat for the child’s safety seat. FAA officials told us that they could not require that the seat next to a parent be kept open for a nonpaying child.

However, NTSB has testified that the scenarios for passengers taking other modes of transportation are flawed because FAA assumed that airlines would charge full fares for infants currently traveling free. NTSB noted in 1996 that airlines would offer various discounts and free seats for infants in order to retain $6 billion in revenue that would otherwise be lost to auto travel. Airlines have already responded to parents who choose to use child restraint systems with scheduling flexibility, and many major airlines offer a 50 percent discount off any fare for a child under 2 to travel in an approved child safety seat. The 1995 DOT study, however, estimated that even if a child’s seat on an airplane were discounted 75 percent, some families would still choose car travel and that the choice by those families to drive instead of fly would result in a net increase of 17 child and adult deaths over 10 years.

In FAA tests simulating airplane accidents, some but not all commercially available automobile child restraint systems have provided adequate protection. Prices range from less than $100 for a child safety seat marketed for use in both automobiles and airplanes to as much as $1,300 for a child safety seat developed specifically for use in airplanes. A drawback to having parents, rather than airlines, provide child safety seats for air travel is that some models are difficult to fit properly into airplane seat belts. While the performance of standardized airline-provided seats may be better than that of varied FAA-certified auto-airplane seats, one airline said that providing seats could present logistical problems.
However, Virgin Atlantic Airlines supplies its own specially developed seats and prohibits parents from using their own child seats. Because turbulence can be a more frequent danger to unrestrained children than accidents, one expert told us that a compromise solution might include allowing some type of alternative in-flight restraint.

Child safety seats are currently available for use on aircraft. The technical issues involved in designing and manufacturing safe seats for children to use in both cars and airplanes have largely been solved, according to FAA policy officials and FAA researchers. Federal regulations establish requirements for child safety seats designed for use in both highway vehicles and aircraft by children weighing up to 50 pounds.

FAA officials explained that regulations requiring child safety seats have been delayed, in part, because of public policy concerns that parents would drive rather than fly if they were required to buy seats for their children. On February 18, 1998, FAA asked for comments on an advance notice of proposed rule-making to require the use of child safety seats for children under the age of 2. FAA sponsored a conference in December 1999 to examine child restraint systems. At that conference, the FAA Administrator said the agency would mandate child safety seats in aircraft and provide children with the same level of safety as adults. FAA officials told us that they are still considering requiring the use of child safety seats but have not made a final decision to do so.

If FAA does decide to provide “one level of safety” for adults and children, as NTSB advocates, parents may opt to drive to their destinations to avoid higher travel costs, thereby statistically exposing themselves and their children to more danger. In addition, FAA will have to decide whether the parents or airlines will provide the seats. If FAA decides to require child safety seats, it will need to harmonize its requirements with those of other countries, where regulations on child restraint systems vary. In Canada, as in the United States, child safety seats are not mandatory on registered aircraft. In Europe, the regulations vary from country to country, but no country requires their use. Australia’s policy permits belly belts but discourages their use. An Australian official said in 1999 that Australia was waiting for the United States to develop a policy in this area and would probably follow that policy.

Lap belts with inflatable air bags are designed to reduce the injuries or death that may result when a passenger’s head strikes the airplane interior. These inflatable seat belts adapt advanced automobile air bag technology to airplane seats in the form of seat belts with embedded air bags. If a passenger loses consciousness because of a head injury in an accident, even a minor, nonfatal concussion can cause death if the airplane is burning and the passenger cannot evacuate quickly. Slowing the impact with an air bag lessens its lethality. According to a manufacturer’s tests using airplane seats on crash sleds, lap belts with air bags can likely reduce some impact injuries to survivable levels. FAA does not require seats to be tested in sled tests for head impact protection when there would be “no impact” with another seat row or bulkhead wall, such as when spacing is increased to 42 inches from the more typical 35 inches.
While more closely spaced economy class seat rows can provide head impact protection through energy-absorbing seat backs, seats in no-impact positions have tested poorly in head injury experiments, resulting in severe head strikes against the occupants’ legs or the floor, according to the manufacturer. This no-impact exemption from FAA’s head injury criteria can include exit rows, business class seats, and seats behind bulkhead walls and could permit as many as 30 percent of seats in some airplanes to be exempt from the head impact safety criteria that row-to-row seats must meet.

According to the manufacturer, 13 airlines have installed about 1,000 of the devices in commercial airliners, mainly at bulkhead seats; about 200 of these are installed in the U.S. fleet. All of the orders and installations so far have been done to meet FAA’s seat safety regulations rather than for marketing reasons, according to the manufacturer. The airlines would appear to benefit from using the devices in bulkhead seats if that would allow them to install additional rows of seats. While the amount of additional revenue would depend on the airplane design and class of seating, two additional seats may produce more net revenue per year than the cost for the devices to be installed throughout an aircraft. The economic constraints are acquisition costs, maintenance costs, and increased fuel costs due to the added weight. The units currently weigh about 3 pounds per seat, or 2 pounds more than current seat belts. According to the manufacturer, the air bag lap belts currently cost $950 to $1,100, including maintenance. The manufacturer estimated that if 5 percent of all U.S. seat positions were equipped with the devices (about 50,000 seats per year), the cost would drop to about $300 to $600 per seat, including installation.

Lap belt air bags have been commercially available for only a few years. FAA’s Civil Aerospace Medical Institute assisted the developers of the devices, and manufacturers are conducting ongoing research for both passenger and military (primarily helicopter) use. FAA and other regulatory bodies have no plans to require their installation, but airlines are allowed to use them. The extent to which these devices are installed will depend on each airline’s analysis of the cost and benefits.

This appendix presents information on the background and status of potential advancements in fire safety that we identified, including the following: preventing fuel tank explosions with fuel tank inerting; preventing in-flight fires with arc fault circuit breakers; identifying in-flight fires with multisensor fire and smoke detectors; suppressing in-flight and postcrash fires by using water mist fire suppression systems; mitigating postcrash damage and injury by using less flammable fuels; mitigating in-flight and postcrash fires by using fire-resistant thermal acoustic insulation; mitigating fire-related deaths and injuries by using ultra-fire-resistant polymers; and mitigating fire deaths and injuries with sufficient airport rescue and fire fighting.

Fuel tank inerting involves pumping nitrogen-enriched air into an airliner’s fuel tanks to reduce the concentration of oxygen to a level that will not support combustion. Nitrogen gas makes a fuel tank safer by serving as a fire suppressant. The process can be performed with both ground-based and onboard systems, and it significantly reduces the flammability of the center wing tanks, thereby lowering the likelihood of a fuel tank explosion.
Following the crash of TWA Flight 800 in 1996, in which 230 people died, NTSB determined that the probable cause of the accident was an explosion in the center wing fuel tank. The explosion resulted from the ignition of flammable fuel vapors in this tank, which is located in the fuselage in the space between the wing junctions. NTSB subsequently placed the improvement of fuel tank design on its list of “Most Wanted Safety Improvements” and recommended that fuel tank inerting be considered an option to eliminate the likelihood of fuel tank explosions.

FAA issued Special Federal Aviation Regulation 88 to eliminate or minimize the likelihood of ignition sources by revisiting the fuel tank’s design. Issued in 2001, the regulation consists of a series of FAA regulatory actions aimed at preventing the failure of fuel pumps and pump motors, fuel gauges, and electrical power wires inside these fuel tanks. In late 2002, FAA amended the regulation to allow for an “equivalent level of safety” and the use of inerting as part of an alternate means of compliance.

In a 2001 report, an Aviation Rule-making Advisory Committee tasked with evaluating the benefits of inerting the center wing fuel tank estimated these benefits in terms of lives saved. After projecting possible in-flight and ground fuel tank explosions and postcrash fires from 2005 through 2020, the committee estimated that 132 lives might be saved from a ground-based system and 253 lives might be saved from an onboard system.

Neither of the two major types of fuel tank inerting—ground-based and onboard—is currently available for use on commercial airliners because additional development is needed. Both types offer benefits and drawbacks. A ground-based system sends a small amount of nitrogen into the center wing tank before departure. Its benefits include that (1) it requires no new technology development for installation, (2) the tank can be inerted in 20 minutes, and (3) it carries a smaller weight penalty. Its drawbacks include that it cannot keep the tank inert during descent, landing, and taxiing to the destination gate and that nitrogen supply systems are needed at each terminal gate and remote parking area at every airport. An onboard system generates nitrogen by transferring some of the engine bleed air—air extracted from the jet engines to supply the cabin pressurization system in normal flight—through a module that separates air into oxygen and nitrogen and discharges the nitrogen-enriched air into the fuel tank. Its benefits include that (1) it is self-reliant and (2) it significantly reduces an airplane’s vulnerability to lightning, static electricity, and incendiary projectiles throughout the flight’s duration. Its drawbacks include that it (1) weighs more, (2) increases the aircraft’s operating costs, and (3) may decrease the aircraft’s reliability.

According to FAA, its fire safety experts’ efforts to develop a lighter-weight system for center wing tank inerting have significantly increased the industry’s involvement. Boeing and Airbus are working on programs to test inerting systems in flight. For example, Boeing has recently completed a flight test program with a prototype system on a 747. None of the U.S. commercial fleet is equipped with either ground-based or onboard inerting systems, though onboard systems are in use in U.S. and European military aircraft. Companies working in this field are focused on developing new inerting technologies or modifying existing ones.
A European consortium is developing a system that combines onboard center wing fuel tank inerting with sensors and a water-mist-plus-nitrogen fire suppression system for commercial airplanes. In late 2002, FAA researchers successfully ground-tested a prototype onboard inerting system using current technology on a Boeing 747SP. New research also enabled the agency to ease a design requirement, making the inerting technology more cost-effective. This new research showed that reducing the oxygen level in the fuel tank to 12 percent—rather than 9 percent, as was previously thought—is sufficient to prevent fuel tank explosions in civilian aircraft. FAA also developed a system that did not need the compressors that some had considered necessary. Together, these findings allowed for reductions in the size and power demands of the system.

FAA plans to focus further development on the more practical and cost-effective onboard fuel tank inerting systems. For example, to further improve their cost-effectiveness, the systems could be designed both to suppress in-flight cargo fires, thereby allowing them to replace Halon extinguishing agents, and to generate oxygen for emergency depressurizations, thereby allowing them to replace stored oxygen or chemical oxygen generators. NASA is also conducting longer-term research on advanced technology onboard inert gas-generating systems and onboard oxygen-generating systems. Its research is intended (1) to develop the technology to improve its efficiency, weight, and reliability and (2) to make the technology practical for commercial air transport. NASA will fund the development of emerging technologies for ground-based technology demonstration in fiscal year 2004. NASA is also considering the extension of civilian transport inerting technology to all fuel tanks to help protect airplanes against terrorist acts during approaches and departures.

The cost of the system, its corresponding weight, and its unknown reliability are the most significant factors affecting the potential use of center wing fuel tank inerting. New cost and weight estimates are anticipated in 2003. In 2001, FAA estimated total costs to equip the worldwide fleet at $9.9 billion for ground-based, and $20.8 billion for onboard, inerting systems. In 2002, FAA officials developed an onboard system for B-747 flight-testing. The estimated cost was $460,000. The officials estimated that each system after that would cost about $200,000. The weight of the FAA prototype system is 160 pounds. A year earlier, NASA estimated the weight for a B-777 system with technology in use in military aircraft at about 550 pounds.

Arcing faults in wiring may provide an ignition source that can start fires. Electrical wiring that is sufficiently damaged might cause arcing or direct shorting, resulting in smoking, overheating, or ignition of neighboring materials. A review of data produced by FAA, the Air Line Pilots Association, and Boeing showed that electrical systems have been a factor in approximately 50 percent of all aircraft occurrences involving smoke or fire and that wiring has been implicated in about 10 percent of those occurrences. In addition, faulty or malfunctioning wiring has been a factor in at least 15 accidents or incidents investigated by NTSB since 1983.
Properly selecting, routing, clamping, tying, replacing, marking, separating, and cleaning around wiring areas, along with proper maintenance, all help mitigate the potential for wire system failures, such as arcing, that could lead to smoke, fire, and loss of function. Chemical degradation, age-induced cracking, and damage from maintenance may all create conditions that could lead to arcing. Arcing can occur between a wire and structure or between different wire types. Wire chafing is a sign of degradation; chafing happens when the insulation around one wire rubs against a component tougher than itself (such as structure or a control cable), exposing the wire conductor. This condition can lead to arcing. When arcing wires are too close to flammable materials or are flammable themselves, fires can occur.

In general, wiring and wiring insulation degrade for a variety of reasons, including age, inadequate maintenance, chemical contamination, improper installation or repair, and mechanical damage. Vibration, moisture, and heat can contribute to and accelerate degradation. Consequences of wire system failures include loss of function, smoke, and fire. Since most wiring is bundled and located in hidden or inaccessible areas, it is difficult to monitor the health of an aircraft’s wiring system during scheduled maintenance using existing equipment and procedures. Failure occurrences have been documented in wiring running to the fuel tank, in the electronics equipment compartment, in the cockpit, in the ceiling of the cabin, and in other locations.

To address the concerns with arcing, arc fault circuit breakers for aircraft use are being developed. The arc fault circuit breaker cuts power off as it senses a wire beginning to arc. It is intended to prevent significant damage before a failure develops into a full-blown arc, which can produce extremely localized heat, char insulation, and generally create problems in the wire bundles. Arc fault circuit protection devices would mitigate arcing events but would not identify the wire breaches and degradation that typically lead up to these events. FAA, the Navy, and the Air Force are jointly developing arc fault circuit breaker technology. Boeing is also developing a monitoring system that detects the status of and changes in wiring and shuts down power when arcing is detected. This system may be able to protect wiring against both electrical overheating and arcing and is considered more advanced than the government’s circuit breaker technology.

FAA developed a plan called the Enhanced Airworthiness Program for Airplane Systems to address wiring problems, which includes the development of arc fault circuit breaker technology and installation guidance, along with proposals for new regulations. The plan provides means for enhancing safety in the areas of wire system design, certification, maintenance, research and development, reporting, and information sharing and outreach. FAA also tasked an Aging Transport Systems Rule-making Advisory Committee to provide data, recommendations, and evaluation specifically on aging wiring systems. The new regulations being considered are entitled the Enhanced Airworthiness Program for Airplane Systems Rule and are expected by late 2005. Under this rule-making package, inspections would evaluate the health of wiring and all of the components needed for its operation, such as connectors and clamps. Part of the system includes visual inspections of all wiring within arm’s reach, enhanced by the use of hand-held mirrors.
This improvement is expected to catch more wiring flaws than current visual inspection practices. Where visual inspections cannot be assumed to detect damage, detailed inspections will be required. The logic process to establish proper inspections is called the Enhanced Zonal Analysis Procedure, which will be issued as an Advisory Circular. This procedure is specifically directed toward enhancing the maintenance programs of aircraft whose current program does not include tasks derived from a process that specifically considers wiring in all zones as the potential source of ignition of a fire.

Additional development and testing will be required before advanced arc fault circuit breakers are available for use on aircraft. FAA is currently conducting a prototype program in which arc fault circuit breakers are installed in an anticollision light system on a major air carrier’s Boeing 737. FAA and the Navy are currently analyzing tests of the circuit breakers to assess their reliability. The Society of Automotive Engineers is in the final stages of developing a Minimum Operating Performance Specification for the arc fault circuit breaker.

Multisensor detectors, or “electronic noses,” could combine one or more standard smoke detector technologies; a variety of sensors for detecting such gases as carbon monoxide, carbon dioxide, or hydrocarbons; and a thermal sensor to more accurately detect and locate overheated or burning materials. The sensors could improve existing fire detection by discovering and locating potential or actual fires sooner and reducing the incidence of false alarms. These “smart” sensors would ignore “nuisance sources,” such as dirt, dust, and condensation, that are often responsible for triggering false alarms in existing systems.

According to studies by FAA and the National Institute of Standards and Technology, many current smoke and fire detection systems are not reliable. A 2000 FAA study indicated that cargo compartment detection systems, for example, resulted in at least one false alarm per week from 1988 through 1990 and a 200:1 ratio of false alarms to actual fires in the cargo compartment from 1995 through 1999. FAA has since estimated a 100:1 cargo compartment false alarm ratio, partly because reported actual incidents have increased. According to FAA’s Service Difficulty Report database, about 990 actual smoke and fire events were reported for 2001.

Multisensor detectors could be wired or wireless and linked to a suppression system. One or several sensor signals or indicators could cause the crew to activate fire extinguishers in a small area or zone, a larger area, or an entire compartment, resulting in a more appropriate and accurate use of the fire suppressant. For example, in areas such as the avionics compartment, materials that can burn are relatively well defined. Multisensor detectors the size of a postage stamp could be designed to detect smoldering fires in cables or insulation or in overheated equipment in that area. Placing the detectors elsewhere in the airplane could improve the crew’s ability to respond to smoke or fire, including occurrences in hidden or inaccessible areas.

Improved sensor detection technologies would both enhance safety by increasing crews’ confidence in the reliability of alarms and reduce costs by avoiding the need to divert aircraft in response to false alarms. One study estimated the average cost of a diversion at $50,000 for a wide-body airplane and $30,000 for a narrow-body airplane.
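One way to see why false alarms matter economically is to combine the diversion costs above with an assumed rate of false-alarm diversions. The sketch below is illustrative only: the per-diversion costs come from the study cited above, but the number of false-alarm diversions per year and the fleet mix are assumptions, not FAA statistics.

```python
# Hedged sketch of the annual cost of diversions driven by false cargo-smoke alarms.
# Per-diversion costs come from the study cited above; the diversion count and
# fleet mix are assumptions for illustration only.
assumed_false_alarm_diversions_per_year = 100   # assumed, not an FAA statistic
narrow_body_share = 0.7                         # assumed fleet mix
average_diversion_cost = (narrow_body_share * 30_000 +
                          (1 - narrow_body_share) * 50_000)   # = $36,000

annual_cost = assumed_false_alarm_diversions_per_year * average_diversion_cost
print(f"Illustrative annual diversion cost: ${annual_cost:,.0f}")   # $3,600,000
```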
A diversion can also present safety concerns because of the possible increased risk of an accident and injuries to passengers and crew if there is (1) an emergency evacuation, (2) a landing at an unfamiliar airport, (3) a change to air traffic patterns, (4) a shorter runway, (5) inferior fire-fighting capability, (6) a loss of cargo load, or (7) inferior navigation aids. In 2002, 258 unscheduled landings due to smoke, fire, or fumes occurred. In addition, 342 flights were interrupted; some of these flights had to return to the gate or abort a takeoff.

FAA established basic detector performance requirements in 1965 and 1980. Detectors were to be made and installed in a manner that ensured their ability to resist, without failure, all vibration, inertia, and other loads to which they might normally be subjected; they also had to be unaffected by exposure to fumes, oil, water, or other fluids. Regulations in 1986 and 1998 further defined basic location and performance requirements for detectors in different areas of the cargo compartment. In 1998, FAA issued a requirement for detection and extinguishment systems for one class of cargo compartments, which relied on oxygen starvation to control fires. This requirement significantly increased the number of detectors in use.

Multisensor detectors are not currently available because additional research is needed. Although they have been demonstrated in the laboratory and on the ground, they have not been flight-tested. FAA and NASA have multisensor detector research and development efforts under way and are working to develop “smart” sensors and criteria for their approval. FAA will also finish revising an Advisory Circular that establishes test criteria for detection systems, designed to ensure that they respond to fires but not to nonfire sources. In addition, several companies currently market “smart” detectors, mostly for nonaviation applications. For example, thermal detection systems sense and count certain particles that initially boil off the surface of smoldering or burning material. A European consortium has been developing a system, FIREDETEX, that combines the use of multisensor detectors, onboard fuel tank inerting, and water-mist-plus-nitrogen fire suppression systems for commercial airplanes. This program and associated studies are still ongoing, and flight testing is planned for the last quarter of calendar year 2003. The results of tests on this system are expected to be made public in early 2004 and will help to clarify the possible costs, benefits, and drawbacks of the combined system.

Additional research, development, and testing will be required before multisensor technology is ready for use in commercial aviation. NASA, FAA, and private companies are pursuing various approaches. Some experts believe that some forms of multisensor technology could be in use in 5 years. When these units become available, questions may arise about where their use will be required. For example, the Canadian Transportation Safety Board has recommended that some areas in addition to those currently designated as fire zones may need to be equipped with detectors. These include the electronics and equipment bay (typically below the floor beneath the cockpit and in front of the passenger cabin), areas behind interior wall panels in the cockpit and cabin areas, and areas behind circuit breaker and other electronic panels.
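The multisensor concept described above is essentially a data-fusion rule: an alarm is raised only when independent indications (smoke, combustion gases, temperature) agree, which is what would let the detector ignore nuisance sources such as dust or condensation. The sketch below is a minimal illustration of such a voting rule, not any vendor’s or FAA’s actual algorithm; the thresholds are arbitrary placeholders.

```python
# Minimal sketch of a multisensor ("electronic nose") alarm rule; thresholds are
# arbitrary placeholders, not values from FAA, NASA, or any manufacturer.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    smoke_obscuration: float   # percent per foot from a conventional smoke detector
    co_ppm: float              # carbon monoxide concentration
    temperature_c: float       # local temperature

def fire_alarm(r: SensorReadings) -> bool:
    """Require at least two independent indications before alarming, so that
    dust or condensation tripping the smoke channel alone is ignored."""
    indications = [
        r.smoke_obscuration > 2.0,   # assumed smoke threshold
        r.co_ppm > 50.0,             # assumed combustion-gas threshold
        r.temperature_c > 60.0,      # assumed overheat threshold
    ]
    return sum(indications) >= 2

print(fire_alarm(SensorReadings(3.5, 5.0, 24.0)))    # False: smoke only (likely dust)
print(fire_alarm(SensorReadings(3.5, 80.0, 24.0)))   # True: smoke plus carbon monoxide
```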
For over two decades, the aviation industry has evaluated the use of systems that spray water mist to suppress fires in airliner cabins, cargo compartments, and engine casings (see fig. 7). This effort was prompted, in part, by a need to identify an alternative to Halon, the primary chemical used to extinguish fires aboard airliners. With few exceptions, Halon is the sole fire suppressant installed in today’s aircraft fire suppression systems. However, the production of Halon was banned under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer, and its use in many noncritical sectors has been phased out. Significant reserves of Halon remain, and its use is still allowed in certain “critical use” applications, such as aerospace, because no immediate viable replacement agent exists. To enable the testing and further development of suitable alternatives to and substitutes for Halon, FAA has drafted detailed standards for replacements in the cargo and engine compartments. These standards typically require replacement systems to provide the same level of safety as the currently used Halon extinguishing system.

According to FAA and others in the aviation industry, successful water mist systems could provide benefits against an in-flight or postcrash fire, including cooling the passengers, cabin surfaces, furnishings, and overall cabin; decreasing toxic smoke and irritant gases; and delaying or preventing “flashover” fires from occurring. In addition, a 1996 study prepared for the British Civil Aviation Authority examined 42 accidents and 32 survivability factors and found that cabin water spray was the factor that showed the greatest potential for reducing fatality and injury rates.

In the early 1990s, a joint FAA and Civil Aviation Authority study found that cabin water mist systems would be highly effective in improving survivability during a postcrash fire. However, the cost of using these systems outweighed the benefits, largely because of the weight of the water that airliners would be required to carry to operate them. In the mid- and late-1990s, FAA and others began examining water mist systems in airliner cargo compartments to help offset the cost of a cabin water mist system because the water could be used or shared by both the cargo compartment and the cabin. European and U.S. researchers also designed systems that required much less water because they targeted specific zones within an aircraft to suppress fires rather than spraying water throughout the cabin or the cargo compartment.

In 2000, Navy researchers found a twin-fluid system to be highly reliable and maintenance-free. Moreover, this system’s delivery nozzles could be installed without otherwise changing cabin interiors. The Navy researchers’ report recommended that FAA and NTSB perform follow-up testing leading to the final design and certification of an interior water mist fire suppression system for all passenger and cargo transport aircraft. Also in 2000, a European consortium began a collaborative research project called FIREDETEX, which combines multisensor fire detectors, water mist, and onboard fuel tank inerting into one fire detection and suppression system. In 2001 and 2002, FAA tested experimental mist systems to determine which could meet its preliminary minimum performance standards for cargo compartment suppression systems. A system that combines water mist with nitrogen met these minimum standards.
In this system, water and nitrogen “knock down” the initial fire, and nitrogen suppresses any deep-seated residual fire by inerting the entire compartment. In cargo compartment testing, this system maintained cooler temperatures than had either a plain water mist system or a Halon-based system.

Additional research and testing are needed before water mist technology can be considered for commercial aircraft. For example, the weight and relative effectiveness of any water mist system would need to be considered and evaluated. In addition, the consequences of using water in an aircraft would need to be further evaluated. For example, Boeing officials noted that using a water mist fire suppression system in the cabin in a postcrash fire might actually reduce passenger safety if the mist or fog creates confusion among the passengers, leading to longer evacuation times. Further concerns include the possible shorting of electrical wiring and equipment and damage to aircraft interiors (e.g., seats, entertainment equipment, and insulation). Water cleanup could also be difficult and require special drying equipment.

Burning fuel typically dominates and often overwhelms postcrash fire scenarios and causes even the most fire-resistant materials to burn. Fuel spilled from tanks ruptured upon crash impact often forms an easily ignitable fuel-air mixture. A more frequent fuel-related problem is the fuel tank explosion, in which a volatile fuel-air mixture inside the fuel tank is ignited, often by an unknown source. For example, it is believed that fuel tank explosions destroyed a Philippine Airlines 737 in 1990, TWA Flight 800 in 1996, and a Thai Airways 737 in 2001. Therefore, reducing the flammability of fuel could improve survivability in postcrash fires as well as reduce the occurrence of fuel tank explosions.

Reducing fuel flammability involves limiting the volatility of fuel and the rate at which it vaporizes. Liquid fuel can burn only when enough fuel vapor is mixed with air. If the fuel cannot vaporize, a fire cannot occur. This principle is behind the development of higher-flashpoint fuel, whose use can decrease the likelihood of a fuel tank explosion. The flash point is the lowest temperature at which a liquid fuel produces enough vapor to ignite in the presence of a source of ignition—the lower the flash point, the greater the risk of fire. If the fuel is volatile enough, however, and air is sucked into the fuel tank area upon crash impact, limiting the fuel’s vaporization can prevent a burnable mixture from forming. This principle supports the use of additives that modify the viscosity of fuel to limit postcrash fires; for example, antimisting kerosene contains such additives. According to FAA and NASA, however, these additives do nothing to prevent fuel tank explosions.

From the early 1960s to the mid-1980s, FAA conducted research on fuel safety. The Aviation Safety Act of 1988 required that FAA undertake research on low-flammability aircraft fuels, and, in 1993, FAA developed plans for fuel safety research. In 1996, a National Research Council experts’ workshop on aviation fuel summarized existing fuel safety research efforts. The participants concluded that although postcrash fuel-fed aircraft fires had been researched, limited progress had been achieved and little work had been published. As part of FAA’s research, fuels have been modified with thickening polymer additives to slow down vaporization in crashes.
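The flash point concept above can be made concrete with typical specification values: commercial Jet A has a minimum flash point of about 38°C, while the higher-flashpoint fuel used by the Navy (assumed here to be JP-5) is specified at about 60°C. The sketch below uses those typical published minimums for illustration only; it is not an FAA flammability model.

```python
# Illustrative flash-point comparison; the fuel names and values are typical
# specification minimums used for illustration, not data from this report.
TYPICAL_FLASH_POINT_C = {
    "Jet A (commercial)": 38,
    "JP-5 (Navy, higher flash point)": 60,
}

def can_sustain_ignitable_vapor(fuel: str, fuel_temperature_c: float) -> bool:
    """True if the bulk fuel is at or above its flash point, i.e., warm enough
    to produce an ignitable vapor-air mixture at the liquid surface."""
    return fuel_temperature_c >= TYPICAL_FLASH_POINT_C[fuel]

for fuel in TYPICAL_FLASH_POINT_C:
    print(fuel, "ignitable vapor at 45 C?", can_sustain_ignitable_vapor(fuel, 45.0))
# Jet A: True; JP-5: False, one reason a higher flash point reduces fire risk.
```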
Participants in the 1996 National Research Council workshop identified several long-term research goals for consideration in developing modified fuels and fuel additives to improve fire safety. They also agreed that a combination of effective fire-safe fuel additives could probably be either selected or designed, provided that fuel performance requirements were identified in advance. In addition, they agreed that existing aircraft designs that reduce the chance of fuel igniting do not present major barriers to the implementation of a fire-safe fuel. A 1996 European Transport Safety Council report suggested that antimisting kerosene be at least partially tested on regular military transport flights (e.g., in one tank, feeding one engine) to demonstrate its operational compatibility. The report also recommended the consideration of a study comparing the costs of the current principal commercial fuel and the special, higher-flashpoint fuel used by the Navy. According to NASA and FAA fire-safe fuels experts, military fuel is much harder to burn in storage or to ignite in a pan because of its lower volatility; however, it is just as flammable as aviation fuel when it is sprayed into an engine combustor.

Fire-safe fuels are not currently available and are in the early stages of research and development. In January 2002, NASA opened a fire-safe fuels research branch at its Glenn Research Center in Ohio. NASA-Glenn is conducting aviation fuel research that evaluates fuel vapor flammability in conjunction with FAA’s fuel tank inerting program, including the measurement of fuel “flash points.” NASA is examining the effects of surfactants, gelling agents, and chemical composition changes on the vaporization and pressure characteristics of jet fuel. In addition to FAA’s and NASA’s research, some university and industry researchers have made progress in developing fire-safe fuels. Many use advanced analytical and computational modeling technologies to inform their research. A council of producers and users of fuels is also coordinating research on ways to use such fuels. NASA fuel experts remain optimistic that small changes in fuel technologies can have a big impact on fuel safety. Developing fire-safe fuels will require much more research and testing. There are significant technical difficulties associated with creating a fuel that meets aviation requirements while meaningfully decreasing its flammability.

To keep an airplane quieter and warmer, a layer of thermal acoustic insulation material is attached to paneling and walls throughout the aircraft. This insulation, if properly designed, can also prevent or limit the spread of an in-flight fire. In addition, thermal acoustic insulation provides a barrier against a fire burning through the cabin from outside the airplane’s fuselage (see fig. 8). Such a fire, often called a postcrash fire, may occur when fuel is spilled on the ground after a crash or an impact. While this thermal acoustic insulation material could help prevent the spread of fire, some of the insulation materials that have been used in the past have contributed to fires. For example, FAA indicated that an insulation material, called metallized Mylar®, contributed to at least six in-flight fires. Airlines have stopped using this material and are removing it from existing aircraft. FAA’s two main efforts in this area are directed toward preventing fatal in-flight fires and improving postcrash fire survivability.
Since 1998, FAA has been developing test standards for preventing in-flight fires in response to findings that fire spread on some thermal acoustic insulation blanket materials. In 2000, FAA issued a notice of proposed rule-making that outlined new flammability test criteria for in-flight fires. FAA’s in-flight test standards require thermal acoustic insulation materials to protect passengers; according to the standards, insulation materials installed in airplanes must not propagate a fire if ignition occurs.

FAA is also developing more stringent burnthrough test standards for postcrash fires. FAA has been studying the penetration of the fuselage by an external fire—known as fuselage burnthrough—since the late 1980s and believes that improving the fire resistance of thermal acoustic insulation could delay fuselage burnthrough. In laboratory tests conducted from 1999 through 2002, an FAA-led working group determined that insulation is the most potentially effective and practical means of delaying the spread of fire or creating a barrier to burnthrough. In 2002, FAA completed draft burnthrough standards outlining a proposed methodology for testing thermal acoustic insulation. The burnthrough standards would protect passengers and crews by extending by at least 4 minutes the time available for evacuation in a postcrash fire.

Various studies have estimated the potential benefits from both test standards: A 1999 study of worldwide aviation accidents from 1966 through 1993 estimated that about 10 lives per year would have been saved if protection had provided an additional 4 minutes for occupants to exit the airplane. A 2000 FAA study estimated that about 37 U.S. fatalities would be avoided between 2000 and 2019 through the implementation of both proposed standards. A 2002 study by the British Civil Aviation Authority of worldwide aviation accidents from 1991 through 2000 estimated that at least 34 lives per year would have been saved if insulation had met both proposed standards.

Insulation designed to replace metallized Mylar® is currently available. A 2000 FAA airworthiness directive gave the airlines 5 years to remove and replace metallized Mylar® insulation in 719 affected airplanes. Replacement insulation is required to meet the in-flight standard and will be installed in these airplanes by mid-2005. In that airworthiness directive, FAA indicated that it did not consider other currently installed insulation to constitute an unsafe condition.

Thermal acoustic insulation is currently available for installation on commercial airliners. This insulation has been demonstrated to reduce the chance of fatal in-flight fires and to improve postcrash fire survivability. On July 31, 2003, FAA issued a final rule requiring that after September 2, 2005, all newly manufactured airplanes having a seating capacity of more than 20 passengers or a payload capacity of over 6,000 pounds must use thermal acoustic insulation that meets more stringent standards for how quickly flames can spread. In addition, for aircraft of this size manufactured before September 2, 2003, replacement insulation in the fuselage must also meet the new, higher standard. Research is continuing to develop thermal acoustic insulation that provides better in-flight and burnthrough protection. Even when this material is available, the high cost of retrofitting airplanes may limit its use to newly manufactured aircraft.
For example, FAA estimates that the metallized Mylar® retrofit alone will cost a total of $368.4 million, discounted to present-value terms, for the 719 affected airplanes. Because thermal acoustic insulation is installed throughout the pressurized section of the airplane for the life of its service, retrofitting the entire fleet would cost several billion dollars.

Polymers are used in aircraft in the form of lightweight plastics and composites and are selected on the basis of their estimated installed cost, weight, strength, and durability. Most of the aircraft cabin is made of polymeric material. In the event of an in-flight or a postcrash fire, the use of polymeric materials with reduced flammability could give passengers and crew more time to evacuate by delaying the rate at which the fire spreads through the cabin. FAA researchers are developing better techniques to measure the flammability of polymers and to make polymers that are ultra fire resistant. Developing these materials is the long-term goal of FAA’s Fire Research Program, which, if successful, will “eliminate burning cabin materials as a cause of death in aircraft accidents.” Materials being improved include composite and adhesive resins, textile fibers, rubber for seat cushions, and plastics for molded parts used in seats and passenger electronics. (See fig. 9.)

Adding flame-retardant substances to existing materials is one way to decrease their flammability. For example, some manufacturers add substances that release water when they reach a high temperature. When a material, such as wiring insulation, is heated or burns, the water acts to absorb the heat and cools down the fire. Other materials are designed to become surface-scorched on exposure to fire, causing a layer of char to protect the rest of the material from burning. Lastly, adding a type of clay can have a flame-retardant effect. In general, these fire-retardant polymers are formulated to pass an ignition test but do not meet FAA’s criterion for ultra fire resistance, which is a 90 percent reduction in the rate at which the untreated material would burn. To meet this strict requirement, FAA is developing new “smart” polymers that are typical plastics under normal conditions but convert to ultra-fire-resistant materials when exposed to an ignition source or fire.

FAA has adopted a number of flammability standards over the last 30 years. In 1984, FAA issued a retrofit rule that led to the replacement of 650,000 seat cushions with flame-retardant seat cushions at a total cost of about $75 million. The replacement seat cushions were found to delay cabin flashover by 40 to 60 seconds. Fire-retardant seat cushions can also prevent ramp and in-flight fires that originate at a seat and would otherwise burn out of control if left unattended. In 1986 and 1988, FAA set maximum allowable levels of heat and smoke from burning interior materials to decrease the amount of smoke that they would release in a postcrash fire. These standards affected paneling in all newly manufactured aircraft. Airlines and airframe manufacturers invested several hundred million dollars to develop these new panels.

Ultra-fire-resistant polymers are not currently available for use on commercial airliners. These polymers are still in the early stages of research and development. To reduce the cost and simplify the testing of new materials, FAA is employing a new technique to characterize the flammability and thermal decomposition of new products; this technique requires only a milligram of sample material.
The result has been the discovery of several new compositions of matter (including “smart” polymers). The test identifies key thermal and combustion properties that allow rapid screening of new materials. From these materials, FAA plans to select the most promising and work with industry to make enough of the new polymers to fabricate full-scale cabin components, such as sidewalls and stowage bins, for fire testing. FAA’s phased research program includes the selection in 2003 of a small number of resins, plastics, rubbers, and fibers on the basis of their functionality, cost, and potential to meet fire performance guidelines. In 2005, FAA plans to fabricate decorative panels, molded parts, seat cushions, and textiles for testing from 2007 through 2010. Full-scale testing is scheduled for 2011 but is contingent upon the availability of program funds and commercial interest from the private sector.

Research continues on ultra-fire-resistant polymers that will increase protection against in-flight fires and cabin burnthrough. According to an FAA fire research expert, issues facing this research include (1) the current high cost of ultra-fire-resistant polymers; (2) difficulties in producing ultra-fire-resistant polymers with low to moderate processing temperatures, good strength and toughness, and colorability and colorfastness; and (3) gaps in understanding the relationships between material properties and fire performance and between chemical composition and fire performance, as well as scaling relationships and fundamental fire-resistance mechanisms. In addition, once the materials are developed and tested, getting them produced economically and installed in aircraft will become an issue. It is expected that such new materials with ultra fire resistance would be more expensive to produce and that the market for such materials would be uncertain.

Because of the fire danger following a commercial airplane crash, having airport rescue and fire-fighting operations available can improve the chances of survival for the people involved. Most airplane accidents occur during takeoff or landing at the airport or in the surrounding community. A fire outside the airplane, with its tremendous heat, may take only a few minutes to burn through the airplane’s outside shell. According to FAA, firefighters are responsible for creating an escape path by spraying water and chemicals on the fire to allow the passengers and crew to evacuate the airplane. Firefighters use one or more trucks to extinguish external fires, often at great personal risk, and use hand-held attack lines when attempting to put out fires within the airplane fuselage. (See fig. 10.) Fires within the fuselage are considered difficult to control with existing equipment and procedures because they involve complex conditions, such as smoke-laden toxic gases and high temperatures in the passenger cabin.

FAA has taken actions to control both internal and external postcrash fires, including requiring major airports to have airport rescue and fire-fighting operations. In 1972, FAA first proposed regulations to ensure that major airports have a minimal level of airport rescue and fire-fighting operations. Some changes to these regulations were made in 1988. The regulations establish, among other things, equipment standards, annual testing requirements for response times, and operating procedures. The requirements depend on both the size of the airport and the resources the locality has agreed to make available as needed.
In 1997, FAA compared airport rescue and fire-fighting missions and standards for civilian airports with DOD’s for defense installations and reported that DOD’s requirements were not applicable to civilian airports. In 1988, and again in 1998, Transport Canada Civil Aviation also studied its rescue and fire-fighting operations and concluded that the expenditure of resources for such unlikely occurrences was difficult to justify from a benefit-cost perspective. This conclusion highlighted the conflict between safety and cost in attempting to define rescue and fire-fighting requirements. A coalition of union organizations and others concerned about aviation safety released a report critical of FAA’s standards and operational regulations in 1999. According to the report, FAA’s airport rescue and fire-fighting regulations were outdated and inadequate.

In 2002, FAA incorporated measures recommended by NTSB into FAA’s Aeronautical Information Manual: Official Guide to Basic Flight Information and Air Traffic Control Procedures. These measures (1) designate a radio frequency at most major airports to allow direct communication between airport rescue and fire-fighting personnel and flight crewmembers in the event of an emergency and (2) specify a universal set of hand signals for use when radio communication is lost.

In March 2001, FAA responded to the reports criticizing its airport rescue and fire-fighting standards by tasking its Aviation Rule-making Advisory Committee to review the agency’s rescue and fire-fighting requirements to identify measures that could be added, modified, or deleted. In 2003, the committee is expected to propose requirements for the number of trucks, the amount of fire extinguishing agent, vehicle response times, and staffing at airports and to publish its findings in a notice of proposed rule-making. Depending on the results of this FAA review, additional resources may be needed at some airports. The overall cost of improving airport rescue and fire-fighting response capabilities could be a significant barrier to the further development of regulations. For example, some in the aviation industry are concerned about the costs of extending requirements to smaller airports and of appropriately equipping all airports with resources. According to FAA, extending federal safety requirements to some smaller airports would cost at least $2 million at each airport initially and $1 million annually thereafter.

This appendix presents information on the background and status of potential advancements in evacuation safety that we identified, including the following: improved passenger safety briefings; exit seat briefings; photo-luminescent floor track marking; crewmember safety and evacuation training; acoustic attraction signals; exit slide testing; overwing exit doors; evacuation procedures for very large transport aircraft; and personal flotation devices.

Federal regulations require that passengers receive an oral briefing prior to takeoff on safety aspects of the upcoming flight. FAA also requires that oral briefings be supplemented with printed safety briefing cards that pertain only to that make and model of airplane and are consistent with the air carrier’s procedures. These two safety measures must include information on smoking, the location and operation of emergency exits, seat belts, compliance with signs, and the location and use of flotation devices.
In addition, if the flight operates above 25,000 feet mean sea level, the briefing and cards must include information on the emergency use of oxygen. FAA published an Advisory Circular in March 1991 to guide air carriers’ development of oral safety briefings and cards. Primarily, the circular defines the material that must be covered and suggests material that FAA believes should be covered. The circular also discusses the difficulty in motivating passengers to attend to the safety information and suggests making the oral briefing and safety cards as attractive and interesting as possible to increase passengers’ attention. The Advisory Circular suggests, for example, that flight attendants be animated, speak clearly and slowly, and maintain eye contact with the passengers. Multicolored safety cards with pictures and drawings should be used over black and white cards. Finally, the circular suggests the use of a recorded videotape briefing because it ensures a complete briefing with good diction and allows for additional visual information to be presented to the passengers. (See fig. 11.) Despite efforts to improve passengers’ attention to safety information, a large percentage of passengers continue to ignore preflight safety briefings and safety cards, according to a study NTSB conducted in 1999. Of 457 passengers polled, 54 percent (247) reported that they had not watched the entire briefing because they had seen it before. An additional 70 passengers indicated that the briefing provided common knowledge and therefore there was no need to watch it. Of 431 passengers who answered a question about whether they had read the safety card, 68 percent (293) indicated that they had not, many of them stating that they had read safety cards on previous flights. Safety briefings and cards serve an important safety purpose for both passengers and crew. They are intended to prepare passengers for an emergency by providing them with information about the location and operation of exits and emergency equipment that they may have to operate—and whose location and operation may differ from one airplane to the next. Well-briefed passengers will be better prepared in an emergency, thereby increasing their chances of surviving and lessening their dependence on the crew for assistance. In its emergency evacuation study, NTSB recommended that FAA instruct airlines to “conduct research and explore creative and effective methods that use state-of-the-art technology to convey safety information to passengers.” NTSB further recommended “the presented information include a demonstration of all emergency evacuation procedures, such as how to open the emergency exits and exit the aircraft, including how to use the slides.” That research found that passengers often view safety briefings and cards as uninteresting and the information as intuitive. FAA has requested that commercial carriers explore different ways to present the materials to their passengers, adding that more should be done to educate passengers about what to do after an accident has occurred. Passengers seated in an exit row may be called on to assist in an evacuation. Upon a crewmember’s command or a personal assessment of danger, these passengers must decide if their exit is safe to use and then open their exit hatch or door for use during an evacuation. In October 1990, FAA required airlines to actively screen passengers occupying exit seats for “suitability” and to administer one-on-one briefings on their responsibilities. 
This rule does not require specific training for exit seat occupants, but it does require that the occupants be duly informed of their distinct obligations. According to NTSB, preflight briefings of passengers in exit rows could contribute positively to a passenger evacuation. In a 1999 study, NTSB found that the individual briefings given to passengers who occupy exit seats have a positive effect on the outcome of an aircraft evacuation. The study also found that as a result of the individualized briefings, flight attendants were better able to assess the suitability of the passengers seated in the exit seats. According to FAA’s Flight Standards Handbook Bulletin for Air Transportation, several U.S. airlines have identified specific cabin crewmembers to perform “structured personal conversations or briefings,” designed to equip and prepare passengers in exit seats beyond the general passenger briefing. Also, the majority of air carriers have procedures to assist crewmembers with screening passengers seated in exit rows. FAA’s 1990 rule requires that passengers seated in exit rows be provided with information cards that detail the actions to be taken in the case of an emergency. However, individual exit row briefings, such as those recommended by NTSB, are not required. Also included on the information cards are provisions for a passenger who does not wish to be seated in the exit row to be reseated. Additionally, carriers are required to assess the exit row passenger’s ability to carry out the required functions. The extent of discussion with exit row passengers depends on each airline’s policy.

In June 1983, an Air Canada DC-9 flight from Dallas to Toronto was cruising at 33,000 feet when the crew reported a lavatory fire. An emergency was declared, and the aircraft made a successful emergency landing at the Cincinnati Northern Kentucky International Airport. The crew initiated an evacuation, but only half of the 46 persons aboard were able to escape before being overcome by smoke and fire. In its investigation of this accident, NTSB learned that many of the 23 passengers who died might have benefited from floor track lighting. As a result, NTSB recommended that airliners be equipped with floor-level escape markings. FAA determined that floor lighting could improve the evacuation rate by 20 percent under certain conditions, and FAA now requires all airliners to have a row of lights along the floor to guide passengers to the exit should visibility be reduced by smoke. On transport-category aircraft, these escape markings, called floor proximity marking systems, typically consist primarily of small electric lights spaced at intervals on the floor or mounted on the seat assemblies, along the aisle. The requirement for electricity to power these systems has made them vulnerable to a variety of problems, including battery and wiring failures, burned-out light bulbs, and physical disruption caused by vibration, passenger traffic, galley cart strikes, and hull breakage in accidents. Attempts to overcome these problems have led to the proposal that nonelectric, photo-luminescent (glow-in-the-dark) materials be used in the construction of floor proximity marking systems. The elements of these new systems are “charged” by the normal airplane passenger cabin lighting, including the sunlight that enters the cabin when the window shades are open during daylight hours. (See fig. 12.) Floor track marking using photo-luminescent materials is currently available but not required for U.S. commercial airliners. 
Performance demonstrations of photo-luminescent technology have found that strontium aluminate photo-luminescent marking systems can be effective in providing the guidance for egress that floor proximity marking systems are intended to achieve. According to industry and government officials, such photo-luminescent marking systems are also cheaper to install than electric light systems and require little to no maintenance. Moreover, photo-luminescent technology weighs about 15 to 20 pounds less than electric light systems and, unlike the electric systems, illuminates both sides of the aisle, creating a pathway to the exits. Photo-luminescent floor track marking technology is mature and is currently being used by a small number of operators, mostly in Europe. In the United States, Southwest Airlines has equipped its entire fleet with the photo-luminescent system. However, the light emitted from photo-luminescent materials is not as bright as the light from electrically operated systems. Additionally, the photo-luminescent materials are not as effective when they have not been exposed to light for an extended period of time, as after a long overseas nighttime flight. The estimated retail price of an entire system, not including the installation costs, is $5,000 per airplane.

FAA requires crewmembers to attend annual training to demonstrate their competency in emergency procedures. They have to be knowledgeable and efficient while exercising good judgment. Crewmembers must know their own duties and responsibilities during an evacuation and be familiar with those of their fellow workers so that they can take over for others if necessary. The requirements for emergency evacuation training and demonstrations were first established in 1965. Operators were required to conduct full-scale evacuation demonstrations, include crewmembers in the demonstrations, and complete the demonstrations in 2 minutes using 50 percent of the exits. The purpose of the demonstrations was to test the crewmembers’ ability to execute established emergency evacuation procedures and to ensure the realistic assignment of functions to the crew. A full-scale demonstration was required for each type and model of airplane when it first started passenger-carrying operations, increased its passenger seating capacity by 5 percent or more, or underwent a major change in the cabin interior that would affect an emergency evacuation. Subsequently, the time allowed to evacuate the cabin during these tests was reduced to 90 seconds.

The aviation community took steps in the 1990s to develop a program called Crew Resource Management that focuses on overall improvements in crewmembers’ performance and flight safety strategies, including those for evacuation. FAA officials told us that they plan to emphasize the importance of effective communication between crewmembers and are considering updating a related Advisory Circular. Effective communication between cockpit and cabin crew is particularly important with the added security precautions being taken after September 11, 2001, including the locking of the cockpit door during flight. The traditional training initiative now has an advanced curriculum, Advanced Crew Resource Management. According to FAA, this comprehensive implementation package includes crew resource management procedures, training for instructors and evaluators, training for crewmembers, a standardized assessment of the crew’s performance, and an ongoing implementation process. 
This advanced training was designed and developed through a collaborative effort between the airline and research communities. FAA considers training to be an ongoing development process that provides airlines with unique crew resource management solutions tailored to their operational demands. The design of crew resource management procedures is based on principles that require an emphasis on the airline’s specific operational environment. The procedures were developed to emphasize these crew resource management elements by incorporating them into standard operating procedures for normal as well as abnormal and emergency flight situations. Because commercial airliner accidents are rare, crewmembers must rely on their initial and recurrent training to guide their actions during an emergency. Even in light of advances and initiatives in evacuation technology, such as slides and slide life rafts, crewmembers must still assume a critical role in ensuring the safe evacuation of their passengers. Airline operators have indicated that it is very costly for them to pull large numbers of crewmembers off-line to participate in training sessions. FAA officials told us that improving flight and cabin crew communication holds promise for ensuring the evacuation of passengers during an emergency. To improve this communication and coordination between flight and cabin crew, FAA plans to update the related Advisory Circular, oversee training, and charge FAA inspectors with monitoring air carriers during flights to see that improvements are being implemented. In addition, FAA is enhancing its guidance to air carriers on preflight briefings for flight crews to sharpen their responses to emergency situations and mitigate passengers’ confusion. FAA expects this guidance to bolster the use and quality of preflight briefings between pilots and flight attendants on security, communication, and emergency procedures. According to FAA, these briefings have been shown to greatly improve the flight crew’s safety mind-set and to enhance communication. Acoustic attraction signals make sounds to help people locate the doors in smoke, darkness, or when lights and exit signs are obscured. When activated, the devices are intended to help people to determine the direction and approximate distance of the sound—and of the door. Examples of audio attraction signals include recorded speech sounds, broadband multifrequency sounds (“white noise”), or alarm bells. Research to determine if acoustic attraction signals can be useful in aircraft evacuation has included, for example, FAA’s Civil Aeromedical Institute testing of recorded speech sounds in varying pitches, using phrases such as “This way out,” “This way,” and “Exit here.” Researchers at the University of Leeds developed Localizer Directional Sound beacons, which combine broadband, multifrequency “white noise” of between 40Hz and 20kHz with an alerting sound of at least one other frequency, according to the inventor (see fig. 13). The FAA study noted above of acoustic attraction signals found that in the absence of recorded speech signals, the majority of participants evacuating a low-light-level, vision-obscured cabin will head for the front exit or will follow their neighbors. In contrast, participants exposed to recorded speech sounds will select additional exits, even those in the rear of the airplane. 
During aircraft trials conducted by Cranfield University and University of Greenwich researchers, tests of directional sound beacons found that under cabin smoke conditions, exits were used most efficiently when the cabin crew gave directions and the directional sound beacons were activated. With this combination, the distribution of passengers to the available exits was better than with cabin crew directions alone, sound beacons alone, no cabin crew directions, or no sound beacons. Researchers found that passengers were able to identify and move toward the closest sound source inside the airplane cabin and to distinguish between two closely spaced loudspeakers. However, in 2001, Airbus conducted several evacuation test trials of audio attraction signals using an A340 aircraft. According to Airbus, the acoustic attraction signals did not enhance passengers’ orientation and, overall, did not contribute to passengers’ safety.

While acoustic attraction signals are currently available, further research is needed to determine if their use is warranted on commercial airliners. FAA, Transport Canada Civil Aviation, and the British Civil Aviation Authority do not currently mandate the use of acoustic attraction signals. The United Kingdom’s Air Accidents Investigation Branch made a recommendation after the fatal Boeing 737 accident at Manchester International Airport in 1985 that research be undertaken to assess the viability of audio attraction signals and other evacuation techniques to assist passengers impaired by smoke and toxic or irritant gases. The Civil Aviation Authority accepted the recommendation and sponsored research at Cranfield University; however, it concluded from the research results that the likely benefit of the technology would be so small that no further action should be taken, and the recommendation was closed in 1992. The French Direction Générale de l’Aviation Civile funded aircraft evacuation trials using directional sound beacons in November 2002, with oversight by the European Joint Aviation Authorities. The trials were conducted at Cranfield University’s evacuation simulator with British Airways cabin crew and examined eight trial evacuations by two groups of ‘passengers.’ The study surveyed the participants’ views on various aspects of their evacuation experience and measured the overall time to evacuate. The speed of evacuation was found to be biased by the knowledge passengers gained in the four successive trials, and by variations in the number of passengers participating on the 2 days (155 and 181). Each group’s four trials also involved a different combination of crew directions and sound beacon use. The study concluded that the insufficient number of test sessions further contributed to bias in the results and that further research would be needed to determine whether the devices help to speed overall evacuation.

Further research and testing are needed before acoustic attraction signals can be considered for widespread airline use. The signals may have drawbacks that would need to be addressed. For example, the Civil Aviation Authority found that placing an audio signal in the bulkhead might disorient or confuse the first few passengers who have to pass and then move away from the sound source to reach the exit. Such hesitation slowed passengers’ evacuation during testing. 
The researchers at Cranfield University trials in 1990 concluded that an acoustic sound signal did not improve evacuation times by a statistically significant amount, suggesting that the device might not be cost-effective. Smoke hoods are designed to provide the user with breathable, filtered air in an environment of smoke and toxic gases that would otherwise be incapacitating. More people die from smoke and toxic gases than from fire after an air crash. Because only a few breaths of the dense, toxic smoke typically found in aircraft fires can render passengers unconscious and prevent their evacuation, the wider use of smoke hoods has been investigated as a means of preventing passengers from being overcome by smoke and of giving them enough breathable air to evacuate. However, some studies have found that smoke hoods are only effective in certain types of fires and in some cases may slow the evacuation of cabin occupants. As shown in figure 14, a filter smoke hood can be a transparent bag worn over the head that fits snugly at the neck and is coated with fire-retardant material; it has a filter but no independent oxygen source and can provide breathable air by removing some toxic contaminants from the air for a period ranging from several minutes to 15 minutes, depending on the severity and type of air contamination. The hood has a filter to remove carbon monoxide—a main direct cause of death in fire-related commercial airplane accidents, as well as hydrogen cyanide—another common cause of death, sometimes from incapacitation that can prevent evacuation. Hoods also filter carbon dioxide, chlorine, ammonia, acid gases such as hydrogen chloride and hydrogen sulfide, and various hydrocarbons, alcohols, and other solvents. Some hoods also include a filter to block particulate matter. One challenge is where to place the hoods in a highly accessible location near each seat. Certain smoke hoods have been shown to filter out many contaminants typically found in smoke from an airplane cabin fire and to provide some temporary head protection from the heat of fire. In a full-scale FAA test of cabin burnthrough, toxic gases became the driving factor determining survivability in the forward cabin, reaching lethal levels minutes before the smoke and temperature rose to unsurvivable levels. A collaborative effort to estimate the potential benefits of smoke hoods was undertaken in 1986 by the British Civil Aviation Authority (CAA), the Federal Aviation Administration, the Direction Générale de l’Aviation Civile (France) and Transport Canada Civil Aviation. The resulting 1987 study examined the 20 accidents where sufficient data was available out of 74 fire-related accidents worldwide from 1966 to 1985. The results were sensitive to assumptions regarding extent of use and delays due to putting on smoke hoods. The study concluded that smoke hoods could significantly extend the time available to evacuate an aircraft and would have saved approximately 179 lives in the 20 accidents studied, assuming no delay in donning smoke hoods. Assuming a 10 percent reduction in the evacuation rate due to smoke hood use would have resulted in an estimated 145 lives saved in the 20 accidents with adequate data. A 15 second delay in donning the hoods would have saved an estimated 97 lives in the 20 accidents. When the likelihood of use of smoke hoods was included in the analysis for each accident, the total net benefit was estimated at 134 lives saved in the 20 accidents. 
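As a quick way to keep these sensitivity results straight, the snippet below simply tabulates the study's 20-accident estimates as reported above and expresses each scenario relative to the no-delay case; the relative percentages are computed here purely for illustration and are not part of the 1987 study.

```python
# Estimated lives saved in the 20 accidents with sufficient data, as reported
# in the 1987 joint study cited above. The percentages relative to the
# no-delay case are computed here for illustration only.
estimated_lives_saved = {
    "no delay in donning hoods":          179,
    "10 percent slower evacuation rate":  145,
    "15-second donning delay":             97,
    "adjusted for likelihood of use":     134,
}

baseline = estimated_lives_saved["no delay in donning hoods"]
for scenario, lives in estimated_lives_saved.items():
    print(f"{scenario:36} {lives:4d} lives ({lives / baseline:.0%} of the no-delay estimate)")
```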
The 1987 study also estimated that an additional 228 lives would have been saved in the 54 accidents where less data was available, assuming no delay in evacuation.

The U.S. Air Force and a major manufacturer are developing a drop-down smoke hood with oxygen. Because current oxygen masks in airplanes are not airtight around the mouth, they provide little protection from toxic gases and smoke in an in-flight fire. To provide protection from these hazards, as well as from decompression and postcrash fire and smoke, the Air Force’s drop-down smoke hood with oxygen uses the airplane’s existing oxygen system and can fit into the overhead bin of a commercial airliner where the oxygen mask is normally stowed. This smoke hood is intended to replace current oxygen masks but could also be separated from the oxygen source in a crash to provide time to evacuate.

Smoke hoods are currently available and produced by several manufacturers; however, not all smoke hoods filter carbon monoxide. They are in use on many military and private aircraft, as well as in buildings. An individually purchased filter smoke hood costs about $70 or more, but according to one manufacturer, bulk-order costs have declined to about $40 per hood. In addition, the manufacturer estimated that hoods cost about $2 a year to install and $5 a year to maintain. They weigh about a pound or less and have to be replaced about every 5 years. Furthermore, airlines could incur additional replacement costs due to theft if smoke hoods were placed near passenger seats in commercial aircraft.

Neither the British CAA, the FAA, the French DGAC, nor Transport Canada Civil Aviation has chosen to require smoke hoods. The British Air Accidents Investigation Branch recommended that smoke hoods be considered for aircraft after the 1985 Manchester accident, in which 48 of 55 passengers died on a runway from an engine fire before takeoff, mainly from smoke inhalation and the effects of hydrogen cyanide. Additionally, a U.K. parliamentary committee recommended research into smoke hoods in 1999, and the European Transport Safety Council, an international nongovernmental organization whose mission is to provide impartial advice on transportation safety to the European Commission and Parliaments, recommended in 1997 that smoke hoods be provided in all commercial aircraft. Canada’s Transportation Safety Board has taken no official position on smoke hoods, but has noted a deficiency in cabin safety in this area and recommended further evaluation of voluntary passenger use.

Although smoke hoods are currently available, they remain controversial. Passengers are allowed to bring filter-type smoke hoods on an airplane, but FAA is not considering requiring airlines to provide smoke hoods for passengers. The debate over whether smoke hoods should be installed in aircraft revolves mainly around regulatory concerns that passengers will not be able to put smoke hoods on quickly in an emergency, that hoods might hinder visibility, and that any delay in putting on smoke hoods would slow down an evacuation. FAA’s and CAA’s evacuation experiments—to determine how long it takes for passengers to unpack and don smoke hoods and whether an evacuation would be slowed by their use—have reached opposite conclusions about the effects of smoke hoods on evacuation rates. The CAA has noted that delays in putting on smoke hoods by only one or two people could jeopardize the whole evacuation. 
An opposite view by some experts is that the gas and smoke-induced incapacitation of one or two passengers could also delay an evacuation. FAA believes that an evacuation might be hampered by passengers’ inability to quickly and effectively access and don smoke hoods, by competitive passenger behavior, and by a lack of passenger attentiveness during pre-flight safety briefings. FAA noted that smoke hoods can be difficult to access and use even by trained individuals. However, other experts have noted that smoke hoods might reduce panic and help make evacuations more orderly, that competitive behavior already occurs in seeking access to exits in a fire, and that passengers could learn smoke hood safety procedures in the pre-flight safety briefings in the same way they learn to use drop-down oxygen masks or flotation devices. The usefulness of smoke hoods varies across fire scenarios depending on assumptions about how fast hoods could be put on and how much time would be available to evacuate. One expert told us that the time needed to put on a smoke hood might not be important in several fire scenarios, such as an in-flight fire in which passengers are seeking temporary protection from smoke until the airplane lands and an evacuation can begin. In other scenarios—a ground evacuation or postcrash evacuation — some experts argue that passengers in back rows or far from an exit may have their exit path temporarily blocked as other passengers exit and, because of the delay in their evacuation, may have a greater need and more time available to don smoke hoods than passengers seated near usable exits. Exit slide systems are rarely used during their operational life span. However, when such a system is used, it may be under adverse crash conditions that make it important for the system to work as designed. To prevent injury to passengers and crew escaping through floor-level exits located more than 6 feet above the ground, assist devices (i.e., slides or slide-raft systems) are used. (See fig. 15.) The rapid deployment, inflation, and stability of evacuation slides are important to the effectiveness of an aircraft’s evacuation system, as was illustrated in the fatal ground collision of a Northwest Airlines DC-9 and a Northwest Airlines 727 in Romulus, Michigan, in December 1990. As a result of the collision, the DC-9 caught fire, but there were several slide problems that slowed the evacuation. For example, NTSB later found that the internal tailcone exit release handle was broken, thereby preventing the tailcone from releasing and the slide from deploying. Because of concerns about the operability of exit slides, NTSB recommended in 1974 that FAA improve its maintenance checks of exit slide operations. In 1983, FAA revised its exit slide requirements to specify criteria for resistance to water penetration and absorption, puncture strength, radiant heat resistance, and deployment as flotation platforms after ditching. All U.S. air carriers have an FAA-approved maintenance program for each type of airplane that they operate. These programs require that the components of an airplane’s emergency evacuation system, which includes the exit slides, be periodically inspected and serviced. An FAA principal maintenance inspector approves the air carrier’s maintenance program. 
According to NTSB, although most air carriers’ maintenance programs require that a percentage of emergency evacuation slides or slide rafts be tested for deployment, the percentage of required on-airplane deployments is generally very small. For example, NTSB found that American Airlines’ FAA-approved maintenance program for the A300 requires an on-airplane operational check of four slides or slide rafts per year. Delta Air Lines’ FAA-approved maintenance program for the L-1011 requires that Delta activate a full set of emergency exits and evacuation slides or slide rafts every 24 months. Under an FAA-approved waiver for its maintenance program, United is not required to deploy any slide on its 737 airplanes. NTSB also found that FAA allows American Airlines to include inadvertent and emergency evacuation deployments toward the accomplishment of its maintenance program; therefore, it is possible that American would not purposely deploy any slides or slide rafts on an A300 to comply with the deployment requirement during any given year. In addition, NTSB found that FAA also allows Delta Air Lines to include inadvertent and emergency evacuation deployments toward the accomplishment of its maintenance program.

NTSB holds that because inadvertent and emergency deployments do not occur in a controlled environment, problems with, or failures in, the system may be more difficult to identify and record, and personnel qualified to detect such failures may not be present. For example, in an inadvertent or emergency slide or slide raft deployment, observations on the amount of time it takes to inflate the slide or slide raft, and the pressure level of the slide or slide raft are not likely to be documented. For these reasons, a 1999 NTSB report said that FAA’s allowing these practices could potentially leave out significant details about the interaction of the slide or slide raft with the door or how well the crew follows its training mock-up procedures. Accordingly, in 1999, NTSB recommended that FAA stop allowing air carriers to count inadvertent and emergency deployments toward meeting their maintenance program requirement because conditions are not controlled and important information (on, for example, the interface between the airplane and the evacuation slide system, timing, durability, and stability) is not collected. The recommendation continues to be open at the NTSB. NTSB officials said they would be meeting to discuss this recommendation with FAA in the near future.

Additionally, NTSB recommended that FAA, for a 12-month period, require that all operators of transport-category aircraft demonstrate the on-airplane operation of all emergency evacuation systems (including the door-opening assist mechanisms and slide or slide raft deployment) on 10 percent of each type of airplane (at least one airplane per type) in their fleets. NTSB said that these demonstrations should be conducted on an airplane in a controlled environment so that qualified personnel can properly evaluate the entire evacuation system. NTSB indicated that the results of the demonstrations (including an explanation of the reasons for any failures) should be documented for each component of the system and should be reported to FAA. 
Prompted by a tragedy in which 57 of the 137 people on board a British Airtours B-737 were killed because passengers found exit doors difficult to access and operate, the British Civil Aviation Authority initiated a research program to explore changes to the design of the overwing exit (Type III) door. Trained crewmembers are expected to operate most of the emergency equipment on an airplane, including most floor-level exit doors. But overwing exit doors, termed “self-help exits,” are expected to be and will primarily be opened by passengers without formal training. NTSB reported that even when flight attendants are responsible for opening the overwing exit doors, passengers are likely to make the first attempt to open the overwing exit hatches because the flight attendants are not physically located near the overwing exits. There are now two basic types of overwing exit doors—the “self-help” doors that are manually removed inward and then stowed and the newer “swing out” doors that open outward on a hinge. According to NTSB, passengers continue to have problems removing the inward-opening exit door and stowing it properly. The manner in which the overwing exit is opened and how and where the hatch should be stowed is not intuitively obvious to passengers, nor is it easily or consistently depicted graphically. NTSB recently recommended to FAA that Type III overwing exits on newly manufactured aircraft be easy and intuitive to open and have automatic stowage out of the egress path. NTSB has indicated that the semiautomatic, fast-opening, Type III overwing exit hatch could give passengers additional evacuation time. Over-wing exit doors that “swing out” on hinges rather than requiring manual removal are currently available. The European Joint Aviation Authorities (JAA) has approved the installation of these outward-opening hinged doors on new-production aircraft in Europe. In addition, Boeing has redesigned the overwing exit door for its next-generation 737 series. This redesigned, hinged door has pressurized springs so that it essentially pops up and outward, out of the way, once its lever is pulled. The exit door handle was also redesigned and tested to ensure that anyone could operate the door using either single or double handgrips. Approximately 200 people who were unfamiliar with the new design and had never operated an overwing exit tested the outward-opening exit door. These tests found that the average adult could operate the door in an emergency. The design eliminates the problem of where to stow the exit hatch because the door moves up and out of the egress route. While the new swing-out doors are available, it will take some time for them to be widely used. Because of structural difficulties and cost, the new doors are not being considered for the existing fleet. For new-production airplanes, their use is mixed because JAA requires them in Europe for some newer Boeing 737s, but FAA does not require them in the United States. However, FAA will allow their use. As a result, some airlines are including the new doors on their new aircraft, while others are not. For example, Southwest Airlines has the new doors on its Boeing 737s. The extent to which other airlines and aircraft models will have the new doors installed remains to be seen and will likely depend on the cost of installation, the European market for the aircraft, and any additional costs to train flight attendants in its use. 
Airbus, a leading aircraft manufacturer, has begun building a family of A380 aircraft, also called Large Transport Aircraft (see fig. 16). Early versions of the A380, which is scheduled to begin flight tests in 2005 and enter commercial service in 2006, will have 482 to 524 seats. The A380-800 standard layout references 555 seats. Later larger configurations could accommodate up to 850 passengers. The A380 is designed to have 16 emergency doors and require 16 escape slides, compared with the 747, which requires 12. Later models of the A380 could have 18 emergency exits and escape slides. The advent of this type of Large Transport Aircraft is raising questions about how passengers will exit the aircraft in an emergency. The upper deck doorsill of the A380 will be approximately 30 feet above the ground, depending on the position (attitude) of the aircraft. According to an Airbus official responsible for exit slide design and operations, evacuation slides have to reach the ground at a safe angle even if the aircraft is tipped up; however, extra slide length is undesirable if the sill height is normal. Previously, regulations would have required slides only to touch the ground in the tip-up case, even if that meant introduction of relatively steep sliding surfaces. However, because of the sill height, passengers may hesitate before jumping and their hesitation may extend the total evacuation time. Because some passengers may be reluctant to leap onto the slide when they can see how far it is to the ground, the design concept of the A380 evacuation slides includes blinder walls at the exit and a curve in the slide to mask the distance to the ground. A next-generation evacuation system developed by Airbus and Goodrich called the “intelligent slide” is a possible solution to the problem of the Large Transport Aircraft’s slide length. The technology is not a part of the slide, but is connected to the slide through what is called a door management system composed of sensors. The “brains” of the technology will be located inside the forward exit door of the cabin, and the technology is designed to adjust the length of the slide according to the fuselage’s tipping angle to the ground. The longest upper-deck slide for an A380 could exceed 50 feet. The A380 slides are made of a nylon-based fabric that is coated with urethane or neoprene, and they are 10 percent lighter than most other slides on the market. They have to be packed tightly into small bundles at the foot of emergency exit doors and are required to be fully inflated in 6 seconds. Officials at Airbus noted that the slides are designed to withstand the radiant heat of a postimpact fire for 180 seconds, compared with the 90 seconds required by regulators. According to a Goodrich official, FAA will require Goodrich to conduct between 2,000 and 2,500 tests on the A380 slides to make sure they can accommodate a large number of passengers quickly and withstand wind, rain, and other weather conditions. The upper-level slides, which are wide enough for two people, have to enable the evacuation of 140 people per minute, according to Airbus officials. An issue to be resolved is whether a full-scale demonstration test will be required or whether a partial test using a certain number of passengers, supplemented by a computer simulation of an evacuation of 555 passengers, can effectively demonstrate an evacuation from this type of aircraft. 
Airbus officials told us that a full-scale demonstration could result in undesirable injuries to the participants and is therefore not the preferred choice. Officials at the Association of Flight Attendants have expressed concern that there has not been a full-scale evacuation demonstration involving the A380. They are concerned that computer modeling might not accurately reflect the human experience of jumping onto a slide from that height. In addition, they are concerned that other systems involved in emergency exiting, such as the communication systems, need to be tested under controlled conditions. As a result, they believe a full-scale demonstration under the current 90-second standard is necessary.

All commercial aircraft that fly over water more than 50 nautical miles from the nearest shore are required to be equipped with flotation devices for each occupant of the airplane. According to FAA, 44 of the 50 busiest U.S. airports are located within 5 miles of a significant body of water. In addition, life vests, seat cushions, life rafts, and exit slides may be used as flotation devices for water emergencies. FAA policies dictate that if personal flotation devices are installed beneath the passenger seats of an aircraft, the devices must be easily retrievable. Determinations of compliance with this requirement are based on the judgment of FAA as the certifying authority. FAA is conducting research and testing on the location and types of flotation devices used in aircraft. When it has completed this work, it is likely to provide additional guidance to ensure that the devices are easily retrievable and usable. FAA’s research is designed to analyze human performance factors, such as how much time passengers need to retrieve their vests, whether and how the cabin environment physically interferes with their efforts, and how physically capable passengers are of reaching their vests while seated and belted. FAA is reviewing four different life vest installation methods and has conducted tests on 137 human subjects. According to an early analysis of the data, certain physical installation features significantly affect both the ability of a typical passenger to retrieve an underseat life vest and the ease of retrieval. This work may lead to additional guidance on the location of personal flotation devices. FAA’s research may also indicate a need for additional guidance on the use of personal flotation devices. In a 1998 report on ditching aircraft and water survival, FAA found that airlines differed in their instructions to passengers on how to use personal flotation devices. For example, some airlines advise that passengers hold the cushions in front of their bodies, rest their chins on the cushions, wrap their arms around the cushions with their hands grasping the outside loops, and float vertically in the water. Other airlines suggest that passengers lie forward on the cushions, grasp and hold the loops beneath them, and float horizontally. FAA also reported that airlines’ flight attendant training programs differed in their instructions on how to don life vests and when to inflate them.

This appendix presents information on the background and status of potential advancements in general cabin occupant safety and health that we identified, including the following: advanced warnings of turbulence; preparations for in-flight medical emergencies; reductions in health risks to passengers with certain medical conditions, including deep vein thrombosis; and improved awareness of radiation exposure. 
This appendix also discusses occupational safety and health standards for the flight attendant workforce.

According to FAA, the leading cause of in-flight injuries for cabin occupants is turbulence. In June 1995, following two serious events involving turbulence, FAA issued a public advisory to airlines urging the use of seat belts at all times when passengers are seated, but concluded that the existing rules did not require strengthening. In May 2000, FAA instituted a public awareness campaign, called Turbulence Happens, to stress the importance of wearing safety belts to the flying public. Because of the potential for injury from unexpected turbulence, ongoing research is attempting to find ways to better identify areas of turbulence so that pilots can take corrective action to avoid it. In addition, FAA’s July 2003 draft strategic plan targets a 33 percent reduction in the number of turbulence injuries to cabin occupants by 2008—from an annual average of 15 injuries per year for fiscal years 2000 through 2002 to no more than 10 injuries per year. FAA is currently evaluating new airborne weather radar and other technologies to improve the timeliness of warnings to passengers and flight attendants about impending turbulence. For example, the Turbulence Product Development Team, within FAA’s Aviation Weather Research Program, has developed a system to measure turbulence and downlink the information in real time from commercial air carriers. The International Civil Aviation Organization has approved this system as an international standard. Ongoing research includes (1) detecting turbulence in flight and reporting its intensity to augment pilots’ reports, (2) detecting turbulence remotely from the ground or in the air using radar, (3) detecting turbulence remotely using LIDAR or the Global Positioning System’s constellation of satellites, and (4) forecasting the likelihood of turbulence over the continental United States during the next 12 hours. Prototypes of the in-flight detection system have been installed on 100 737-300s operated by United Airlines, and two other domestic air carriers have expressed an interest in using the prototype. FAA also plans to improve (1) training on standard operating procedures to reduce injuries from turbulence, (2) the dissemination of pilots’ reports of turbulence, and (3) the timeliness of weather forecasts to identify turbulent areas. Furthermore, FAA encourages and some airlines require passengers to keep their seatbelts fastened when seated to help avoid injuries from unexpected turbulence. Currently, pilots rely primarily on other pilots to report when and where (e.g., specific altitudes and routes) they have encountered turbulent conditions en route to their destinations; however, these reports do not accurately identify the location, time, and intensity of the turbulence. Further research and testing will be required to develop technology to accurately identify turbulence and to make the technology affordable to the airlines, which would ultimately bear the cost of upgrading their aircraft fleets.

The Aviation Medical Assistance Act of 1998 directed FAA to determine whether the current minimum requirements for air carriers’ emergency medical equipment and crewmember emergency medical training should be modified. In accordance with the act, FAA collected data for a year on in-flight deaths and near deaths and concluded that enhancements to medical kits and a requirement for airlines to carry automatic external defibrillators were warranted. 
Specifically, the agency found that these improvements would allow cabin crewmembers to deal with a broader range of in-flight emergencies. On April 12, 2001, FAA issued a final rule requiring air carriers to equip their aircraft with enhanced emergency medical kits and automatic external defibrillators by May 12, 2004. Most U.S. airlines have installed this equipment in advance of the deadline. In the future, new larger aircraft may require additional improvements to meet passengers’ medical needs. For example, new large transport aircraft, such as the Airbus A380, will have the capacity to carry about 555 people on long-distance flights. Some aviation safety experts are concerned that with the large number of passengers on these aircraft, the number of in-flight medical emergencies will increase and additional precautions for in-flight medical emergencies (e.g., dedicating an area for passengers who experience medical emergencies in flight) should be considered. Airbus has proposed a medical room in the cabin of its A380 as an option for its customers.

Passengers with certain medical conditions (e.g., heart and lung diseases) can be at higher risk of health-related complications from air travel than the general population. For example, passengers who have limited heart or lung function or have recently had surgery or a leg injury can be at greater risk of developing a condition known as deep vein thrombosis (DVT) or travelers’ thrombosis, in which blood clots can develop in the deep veins of the legs from extended periods of inactivity. Air travel has not been linked definitively to the development of DVT, but remaining seated for extended periods of time, whether in one’s home or on a long-distance flight, can cause blood to pool in the legs and increase the chances of developing DVT. In a small percentage of cases, the clots can break free and travel to the lungs, with fatal results. In addition, the reduced levels of oxygen available to passengers in-flight can have detrimental health effects on passengers with heart, circulatory, and respiratory disorders because lower levels of oxygen in the air produce lower levels of oxygen in the body—a condition known as hypoxia. Furthermore, changes in cabin pressure (primarily when the aircraft ascends and descends) can negatively affect ear, nose, and throat conditions and pose problems for those flying after certain types of surgery (e.g., abdominal, cardiac, and eye surgery). Information on the potential effects of air travel on passengers with certain medical conditions is available; however, additional research, such as on the potential relationship between DVT and air travel, is ongoing. The National Research Council, in a 2001 report on airliner cabin air quality, recommended, among other things, that FAA increase efforts to provide information on health issues related to air travel to crewmembers, passengers, and health professionals. According to FAA’s Federal Air Surgeon, since this recommendation was received, the agency has redoubled its efforts to make information and recommendations on air travel and medical issues available through its Web site www.cami.jccbi.gov/aam-400/PassengerHandS.htm. 
This site also includes links to the Web sites of other organizations with safety and health information for air travelers, such as the Aerospace Medical Association, the American Family Physician (Medical Advice for Commercial Air Travelers), and the Sinus Care Center (Ears, Altitude, and Airplane Travel), and videos on safety and health issues for pilots and air travelers. The Aerospace Medical Association’s Web site, http://www.asma.org/publication.html, includes guidance for physicians to use in advising passengers about the potential risks of flying based on their medical conditions, as well as information for passengers to use in determining whether air travel is advisable given their medical conditions. Furthermore, some airlines currently encourage passengers to do exercises while seated, to get up and walk around during long flights, or to do both to improve blood circulation; however, walking around the airplane can also put passengers at risk of injuries from unexpected turbulence. In addition, a prototype of a seat has been designed with embedded sensors, which record the movement of a passenger and send this information to the cabin crew for monitoring. The crew would then be able to track passengers seated for a long time and could suggest that these passengers exercise in their seats or walk in the cabin aisles to enhance circulation. While FAA’s Web site on passenger and pilot safety and health provides links to related Web sites and videos (e.g., cabin occupant safety and health issues), historically, the agency has not tracked who uses the site or how frequently it is used, and thus cannot gauge the traveling public’s awareness and use of this information. Agency officials told us that they plan to install a counter capability on its Civil Aerospace Medical Institute Web site by the end of August 2003 to track the number of visits to its aircrew and passenger health and safety Web site. The World Health Organization has initiated a study to help determine if a linkage exists between DVT and air travel. Further, FAA developed a brochure on DVT that has been distributed to aviation medical examiners and cited in the Federal Air Surgeon’s Bulletin. The brochure is aimed at passengers rather than airlines and suggests exercises that can be done to promote circulation.

Pilots, flight attendants, and passengers who fly frequently are exposed to cosmic radiation at higher levels (on a cumulative basis) than the average airline passenger and the general public living at or near sea level. This is because they routinely fly at high altitudes, which places them closer to outer space, the primary source of this radiation. High levels of radiation have been linked to an increased risk of cancer and potential harm to fetuses. The amount of radiation that flight attendants and frequent fliers are exposed to—referred to as the dose—depends on four primary factors: (1) the amount of time spent in flight; (2) the latitude of the flight—exposure increases at higher latitudes; for example, at the same altitude, radiation levels at the poles are about twice those at the equator; (3) the altitude of the flight—exposure is greater at high altitudes because the layer of protective atmosphere becomes thinner; and (4) solar activity—exposure is higher when solar activity increases, as it does every 11 years or so. Peak periods of solar activity, which can increase exposure to radiation by 10 to 20 times, are sometimes called solar storms or solar flares. 
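As a rough illustration of how these four factors combine, the sketch below multiplies annual flight hours by an assumed per-hour dose rate and by latitude and solar adjustment factors (altitude is folded into the assumed hourly rate). The rate and factors shown are illustrative assumptions only, not FAA figures; FAA's publicly available model, discussed next, performs this calculation rigorously for specific routes and altitudes.

```python
# Rough, illustrative estimate of annual cumulative in-flight radiation dose.
# The hourly dose rate and adjustment factors below are assumed values for
# illustration only; FAA's route-specific model should be used for real estimates.

def annual_dose_microsieverts(flight_hours: float,
                              rate_usv_per_hour: float = 5.0,  # assumed typical dose rate at cruise altitude
                              latitude_factor: float = 1.0,    # ~1.0 near the equator, roughly 2.0 near the poles (assumed)
                              solar_factor: float = 1.0) -> float:  # >1.0 during elevated solar activity (assumed)
    """Cumulative dose grows with time aloft, scaled by latitude and solar activity."""
    return flight_hours * rate_usv_per_hour * latitude_factor * solar_factor

# Example: a crewmember flying 700 block hours per year on high-latitude routes.
dose = annual_dose_microsieverts(700, latitude_factor=1.8)
print(f"Estimated annual dose: {dose:.0f} microsieverts (about {dose / 1000:.1f} millisieverts)")
```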
FAA’s Web site currently makes available guidance on radiation exposure levels and risks for flight and cabin crewmembers, as well as a system for calculating radiation doses from flying specific routes and specific altitudes. To increase crewmembers’ awareness of in-flight radiation exposure, FAA issued two Advisory Circulars. The first Advisory Circular, issued in 1990, provided information on (1) cosmic radiation and air shipments of radioactive material as sources of radiation exposure during air travel; (2) guidelines for exposure to radiation; (3) estimates of the amounts of radiation received on air carriers’ flights on various routes to and from, or within, the contiguous United States; and (4) examples of calculations for estimating health risks from exposure to radiation. The second Advisory Circular, issued in 1994, recommended training for crewmembers to inform them about in-flight radiation exposure and known associated health risks and to assist them in making informed decisions about their work on commercial air carriers. The circular provided a possible outline of courses, but left it to air carriers to gather the subject matter materials. To facilitate the monitoring of radiation exposure levels by airliner crewmembers and the public (e.g., frequent fliers), FAA has developed a computer model, which is publicly available via the agency’s Web site. This Web site also provides guidance and recommendations on limiting radiation exposure. However, it is unclear to what extent flight attendants, flight crews, and frequent fliers are aware of and use FAA’s Web site to track the radiation exposure levels they accrue from flying. Agency officials told us that they plan to install a counter capability on its Civil Aerospace Medical Institute Web site by the end of August 2003 to track the number of visits to its aircrew and passenger health and safety Web site. FAA also plans to issue an Advisory Circular by early next year, which incorporates the findings of a just-completed FAA report, “What Aircrews Should Know About Their Occupational Exposure to Ionizing Radiation.” This Advisory Circular will include recommended actions for aircrews and information on notifying aircrews of solar flare events.

While FAA provides guidance and recommendations on limiting the levels of cosmic radiation that flight attendants and pilots are exposed to, it has not developed any regulations. In contrast, the European Union issued a directive for workers in May 1996, including air carrier crewmembers (cabin and flight crews) and the general public, on basic safety and health protections against dangers arising from ionizing radiation. This directive set dose limits and required air carriers to (1) assess and monitor the exposure of all crewmembers to avoid exceeding exposure limits, (2) work with those individuals at risk of high exposure levels to adjust their work or flight schedules to reduce those levels, and (3) inform crewmembers of the health risks that their work involves from exposure to radiation. It also required airlines to work with female crewmembers, when they announce a pregnancy, to avoid exposing the fetus to harmful levels of radiation. This directive was binding for all European Union member states and became effective in May 2000. According to European safety officials, pregnant crewmembers are often given the option of an alternative job with the airline on the ground to avoid radiation exposure to their fetuses. 
Furthermore, when flight attendants and pilots reach recommended exposure limits, European air carriers work with them to limit or change their subsequent flights and destinations to minimize exposure levels for the balance of the year; some air carriers ground crewmembers who reach annual exposure limits.

In 1975, FAA assumed responsibility from the Occupational Safety and Health Administration (OSHA) for establishing safety and health standards for flight attendants. However, FAA has only recently begun to take action to provide this workforce with OSHA-like protections. For example, in August 2000, FAA and OSHA entered into a memorandum of understanding and issued a joint report in December 2000, which identified safety and health concerns for the flight attendant workforce and the extent to which OSHA-type standards could be used without compromising aviation safety. On September 29, 2001, the DOT Office of the Inspector General (DOT IG) reported that FAA had made little progress toward providing flight attendants with workplace protections and urged FAA to address the recommendations in the December 2000 report and move forward with setting safety and health standards for the flight attendant workforce. In April 2002, the DOT IG reported that FAA and OSHA had made no progress since it issued its report in September 2001. According to FAA officials, the joint FAA and OSHA effort was put on hold because of other priorities that arose in response to the events of September 11, 2001. FAA has not yet established occupational safety and health standards to protect the flight attendant workforce. FAA is conducting research and collecting data on flight attendants’ injuries and illnesses. On March 4, 2003, FAA announced the creation of a voluntary program for air carriers, called the Aviation Safety and Health Partnership Program. Through this program, the agency intends to enter into partnership agreements with participating air carriers, which will, at a minimum, make data on their employees’ injuries and illnesses available to FAA for collection and analysis. FAA will then establish an Aviation Safety and Health Program Aviation Rule-Making Committee to provide advice and recommendations. The committee is expected to develop the scope and core elements of the partnership program; review and analyze the data on employees’ injuries and illnesses; identify the scope and extent of systematic trends in employees’ injuries and illnesses; recommend remedies to FAA that use all current FAA protocols, including rule-making activities if warranted, to abate hazards to employees; and create any other advisory and oversight functions that FAA deems necessary. FAA plans to select members to provide a balance of viewpoints, interests, and expertise. The program preserves FAA’s complete and exclusive responsibility for determining whether proposed abatements of safety and health hazards would compromise or negatively affect aviation safety. FAA is also funding research through the National Institute for Occupational Safety and Health (NIOSH) to, among other things, determine the effects of flying on the reproductive health of flight attendants; much of this research has been completed. FAA plans to monitor cabin air quality on a selected number of flights, which will help it set standards for the flight attendant workforce. 
The Association of Flight Attendants has collected a large body of data on flight attendants’ injuries and illnesses, which it considers sufficient for use in establishing safety and health standards for its workforce. Officials from the association do not believe that FAA needs to collect additional data before starting the standard-setting process. The European Union has occupational safety and health standards in place to protect flight attendants, including standards for monitoring their levels of radiation exposure. An official from an international association of flight attendants told us that while flight attendants in Europe have concerns similar to those of flight attendants in the United States (e.g., concerns about air quality in airliner cabins), the European Union places a heavier emphasis on worker safety and health, including safety and health protections for flight attendants. The following illustrates how a cost analysis might be conducted on each of the potential advancements discussed in this report. Costs estimated through this analysis could then be weighed against the potential lives saved and injuries avoided from implementing the advancements. This methodology would allow advancements to be compared using comparable cost data that, when combined with similar analyses of effectiveness, would help decisionmakers determine which advancements would be most effective in saving lives and avoiding injuries, taking into account their costs. The methodology provides for developing a cost estimate despite significant uncertainties by making use of historical data (e.g., historical variations in fuel prices) and best engineering judgments (e.g., how much weight an advancement will add and how much it will cost to install, operate, and maintain). The methodology formally takes into account the major sources of uncertainty and from that information develops a range of cost estimates, including a most likely cost estimate. Through a common approach for analyzing costs, the methodology facilitates the development of comparable estimates. This methodology can be applied to advancements in various stages of development. Inflatable lap belts are designed to protect passengers from a fatal impact with the interior of the airplane, the most common cause of death in survivable accidents. Inflatable seat belts adapt advanced automobile technology to airplane seats in the form of seat belts with air bags embedded in them. Several hundred of these seatbelt airbags have been installed in commercial airliners in bulkhead rows. We calculated that requiring these belts on an average-sized airplane in the U.S. passenger fleet would be likely to cost from $98,000 to $198,000 and to average about $140,000 over the life of the airplane. On an annual basis, the cost would be likely to range from $8,000 to $17,000 and to average $12,000. We considered several factors to explain this range of possible costs. The installation price of these belts is subject to uncertainty because of their limited production to date. In addition, these belts add weight to an aircraft, resulting in additional fuel costs. Fuel costs depend on the price of jet fuel and on how many hours the average airplane operates, both subject to uncertainty. Table 5 lists the results of our cost analysis for an average-sized airplane in the U.S. fleet. 
According to our analysis, the life-cycle and annualized cost estimates in table 5 are influenced most by variations in jet fuel prices, followed by the average number of hours flown per year and the installation price of the belts. The cost per ticket is influenced most by variations in jet fuel prices, followed by the average number of hours flown per year, the number of aircraft in the U.S. fleet, and the number of passenger tickets issued. To analyze the cost of inflatable lap belts, we collected data on key cost variables from a variety of sources. Information on the belts’ installation price, annual maintenance and refurbishment costs, and added weight was obtained from belt manufacturers. Historical information on jet fuel prices, extra gallons of jet fuel consumed by a heavier airplane, average hours flown per year, average number of seats per airplane, number of airplanes in the U.S. fleet, and number of passenger tickets issued per year was obtained from FAA and DOT’s Office of Aviation Statistics. To account for variation in the values of these cost variables, we performed a Monte Carlo simulation. In this simulation, values were randomly drawn 10,000 times from probability distributions characterizing possible values for the number of seat belts per airplane, seat belt installation price, jet fuel price, number of passenger tickets, number of airplanes, and hours flown. This simulation resulted in forecasts of the life-cycle cost per airplane, the annualized cost per airplane, and the cost per ticket. Major assumptions in the cost analysis are described by probability distributions selected for these cost variables. For jet fuel prices, average number of hours flown per year, and average number of seats per airplane, historical data were matched against possible probability distributions. Mathematical tests were performed to find the best fit between each probability distribution and the data set’s distribution. For the installation price, number of passenger tickets, and number of airplanes, less information was available. For these variables, we selected probability distributions that are widely used by researchers. Table 6 lists the type of probability distribution and the relevant parameters of each distribution for the cost variables. In addition to those named above, Chuck Bausell, Helen Chung, Elizabeth Eisenstadt, David Ehrlich, Bert Japikse, Sarah Lynch, Sara Ann Moessbauer, and Anthony Patterson made key contributions to this report. 
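A minimal sketch of the kind of Monte Carlo cost simulation described above is shown below in Python. The distribution shapes, parameter values, and service-life assumption are placeholders chosen for illustration, not the figures behind tables 5 and 6; the point is the mechanics: draw each uncertain cost variable from its distribution many times, compute the life-cycle and annualized costs for each draw, and summarize the resulting range.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 10_000           # number of Monte Carlo draws
LIFE_YEARS = 25      # assumed airplane service life (placeholder)

# Placeholder distributions for the uncertain cost variables. The actual
# analysis fit distributions to historical data where available (e.g., jet
# fuel prices, hours flown) and used standard distributions elsewhere.
seats            = rng.triangular(120, 150, 180, N)                       # seat belts per airplane
install_price    = rng.uniform(300, 600, N)                               # dollars per belt installed
fuel_price       = rng.lognormal(mean=np.log(0.80), sigma=0.25, size=N)   # dollars per gallon
hours_per_year   = rng.normal(3000, 300, N)                               # block hours flown per year
extra_fuel_gph   = rng.uniform(0.5, 1.5, N)                               # extra gallons/hour from added weight
maintenance_year = rng.uniform(500, 2000, N)                              # dollars per airplane per year

# Cost model: one-time installation plus annual fuel and maintenance costs.
install_cost   = seats * install_price
annual_cost    = fuel_price * extra_fuel_gph * hours_per_year + maintenance_year
lifecycle_cost = install_cost + annual_cost * LIFE_YEARS
annualized     = lifecycle_cost / LIFE_YEARS

for name, values in [("life-cycle cost", lifecycle_cost), ("annualized cost", annualized)]:
    lo, mid, hi = np.percentile(values, [5, 50, 95])
    print(f"{name}: 5th={lo:,.0f}  median={mid:,.0f}  95th={hi:,.0f} (dollars per airplane)")
```

Sensitivity of the results to each input (for example, the dominance of jet fuel prices noted above) can then be gauged by correlating each variable's draws with the resulting cost estimates.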
Airline travel is one of the safest modes of public transportation in the United States. Furthermore, there are survivors in the majority of airliner crashes, according to the National Transportation Safety Board (NTSB). Additionally, more passengers might have survived if they had been better protected from the impact of the crash, smoke, or fire or better able to evacuate the airliner. As requested, GAO addressed (1) the regulatory actions that the Federal Aviation Administration (FAA) has taken and the technological and operational improvements, called advancements, that are available or are being developed to address common safety and health issues in large commercial airliner cabins and (2) the barriers, if any, that the United States faces in implementing such advancements. FAA has taken a number of regulatory actions over the past several decades to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins. GAO identified 18 completed actions, including those that require safer seats, cushions with better fire-blocking properties, better floor emergency lighting, and emergency medical kits. GAO also identified 28 advancements that show potential to further improve cabin safety and health. These advancements vary in their readiness for deployment. Fourteen are mature, currently available, and used in some airliners. Among these are inflatable lap seat belts, exit doors over the wings that swing out on hinges instead of requiring manual removal, and photoluminescent floor lighting. The other 14 advancements are in various stages of research, engineering, and development in the United States, Canada, or Europe. Several factors have slowed the implementation of airliner cabin safety and health advancements. For example, when advancements are ready for commercial use, factors that may hinder their implementation include the time it takes for (1) FAA to complete the rule-making process, (2) U.S. and foreign aviation authorities to resolve differences between their respective requirements, and (3) the airlines to adopt or install advancements after FAA has approved their use. When advancements are not ready for commercial use because they require further research, FAA's processes for setting research priorities and selecting research projects may not ensure that the limited federal funding for cabin safety and health research is allocated to the most critical and cost-effective projects. In particular, FAA does not obtain autopsy and survivor information from NTSB after it investigates a crash. This information could help FAA identify and target research to the primary causes of death and injury. In addition, FAA does not typically perform detailed analyses of the costs and effectiveness of potential cabin occupant safety and health advancements, which could help it identify and target research to the most cost-effective projects.
You are an expert at summarizing long articles. Proceed to summarize the following text: Woody biomass—small-diameter trees, branches, and the like—is generated as a result of timber-related activities in forests or on rangelands. Small-diameter trees may be removed to reduce the risk of wildland fire or to improve forest health, while treetops, branches, and limbs, collectively known as “slash,” are often the byproduct of traditional logging activities or thinning projects. Slash is generally removed from trees on site, before the logs are hauled for processing. It may be scattered on the ground and left to decay or to burn in a subsequent prescribed fire, or piled and either burned or hauled away for use or disposal. Woody biomass can be put to various uses. Among other uses, small-diameter logs can be sawed into structural lumber or can be chipped and processed to make pulp, the raw material from which paper, cardboard, and other products are made. Woody biomass also can be used for fuel. Various entities, including power plants, schools, pulp and paper mills, and others, burn woody biomass in boilers to turn water into steam, which can be used to make electricity, heat buildings, or provide heat for industrial processes. Federal, state, and local governments, as well as private organizations, are working to expand the use of woody biomass. Recent federal legislation contains provisions for woody biomass research and financial assistance. For example, the Consolidated Appropriations Act for Fiscal Year 2005 made up to $5 million in appropriations available for grants to create incentives for increased use of woody biomass from national forest lands. In response, the Forest Service awarded $4.4 million in such grants in fiscal year 2005. State and local governments also are encouraging the material’s use through grants, research, and technical assistance, while private corporations are researching new ways to use woody biomass, often in partnership with government and universities. The users in our review cited several factors contributing to their use of woody biomass. The primary factors they cited were financial incentives and benefits associated with its use, while other factors included having access to an affordable supply of woody biomass and environmental considerations. Financial incentives for, and benefits from, using woody biomass were the primary factors for its use among several users we reviewed. Three public entities—a state college in Nebraska, a state hospital in Georgia, and a rural school district in Montana—received financial grants covering the initial cost of the equipment that they needed to begin using woody biomass. The state college received a state grant of about $1 million in 1989, the Georgia hospital received about $2.5 million in state funds in the early 1980s, and the Montana school district received about $900,000 in federal funds in 2003 for the same purpose. A fourth user—a wood-fired power plant in California—received financial assistance in the form of tax-exempt state bonds to finance a portion of the plant’s construction. Three users in our review also received additional financial assistance, including subsidies and other payments that helped them continue their use of woody biomass. For example, the California power plant benefited from an artificially high price received for electricity during its first 10 years of operation, a result of California’s implementation of the federal Public Utility Regulatory Policies Act of 1978. 
Under the act, state regulators established rates for electricity from certain facilities producing it from renewable sources, including woody biomass. However, the initial prices set by California substantially exceeded market prices in some years, benefiting this user by increasing its profit margin. The Montana school district also received ongoing financial assistance from a nearby nonprofit organization. The nonprofit organization paid for the installation of a 1,000-ton wood fuel storage facility (capable of storing over a year’s supply of fuel) and financed the purchase of a year’s supply of fuel for the district, which the district repays as it uses the fuel. The third user, a Colorado power plant generating electricity by firing woody biomass with coal, realized ongoing financial benefits by selling renewable energy certificates associated with the electricity it generated from woody biomass. Energy cost savings also were a major incentive for using woody biomass among six users we reviewed. Two users—rural school districts in Pennsylvania and Montana—told us that they individually had saved about $50,000 and $60,000 in annual fuel costs by using wood instead of natural gas or fuel oil. Similarly, the state college in Nebraska typically saves about $120,000 to $150,000 annually, while the Georgia state hospital reported saving at least $150,000 in 1999, the last year for which information was available. And the two pulp and paper mills we reviewed each reported saving several million dollars annually by using wood rather than natural gas or fuel oil to generate steam heat for their processes. An affordable supply of woody biomass also facilitated its use, especially in areas where commercial activities such as logging or land clearing generated woody biomass as a byproduct. For example, the Nebraska state college was able to purchase woody biomass for an affordable price because logging companies harvested timber in the vicinity of the college, hauling the logs to sawmills and leaving their slash; the college paid only the cost to collect, chip, and transport the slash to the college for burning. Similarly, a Pennsylvania power plant obtains a portion of its wood fuel from land-clearing operations in which, according to a plant official, the developers clearing the land are required to dispose of the cleared material but are not allowed to burn or bury it. The plant official told us developers often are willing to partially subsidize removal and transportation costs in order to have an outlet for it. Thinning activities by area landowners also contributed to an affordable supply for a large pulp and paper mill in Mississippi. In this area, as in much of the southeastern United States, small-diameter trees are periodically thinned from forests to promote the growth of other trees, and traditionally have been sold for use in making pulp and paper. Further, according to mill officials, the level terrain and extensive road access typical of southeastern forests keep harvesting and hauling costs affordable—particularly in contrast to other parts of the country where steep terrain and limited road access may result in high harvesting and hauling costs. Three users cited potential environmental benefits, such as improved forest health and air quality, as prompting their use of woody biomass; other users told us about additional factors that increased their use of woody biomass. 
Two users—the Montana school district and the coal-fired power plant in Colorado—started using woody biomass in part because of concerns about forest health and the need to reduce hazardous fuels in forest land. They hoped that by providing a market for woody biomass, they could help stimulate thinning efforts. Another user, a Vermont power plant, began using woody biomass because of air-quality concerns. According to plant officials, the utilities that funded it were concerned about air quality and as a result chose to build a plant fired by wood instead of coal because wood emits lower amounts of pollutants. Other factors and business arrangements specific to individual users also made using woody biomass advantageous. For example, one user, which chips woody biomass for use as fuel in a nearby power plant, has an arrangement under which the plant purchases the user’s product at a price slightly higher than the cost the user incurred in obtaining and processing woody biomass, as long as the product is competitively priced and meets fuel-quality standards. Three users whose operations include chipping of woody biomass and other activities, such as commercial logging or sawmilling, also told us that having the operations within the same business is important because equipment and personnel costs can be shared between the chipping operation and the other activities. And some users helped offset the cost of obtaining and using woody biomass by selling byproducts resulting from its use. One pulp and paper mill in our review sold turpentine and other byproducts from the production of pulp and paper, while a wood-fired power plant sold steam extracted from its turbine to a nearby food-canning factory. Other byproducts sold by users in our review included ash used as a fertilizer and sawdust used by particle board plants. Users in our review experienced several factors that limited their use of woody biomass or made it more difficult or expensive to use. These factors included an insufficient supply of the material and increased costs related to equipment and maintenance. Seven users in our review told us they had difficulty obtaining a sufficient supply of woody biomass, echoing a concern raised by federal officials in our previous report. Two power plants reported to us that they were operating at about 60 percent of their capacity because they were unable to obtain sufficient woody biomass or other fuel for their plants. Officials at both plants told us that their shortages of wood were due at least in part to a shortage of nearby logging contractors, which prevented nearby landowners from carrying out all of the projects they wished to undertake. While officials at one plant attributed the plant’s shortage entirely to the lack of sufficient logging contractors, an official at the other plant stated that the lack of woody biomass from federal lands—particularly Forest Service lands—also was a significant problem. The lack of supply from federal lands was a commonly expressed concern among woody biomass users on the West Coast and in the Rocky Mountain region, with five of the seven users we reviewed in these regions telling us they had difficulty obtaining supply from federal lands. 
Users with problems obtaining supply from federal lands generally expressed concern about the Forest Service’s ability to conduct projects generating woody biomass; in fact, two users expressed skepticism that the large amounts of woody biomass expected to result from widespread thinning activities will ever materialize. One official stated, “We keep hearing about this coming ‘wall of wood,’ but we haven’t seen any of it yet.” In response to these concerns, officials from both the Department of the Interior and the Forest Service told us that their agencies are seeking to increase the availability of woody biomass from federal lands. Several users in our review told us they incurred costs to purchase and install the equipment necessary to use woody biomass beyond the costs that would have been required for using fuel oil or natural gas. The cost of this equipment varied considerably among users, from about $385,000 for a school district to $15 million for a pulp and paper mill. Wood utilization also increased operation and maintenance costs for some users, in some cases because of problems associated with handling wood. During our visit to one facility, wood chips jammed on a conveyor belt, dumping wood chips over the side of the conveyor and requiring a maintenance crew member to clear the blockage manually. At the power plant mixing woody biomass with coal, an official told us that a wood blockage in the feed mechanism led to a fire in a coal-storage unit, requiring the plant to temporarily reduce its output of electricity and pay $9,000 to rechip its remaining wood. Other issues specific to individual users also decreased woody biomass use or increased costs for using the material. For example, the Vermont wood-fired power plant is required by the state to obtain 75 percent of its raw material by rail, in order to minimize truck traffic in a populated area. According to plant officials, shipping the material by rail is more expensive than shipping by truck and creates fuel supply problems because the railroad serving the plant is unreliable and inefficient and experiences regular derailments. Another power plant was required to obtain a new emissions permit in order to begin burning wood in its coal-fired system. Our findings offer several insights for promoting greater use of woody biomass. First, rather than helping to defray the costs of forest thinning, attempts to encourage the use of woody biomass may instead stimulate the use of other wood materials such as mill residues or commercial logging slash. Second, government activities may be more effective in stimulating woody biomass use if they take into account the extent to which a logging and milling infrastructure to collect and process forest materials is in place. And finally, the type of efforts employed to encourage woody biomass use may need to be tailored to the scale and nature of individual recipients’ use. Unless efforts to stimulate woody biomass utilization are focused on small-diameter trees and other material contributing to the risk of wildland fire, such efforts may simply increase the use of alternative wood materials (such as mill residues) or slash from commercial logging operations. In fact, several users told us that they prefer such materials because they are cheaper or easier to use than woody biomass. Indeed, an indirect attempt to stimulate woody biomass use by one Montana user in our review led to the increased use of available mill residues instead. 
The Forest Service provided grant funds to finance the Montana school district’s 2003 conversion to a wood heating system in order to stimulate the use of woody biomass in the area. As a condition of the grant, the agency required that at least 50 percent of the district’s fuel consist of woody biomass during the initial 2 years of the system’s operation. Officials told us that the district complied with the requirement for those 2 years, but for the 2005-2006 school year, the district chose to use less expensive wood residues from a nearby log-home builder. It should be noted that the use of mill residues is not entirely to the detriment of woody biomass. Using mill residues can facilitate woody biomass utilization by providing a market for the byproducts (such as sawdust) of industries using woody biomass directly; this, in turn, can enhance these users’ profitability and thereby improve their ability to continue using the material cost-effectively. In addition, the availability of both mill residues and woody biomass provides diversity of supply, allowing users to continue operations even if one source of supply is interrupted or becomes prohibitively expensive. Nevertheless, these indirect effects, even where present, may be insufficient to substantially influence the use of woody biomass. Mill residues aside, even those users that consumed material we define as woody biomass often used the tops and limbs from trees harvested for merchantable timber or other uses rather than small-diameter trees contributing to the problem of overstocked forests. Logging slash can be cheaper to obtain than small-diameter trees when it has been removed from the forest by commercial logging projects—which often leave slash piles at roadside “landings,” where trees are delimbed before being loaded onto trucks. Unless woody biomass users specifically need small-diameter logs—for use in sawing lumber, for example—they may find it cheaper to collect slash piled in roadside areas than to enter the forest to cut and remove small-diameter trees. Government activities may be more effective in stimulating woody biomass use if they take into account the extent to which a logging and milling infrastructure is in place in potential users’ locations. The availability of an affordable supply of woody biomass depends to a significant degree on the presence of a local logging and milling infrastructure to collect and process forest materials. Without a milling infrastructure, there may be little demand for forest materials, and without a logging infrastructure, there may be no way to obtain them. For example, an official with the Nebraska college in our review told us that the lack of a local logging infrastructure could jeopardize the college’s future woody biomass use. The college relied on slash from commercial loggers working nearby, but this official told us that the loggers were based in another state and the timber they were harvesting was hauled to sawmills over 100 miles away. According to the official, if more timber-harvesting projects were offered closer to the sawmills, these loggers would move their operations in order to reduce transportation costs—eliminating the nearby source of woody biomass available to the college. In contrast, users located near a milling and logging infrastructure are likely to have more readily available sources of woody biomass. 
One Montana official told us that woody biomass in the form of logging slash is plentiful in the Missoula area, which is home to numerous milling and logging activities, and that about 90 percent of this slash is burned because it has no market. The presence of such an infrastructure, however, may increase the availability of mill residues or other materials, potentially complicating efforts to promote woody biomass use by offering more attractive alternatives. Government activities may be more effective in stimulating woody biomass use if their efforts are tailored to the scale and nature of the users being targeted. Most of the large wood users we reviewed were primarily concerned about supply, and thus might benefit most from federal efforts to provide a predictable and stable supply of woody biomass. Such stability might come, for example, from long-term contracts signed under stewardship contracting authority, which allows contracts of up to 10 years. In fact, one company currently plans to build a $23 million woody biomass power plant in eastern Arizona, largely in response to a nearby stewardship project that is expected to treat 50,000 to 250,000 acres over 10 years. Similarly, officials of a South Carolina utility told us that the utility was planning to invest several million dollars in equipment that would allow a coal-fired power plant to burn woody biomass from thinning efforts in a nearby national forest. In both cases, the assurance of a long-term supply of woody biomass was a key factor in the companies’ willingness to invest in these efforts. In contrast, small users we reviewed did not express concerns about the availability of supply, in part because their consumption was relatively small. However, three of these users relied on external financing for their up-front costs to convert to woody biomass use. Such users—particularly small, rural school districts or other public facilities that may face difficulties raising the capital to pay needed conversion costs—might benefit most from financial assistance such as grants or loan guarantees to fund their initial conversion efforts. And as we noted in our previous report on woody biomass, several federal agencies, particularly the Forest Service, provide grants for woody biomass use. However, federal agencies must take care that their efforts to assist users are appropriately aligned with the agencies’ own interests and do not create unintended consequences. For example, while individual grant recipients might benefit from using woody biomass—through fuel cost savings, for example—benefits to the government, such as reduced thinning costs, are uncertain. Without such benefits, agency grants may simply increase outlays but not produce comparable savings in thinning costs. The agencies also risk adverse ecological consequences if their efforts to develop markets for woody biomass result in these markets inappropriately influencing land management decisions. As noted in our prior report on woody biomass, agency and nonagency officials cautioned that efforts to supply woody biomass in response to market demand rather than ecological necessity might result in inappropriate or excessive thinning. Drawing long-term conclusions from the experiences of users in our review must be done with care because (1) our review represents only a snapshot in time and a small number of woody biomass users and (2) changes in market conditions could have substantial effects on the options available to users and the materials they choose to consume. 
Even so, the variety of factors influencing woody biomass use among users in our review—including regulatory, geographic, market-based, and other factors—suggests that the federal government may be able to take many different approaches as it seeks to stimulate additional use of the material. Because these approaches have different costs, and likely will provide different returns in terms of defraying thinning expenses, it will be important to identify what kinds of mechanisms are most cost-effective in different circumstances. In doing so, it also will be important for the agencies to take into account the variation in different users’ needs and available resources, differences in regional markets and forest types, and the multitude of available alternatives to woody biomass. If federal agencies are to maximize the long-term impact of the millions of dollars being spent to stimulate woody biomass use, they will need to design approaches that take these elements into account rather than using boilerplate solutions. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or nazzaror@gao.gov. David P. Bixler, Lee Carroll, Steve Gaty, and Richard Johnson made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is placing greater emphasis on thinning vegetation on public lands to reduce the risk of wildland fire. To help defray the cost of thinning efforts, it also is seeking to stimulate a market for the resulting material, including the smaller trees, limbs, and brush--referred to as woody biomass--that traditionally have had little or no commercial value. As GAO has reported in the past, the increased use of woody biomass faces obstacles, including the high cost of harvesting and transporting it and an unpredictable supply in some locations. Nevertheless, some entities, such as schools and businesses, are utilizing the material, potentially offering insights for broadening its use. GAO agreed to (1) identify key factors facilitating the use of woody biomass among selected users, (2) identify challenges these users have faced in using woody biomass, and (3) discuss any insights that these findings may offer for promoting greater use of woody biomass. This testimony is based on GAO's report Natural Resources: Woody Biomass Users' Experiences Offer Insights for Government Efforts Aimed at Promoting Its Use (GAO-06-336). Financial incentives and benefits associated with using woody biomass were the primary factors facilitating its use among the 13 users GAO reviewed. Four users received financial assistance (such as state or federal grants) to begin their use of woody biomass, three received ongoing financial support related to its use, and several reported energy cost savings over fossil fuels. Using woody biomass also was attractive to some users because it was available, affordable, and environmentally beneficial. Several users GAO reviewed, however, cited challenges in using woody biomass, such as difficulty obtaining a sufficient supply of the material. For example, two power plants reported running at about 60 percent of capacity because they could not obtain enough material. Some users also reported that they had difficulty obtaining woody biomass from federal lands, instead relying on woody biomass from private lands or on alternatives such as sawmill residues. Some users also cited increased equipment and maintenance costs associated with using the material. The experiences of the 13 users offer several important insights for the federal government to consider as it attempts to promote greater use of woody biomass. First, if not appropriately designed, efforts to encourage its use may simply stimulate the use of sawmill residues or other alternative wood materials, which some users stated are cheaper or easier to use than woody biomass. Second, the lack of a local logging and milling infrastructure to collect and process forest materials may limit the availability of woody biomass; thus, government activities may be more effective in stimulating its use if they take into account the extent of infrastructure in place. Similarly, government activities such as awarding grants or supplying woody biomass may stimulate its use more effectively if they are tailored to the scale and nature of the targeted users. However, agencies must remain alert to potential unintended ecological consequences of their efforts, such as excessive thinning to meet demand for woody biomass.
You are an expert at summarizing long articles. Proceed to summarize the following text: The WIA Adult and Dislocated Worker Programs provide employment services to a wide range of participants. The Adult Program serves all individuals age 18 and older, and the Dislocated Worker Program serves individuals who have been or will be terminated or laid off from employment, among others. The Adult Program prioritizes certain services for recipients of public assistance and other low-income individuals when program funds are limited. To enable individuals to participate, both programs may offer supportive services such as transportation, childcare, housing, and needs-related payments under certain circumstances. WIA requires that the Adult and Dislocated Worker Programs and other federally funded employment and training programs provide services through one-stop centers—now called American Job Centers—so that jobseekers and employers can find assistance at a single location. DOL’s Employment and Training Administration administers the Adult and Dislocated Worker Programs and oversees their implementation, which is carried out by states and local areas. At the state level, the WIA Adult and Dislocated Worker Programs are administered by state workforce agencies. Each state has one or more local workforce investment areas, each governed by a WIB. WIBs select the entities that will operate American Job Centers, which provide most WIA services, and oversee the American Job Center network. WIA provides substantial flexibility to states and WIBs to determine how services are provided. WIA represented a fundamental shift from its predecessor program, the Job Training Partnership Act, by decreasing the focus on job training as the primary means to help adults and dislocated workers get a job. The Adult and Dislocated Worker Programs provide participants three types of services: Core services include basic services such as job searches and labor market information, and may be accessed with or without staff assistance. Intensive services include such activities as comprehensive assessment and case management, which require greater staff involvement. Intensive services are available to participants who are unable to obtain or retain employment after receiving at least one core service. Training services include such activities as occupational skills or on-the-job training. To be eligible for training services, participants must: (1) be unable to obtain or retain employment after receiving at least one intensive service, (2) be in need of training, and (3) have the skills and qualifications to successfully complete the training program, among other requirements. To assess participants’ skills and determine whether they need training, WIBs may require them to complete certain activities. We previously found that most WIBs required participants to complete skills assessments or gather information about the occupation for which they wanted training before entering a training program. DOL requires that participants eligible for training select approved training providers in consultation with case managers, but participants ultimately choose the training programs in which they participate. DOL also requires that training be directly linked to in-demand occupations, which DOL interprets to include both currently available jobs and occupations that are projected to grow in the future (20 C.F.R. § 663.440(c)). WIA also established performance measures for the programs, and states’ results on these measures are tied to financial sanctions and incentive funding. 
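To make the tiered service and training-eligibility rules above concrete, the following Python sketch shows how a case-management system might screen a participant for training services. The data fields, function name, and checks are hypothetical; they mirror the three training-eligibility conditions listed above, and a real determination would involve additional statutory requirements and case-manager judgment.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    """Hypothetical case record for a WIA Adult or Dislocated Worker participant."""
    received_core_service: bool = False
    received_intensive_service: bool = False
    employed_after_intensive: bool = False   # obtained or retained employment after intensive services
    needs_training: bool = False             # determined through assessment and case management
    qualified_for_program: bool = False      # skills and qualifications to complete the chosen program
    notes: list = field(default_factory=list)

def eligible_for_training(p: Participant) -> bool:
    """Simplified screen mirroring the eligibility conditions described above."""
    checks = {
        "received at least one intensive service": p.received_intensive_service,
        "unable to obtain or retain employment after intensive services": not p.employed_after_intensive,
        "determined to be in need of training": p.needs_training,
        "has the skills and qualifications to complete the program": p.qualified_for_program,
    }
    for condition, passed in checks.items():
        if not passed:
            p.notes.append(f"Not yet eligible for training: fails '{condition}'")
            return False
    return True

if __name__ == "__main__":
    participant = Participant(received_core_service=True, received_intensive_service=True,
                              employed_after_intensive=False, needs_training=True,
                              qualified_for_program=True)
    print("Training eligible:", eligible_for_training(participant))
    print("Notes:", participant.notes)
```

Under this simplification, a participant who found employment after intensive services, or who could not complete the chosen program, would remain in core or intensive services rather than move on to training.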
In addition, WIA requires states to use Unemployment Insurance (UI) wage records to track employment-related outcomes. In 2005, DOL began allowing states to request waivers to replace the WIA performance measures with a smaller set of common measures that focus on employment, retention, and earnings across multiple programs (see table 1). These measures do not include credential attainment and are calculated differently from the WIA measures. As of December 2013, a total of 48 states and territories, plus the District of Columbia, had obtained this waiver and used the common measures for the Adult and Dislocated Worker Programs. Although states and territories are no longer subject to financial sanctions or incentive funding for the credential attainment measure, DOL still requires them to report the number of training participants who earn credentials. In general, credential attainment improves workers’ labor market experience through higher earnings, greater mobility, and enhanced job security, according to DOL and research studies. We have previously raised concerns about the accuracy and comparability of DOL’s data on credential attainment because DOL’s guidance allowed states and local areas considerable flexibility in defining what constitutes a credential. In February 2002, we recommended that DOL more clearly define this term. Since that time, DOL issued guidance to clarify its definition of credential attainment. In addition, to verify that the data states report on credential attainment are accurate, DOL requires WIA programs to collect and retain documentation on participants’ credential attainment. In addition to the performance measures, WIA requires states to report a wide array of data that includes whether training participants find employment that relates to their training. A WIA participant’s employment is considered to be related to the training received if in the new job the participant uses “a substantial portion of the skills taught in the training.” However, DOL does not require that states collect and retain documentation for employment related to training to verify the accuracy of data they report. In a September 2011 report, DOL’s Office of Inspector General raised questions about the quality of DOL’s data on employment related to training. Specifically, the Office of Inspector General found that these data were “incomplete and unreliable” based on its review of the data DOL maintains in its WIA database. The Office of Inspector General recommended, among other things, that DOL provide guidance on the best methodology for reporting such data as well as provide oversight to ensure that states develop or identify best practices for increasing the rate of training-related employment. DOL agreed with these recommendations. Federal agencies that work in partnership with states and local areas to administer programs such as WIA must continually balance the competing objectives of collecting uniform performance data and giving program managers flexibility to meet local needs. Our prior work has found that federal agencies have considered key attributes of data quality for performance data, including: Completeness—the extent to which enough of the required data elements are collected from a sufficient portion of the target population or sample. Accuracy—the extent to which the data are free from significant errors. Consistency—the extent to which data are collected using the same procedures and definitions across collectors and times. 
Ease of use—how readily intended users can access data, aided by clear definitions, user-friendly software, and easy-to-use access procedures. Congress is currently considering legislation to reauthorize the Workforce Investment Act of 1998, which has been due for reauthorization since the end of fiscal year 2003. The Supporting Knowledge and Investing in Lifelong Skills (SKILLS) Act (H.R. 803), passed by the House, would establish both credential attainment and training-related employment as performance measures. The Workforce Investment Act of 2013 (S. 1356), introduced in the Senate and reported out by the Committee on Health, Education, Labor, and Pensions, would establish credential attainment as a performance measure, but not training-related employment. During the time period from program year 2006 through program year 2011, the total number of participants in WIA’s Adult and Dislocated Worker Programs increased significantly. Specifically, in the Adult Program, the number of participants increased from about 625,000 to about 1.25 million, and in the Dislocated Worker Program, the number of participants increased from about 272,000 to about 761,000. During this same time period, the number of participants who received training services also increased, but not as dramatically as the number of overall participants (see fig. 1). For example, the number of Dislocated Worker participants who received training increased from about 76,000 in program year 2006 to about 120,000 in program year 2011. Since the number of participants who received training services did not increase at the same rate as the number of participants who entered into WIA’s Adult and Dislocated Worker Programs, the percentage of participants who received training generally declined, according to DOL reports (see fig. 2). There are several reasons that may have contributed to the declining percentage of participants who received training. The significant increase in the overall number of participants is likely attributable to the economic downturn that began in December 2007, which led to a dramatic rise in unemployment, and to the subsequent infusion of additional funds to WIA programs from the American Recovery and Reinvestment Act of 2009 (Recovery Act). The Recovery Act provided $500 million for grants for the Adult Program and $1.25 billion for grants for the Dislocated Worker Program. By spring of 2009, DOL began allocating these funds to states to supplement existing WIA funds. DOL encouraged states to use Recovery Act funds to increase training in an effort to help Americans acquire new skills and return to work. DOL officials stated that the increase in demand for training services exceeded the increase in supply provided for by the Recovery Act, which may have led to shortages in training capacity that contributed to a relative decline in training. Another factor, according to DOL officials, is that some program participants had limited access to needs-based financial assistance and other supportive services, such as child care, which may have prevented them from entering WIA training services. Further, DOL noted that the preference of many program participants is for immediate employment rather than job training. Similarly, the percentage of training participants who earned credentials has also generally declined from program year 2006 through program year 2011 (see fig. 3). 
For example, DOL’s data show the percentage of those who earned credentials in the Adult Program dropped from about 74 percent in program year 2006 to about 58 percent in program year 2011. Likewise, the percentage of those in the Dislocated Worker Program who earned credentials dropped from about 75 percent in program year 2006 to about 63 percent in program year 2011. Despite the decline in the percentage of training participants who earned credentials, DOL reported that the total number of participants attaining credentials increased during this time period. For example, the number of training participants in the Dislocated Worker Program who attained credentials increased from about 48,000 in program year 2006 to about 79,000 in program year 2011. According to DOL officials, the percentage of training participants who earned credentials may have declined in part as a result of changes in the performance measures that states negotiated. In program year 2005, states began requesting waivers to replace the WIA performance measures with a set of common performance measures that do not include the employment and credential attainment measure for the Adult and Dislocated Worker Programs. Officials in three of the six states we reviewed said that after this request was approved, reporting data on credential attainment became a lower priority for them. Officials in one of these states also said they stopped collecting and reporting these data until DOL issued clarifying guidance in 2010 emphasizing the importance of credential attainment as a pathway to employment. DOL’s data on credential attainment also show that participants in the Dislocated Worker Program typically have higher credential attainment rates than participants in the Adult Program. DOL officials explained that the Dislocated Worker Program has a higher funding level that supports more training, and that participants in this program generally have longer work histories and more advanced education and so are more likely to enter into training programs that lead to credentials. In contrast, participants in the Adult Program are more likely to require training focused on remedial education and job readiness, which are less likely to result in credentials as defined by DOL. Of the training participants who attained credentials during program year 2011—approximately 89,000 in the Adult Program and 84,000 in the Dislocated Worker Program—about two-thirds in each program earned an occupational credential, such as a welding certificate or clinical medical assistant certificate (see fig. 4). The next two most common types of credentials attained by training participants were an occupational skills license, such as a license in nursing, and an associate’s degree. The fact that training participants attained occupational credentials at higher rates than longer-term academic degrees is consistent with DOL’s recommendation that states shorten training duration in an effort to increase credential attainment. In our December 2013 report, we found that in program year 2011, of those in the Adult Program who entered training, 75 percent spent 1 year or less receiving training services, while 25 percent spent more time. Similarly, for the Dislocated Worker Program, 65 percent of training participants spent 1 year or less receiving training services. 
According to officials from one local workforce investment board we contacted, all training programs offered through their training providers must lead to a credential and must be completed in 12 months or less. We found DOL’s data on training-related employment unreliable for our purposes based on our analysis of the data, an Office of Inspector General report, and data quality reports. We were not able to determine how many participants in the Adult or Dislocated Worker Programs obtained employment related to their training in program year 2011. For the Adult Program, we found that states reported data on 48 percent of training participants, but had missing data for the remaining 52 percent. For the Dislocated Worker Program, states reported data on 74 percent of training participants, but had missing data for the remaining 26 percent. Further, our analysis of the reported data showed wide variation among states regarding the percentage of participants who obtained training-related employment, raising questions about the data’s reliability (see table 2). Our findings are consistent with a September 2011 Office of Inspector General report, which found that DOL’s data on training-related employment were “incomplete and unreliable.” Specifically, the Inspector General reported that 5 of the 53 state workforce agencies it reviewed did not report any data and 12 state workforce agencies reported unreasonably high or abnormally low data on training-related employment. Further, DOL’s WIA data quality report for the third quarter of program year 2011 raised questions about training-related employment data for 26 states. For example, some states reported that none of their training participants secured training-related employment. Workforce officials we interviewed in four of six states said that collecting data on credential attainment can be resource-intensive primarily because it requires manually tracking the information. Unlike employment-related outcomes—which states can generally obtain through the state’s UI wage record system—credentials are not generally recorded in a central, automated data system. As a result, case managers must manually collect this information from various sources, including participants, training providers, and third-party credentialing organizations. DOL also requires documentation of credential attainment with a copy of a diploma, transcript, or other approved record in the participant’s case file. DOL monitors this requirement through its data validation process. The process of collecting and verifying a participant’s credential attainment generally entails one or more of the following steps: Contacting Participants. Workforce officials in most states we reviewed said they generally begin their efforts to determine credential attainment by attempting to contact training participants, though some are unresponsive or inaccessible. Several local officials noted that they use a variety of means, including phone, mail, email, and social media. Some training participants readily provide evidence of their credentials. For example, local officials in two states estimated that for about 70 percent of participants, credentials are fairly easy to verify. Other participants may be less responsive. Workforce officials in three states explained that participants who have already exited the program have little incentive to respond to their requests. Local officials in two states also noted that some participants relocate without providing updated contact information. 
Contacting Training Providers. Training providers are another potential source of credential information, though in some cases they may decline to share such information. If case managers cannot reach a participant, they generally contact the training providers to determine whether a credential was earned. However, workforce officials from three states noted that training providers often declined to provide this information, citing student privacy rights such as those established by the Family Educational Rights and Privacy Act of 1974, as amended (FERPA). Contacting Third-party Organizations. Third-party credentialing organizations represent an additional source of credential information. For some occupations, a license or certification is required before a person can be employed in that capacity, such as a licensed practical nurse. In these cases, third-party organizations, such as state regulatory bodies, issue credentials. Case managers can sometimes search licensing databases online to confirm credential attainment. Local officials from two states noted that such data are fairly easy to obtain because the information is generally centrally accessible. However, an official from another state said that third-party organizations do not always provide information on credential attainment before DOL’s reporting timeframes end. Because case managers may not always be able to track down the documentation needed to verify credential attainment, the actual number of participants who attain credentials may be underreported to DOL. For example, officials in one state we contacted said they believe their credential attainment rate should be about 65 percent, but the rate they actually report is about half of that. Despite such obstacles to verifying data on participant credentials, several workforce experts and officials noted the importance of collecting this information. Workforce experts from one national organization noted that credential attainment can demonstrate the value of the funds invested in training and show employers the value of workforce programs and their participants. Workforce experts from another national organization said that credential data could help officials determine which credentials are best aligned with good employment outcomes. Some employer groups also noted the value of credentials in some high-demand occupations, such as manufacturing and information technology. For example, representatives of employer groups in Illinois and Rhode Island said they value information technology and manufacturing credentials from certifying organizations because these programs prepare individuals to perform high-skill tasks. In 2010, DOL provided guidance to states to increase the quantity and quality of credentials attained and to clarify the definition of credential for reporting purposes. During early WIA implementation, we reported that the definition of credential varied within and across states. For example, some states strictly defined credential as a diploma from an accredited institution, and other states broadly defined credential to include certificates of job readiness or completion of a workshop. DOL issued guidance in 2006 that provided additional clarification on which credentials to report, but, according to some workforce officials and experts, allowed for some interpretation. In 2010, DOL issued guidance that defined “credential” as an umbrella term that can include a range of postsecondary degrees, diplomas, licenses, certificates, and certifications. 
DOL also clarified that credentials must show attainment of measurable technical or occupational skills necessary to obtain employment or advance within an occupation. For this reason, DOL specified that credentials related to remedial training, such as work-readiness certificates, would not be counted for the purposes of credential attainment. In addition to clarifying which credentials should be reported, the 2010 guidance also included strategies that state and local officials can use to increase the quantity and quality of credentials attained. It noted that the first step in increasing the quantity of credentials attained is to refer more participants to training. DOL’s guidance also encouraged officials to take steps to ensure that the training programs result in an industry-recognized credential and that participants complete these training programs. These steps include shortening the duration of training and providing supportive services that enable participants to succeed. Further, to improve the quality of credentials attained, DOL suggested that state and local agencies build the capacity of front-line staff to identify and assess valuable and appropriate credentials for participants. DOL has also stressed the importance of credential attainment by measuring it through an agency-wide performance goal for its workforce development programs, including the WIA Adult and Dislocated Worker Programs. DOL officials noted that credential attainment rates for these WIA programs are higher than the rates of some other DOL programs included in the agency-wide performance goal. DOL first began tracking credential attainment data for its agency-wide performance goal in 2010 when it set out to increase the number of training participants who attain credentials through any one of multiple federal workforce programs. Specifically, the goal was an increase of 10 percent, up to a total of 220,000 training participants earning credentials. In fiscal year 2013, DOL continued to assess credential attainment through this performance goal and sought to increase the percentage of training participants who earn credentials from 57 to 62 percent. DOL officials reported a credential attainment rate of 59.4 percent through the first two quarters of fiscal year 2013. DOL officials also said that DOL has established a new credential attainment goal; specifically, that by September 30, 2015, the percentage of training participants who attain credentials will increase by 10 percent from the level reported as of the end of fiscal year 2013. In addition to issuing guidance and setting credential attainment goals, DOL also undertook a number of other related initiatives, including some that are specific to credentials and others that are more broadly designed, such as the Workforce Data Quality Initiative. See Table 3 for a description of DOL’s initiatives. Some states have stressed the importance of credential attainment by implementing broad, statewide efforts. Similar to DOL’s efforts to enhance credential attainment by establishing annual goals, three of the six states we reviewed have either implemented statewide credential attainment goals or are working to do so: Texas implemented an annual state performance measure on educational achievement that tracks credential attainment for multiple programs, including the WIA Adult and Dislocated Worker Programs. All WIBs in the state are held to this measure. 
Washington has made credential attainment a state performance measure, but defines credential more broadly than DOL. For example, Washington recognizes a larger range of credentials, such as completion of on-the-job training. Illinois re-implemented a credential attainment performance measure during program year 2012 and, according to state officials, is in the process of setting credential targets for program year 2014. According to officials from some states, their efforts to emphasize credential attainment and reporting may have a positive impact on participants’ reported rate of credential attainment. Moreover, officials in Alabama, Illinois, Kansas, and Rhode Island told us they targeted their training funds more narrowly on credential-yielding programs by only approving training providers with programs that resulted in credentials that met DOL’s definition. For one Chicago WIB, this strategy, along with its other efforts to streamline training options from 753 occupations to 40 in-demand occupations, reduced its number of training providers. This practice was one of many DOL suggested in its 2010 guidance as a means for states and local areas to improve the value of credentials for participants. Officials in nearly every state we interviewed reported that this guidance was helpful largely because it more clearly defined which credentials should be reported to DOL. In addition, selected states and local areas have taken steps to ease the resource-intensive process of collecting data on credentials by enhancing communication with participants and working to overcome privacy issues with training providers. Workforce officials in three states told us that case managers seek to build rapport with participants early in the process so they are more likely to be responsive after their training program ends. Regarding training providers, officials in four of six states said they have made efforts to address privacy concerns. Local officials in Alabama, Kansas, and Texas, for example, told us that they ask participants to sign consent forms to allow training providers to share credential information with officials. In Washington, state officials access some credential data from the National Student Clearinghouse and from their state database of community and technical colleges. They said that student privacy rights are generally not a barrier to accessing credential data in Washington because students attending their community and technical colleges are notified that such information can be released to other entities unless the student opts out of sharing it. Washington state officials noted that they have been refining their process for collecting data on credential attainment for 15 years and now have a fairly sophisticated approach. While these varied efforts to mitigate challenges may help reduce the resources required or improve the quality of reported data, workforce officials from three states and three experts we interviewed raised some additional considerations about measuring performance on credential attainment (see table 4). Establishing a performance measure on credential attainment may affect the type of training provided and which participants receive training. For example, neither work readiness training nor on-the-job training (OJT) leads to what DOL has defined as a credential for reporting purposes. 
However, these may be the most appropriate types of training for participants with basic skills or for particular industries, according to officials from two states we interviewed. Our December 2013 report found that participants in the WIA Adult and Dislocated Worker Programs often lacked the relevant qualifications and basic skills needed to participate in training that would meet the needs of employers seeking employees for in-demand occupations. In addition, work readiness certificates are generally valued by employers, according to several employer group representatives and local workforce officials we interviewed. Representatives from a few employer groups also noted that, in some cases, experience is more important than credentials. For example, local officials we interviewed in Illinois said that the vast majority of participants in OJT obtained jobs with the employers once their training was completed. The officials said OJT provided a good return on investment, despite the fact that these participants did not earn credentials. Currently, DOL’s credential attainment data do not include participants who completed these types of training programs. If credential attainment is established as a performance measure, it will be important to consider ways to address participants who are enrolled in certain types of training that do not lead to a credential, such as by excluding these participants from a credential attainment measure or considering whether other measures, such as basic skill attainment, could capture the value of training provided to participants excluded from the measure. Workforce officials in most states we studied identified challenges reporting data on training-related employment that were greater than those for reporting data on credential attainment, including the high degree of resources required and the subjective nature of determining whether employment is linked to training. Similar to credential attainment, there is no definitive source for these data, so case managers must generally collect participants’ employment information from various sources, including participants, employers, and UI wage records. Then—in a step beyond what is required for reporting on credential attainment—they must piece this information together to determine whether participants’ employment is substantially related to their training. Also unlike reporting on credential attainment, DOL does not require that local WIA programs collect and retain documentation on training-related employment in the participants’ case files to verify the accuracy of data they report to states. Officials in four of the six states we studied, as well as at DOL, said this data collection process often requires considerable time and effort. Further, officials from DOL and four states emphasized the need to consider the balance between the time required to collect outcome data and the time case managers spend serving participants, especially in an environment of reduced resources. In addition, officials in all six states said that making training-related employment determinations can be subjective. According to DOL’s reporting guidance, participants’ employment is related to their training if it uses “a substantial portion of the skills taught in training.” These officials, as well as workforce experts from one national organization, said that one case manager’s interpretation of what constitutes a substantial portion of the skills obtained in training may differ from another’s. 
The training-related employment decision can be straightforward if the training and job are clearly connected. For example, if a participant received training to attain a commercial driver’s license and was subsequently hired as a driver by a trucking company, the case manager can easily determine that the participant’s employment is substantially related to the training received. In other cases, however, the decision may be more subjective. For example, officials in one state could not agree whether a participant who had received aviation instruction training had secured training-related employment in his position as an airframe and power plant mechanic. Some state officials thought the skills obtained were transferable, but others were unsure. U.S. Department of Labor, Training and Employment Guidance Letter No. 17-09: Quarterly Submission of Workforce Investment Act Standardized Record Data (WIASRD) (March 10, 2010). Collecting participants’ employment information and attempting to determine whether it was training-related generally entails several steps (see fig. 5). Contacting participants. Several state and local officials we interviewed said that they generally begin the process of collecting data on training-related employment by attempting to contact participants, though some can be unresponsive or inaccessible, which workforce experts from one national organization noted as well. If case managers are successful, they ask participants for information such as the name of their employer and their job title. Some local areas also ask participants directly if their new jobs are—in the participants’ opinion—related to the training they received. In some cases, case managers make their training-related employment determinations based solely on information the participants provide about their employment. Contacting employers. Some case managers contact employers to obtain participants’ employment information, though employers may not be responsive. If case managers could not reach a participant but know where the person works, they may contact the employer to obtain their job title and description. They may also contact an employer to verify the information provided by a participant. Case managers can use the employment information obtained from an employer, or from both the participant and employer, to determine if a participant’s job is training-related. However, workforce officials in Illinois and Texas said employers may not be responsive because they are concerned about employee privacy or about the amount of follow-up required. Checking UI wage records. Some local workforce officials said that if they are unable to gather information about a participant’s employment from the participant or the employer, they check the UI wage records, which are generally not available until several months after a participant exits from WIA services. DOL officials and workforce experts at two national organizations said the UI wage records generally provide the name of the participant’s employer and a code associated with the employer’s industry, but specific information on the participant’s occupation is rarely included. In some cases, the industry code has a clear connection to the training received, making the case manager’s training-related employment determination straightforward. For example, if a participant who was trained as a nurse was hired by a hospital, the case manager can reasonably assume that the employment and training are related. 
However, some officials noted that the industry code is not always a good predictor of a participant’s occupation. For example, if the same participant was hired by the health unit of a manufacturing company, the industry code in the UI wage records would suggest that the person’s job was associated with manufacturing and not related to the nursing training. Other steps to determine training-related employment. Some workforce officials said that if case managers succeed in obtaining a participant’s employment information, including job title, they use DOL’s Occupational Information Network (O*NET), which provides an online tool to match job title occupational codes to the skills code associated with the participant’s training. This can help case managers decide whether the participant’s employment is training-related. According to some local workforce officials, however, it can be difficult to find the precise occupational code that matches the participant’s new job. In addition, DOL officials told us that even attempting to match O*NET codes in this manner might not help case managers determine if participants’ employment is related to their training because the threshold of relatedness is still subjective, as mentioned previously. DOL officials also told us it is difficult to prescribe a standard definition for determining whether a job is related to training because it often requires some judgment on the part of local officials. To improve reporting, some states have taken steps to increase their access to information about participants’ employment. Similar to their efforts to collect information on credential attainment, some local workforce officials told us that case managers seek to build rapport with participants. At the same time, officials we interviewed in four of six states have taken steps to increase access to employment-related information. In Illinois, Rhode Island, and Texas, local workforce officials ask participants to sign release forms authorizing employers to release employment information to officials. Local workforce staff in Kansas and Texas also said they obtain information on participants’ employment, including the names of their employers, and—unlike UI wage records—their job titles, by subscribing to an online payroll database called The Work Number. This service verifies employment via a database of national payroll data but does not include all employers, and local WIA programs must pay to subscribe. Some officials also said they conduct a participant survey for internal use to collect and share data on training-related employment that is not otherwise available to program managers and state officials. Another official noted that a significant percentage of respondents who do not report their job as training-related find the training instrumental in getting the job or that the skills they acquired in training are useful on their job. In addition, local workforce officials in four states we contacted said they have developed strategies to help reduce the subjectivity in determining whether a participant’s employment is related to their training. For example, local officials in Texas told us that staff may consult their American Job Center’s local business services office, which often has specific knowledge about what skills correspond with particular job titles. Officials in Texas and Alabama also said case managers may consult with their peers or supervisors to reach consensus about a training-related employment decision. 
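The occupational-code comparison described above can be sketched in a few lines. The example below is hypothetical, not a DOL tool or prescribed threshold: it assumes the training program and the new job have each been assigned a standard occupational code (O*NET-SOC codes extend the Standard Occupational Classification, in which the first two digits identify the major occupational group) and treats a shared major group only as a signal for the case manager, since the threshold of relatedness remains a matter of judgment.

```python
def same_major_group(training_soc: str, job_soc: str) -> bool:
    """Compare the 2-digit major group of two O*NET-SOC codes.

    Codes such as "29-2061.00" (Licensed Practical and Licensed Vocational
    Nurses) begin with a 2-digit major group ("29" covers healthcare
    practitioners). Sharing a major group is a hypothetical screening rule,
    not DOL's definition of training-related employment.
    """
    return training_soc.split("-")[0] == job_soc.split("-")[0]

# Illustrative comparison: LPN training followed by a registered nurse job
# (29-1141.00) falls in the same major group, so the case is flagged as a
# likely match for review; the case manager still makes the final call.
print(same_major_group("29-2061.00", "29-1141.00"))  # True
```

A coarser or finer prefix could be compared instead; the 2-digit granularity is simply a conservative choice for flagging cases, not a substitute for the manual determination the guidance requires.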
Further, state workforce officials in Kansas told us that when employers post jobs on the state’s job bank, they are required to enter occupational codes from DOL’s online O*NET database. If a training participant gets one of these jobs, case managers can compare the employer-provided occupational code with the training codes supplied by training providers to help them determine whether the job is training-related. State and local officials in Washington also said managers routinely use O*NET when making their training-related employment decisions. While DOL has recently issued guidance aimed at increasing reporting rates for training-related employment data, it has taken limited steps to address states’ ongoing reporting challenges. As previously discussed, a September 2011 report by DOL’s Office of Inspector General raised questions about the quality of these data and made recommendations to address this issue. In response, DOL issued a notice to states in September 2013 that reiterated the requirement for states to report these data and stressed the data’s importance for program analysis and evaluation efforts pertaining to the value of investments in WIA training. The DOL notice also acknowledged that reporting on training-related employment is challenging largely because the information must be collected manually. According to the DOL notice, nearly every state that participated in conference calls on the topic indicated that there was considerable cost in conducting the necessary follow-up for reporting on training-related employment and that this was the primary reason the data were not well-reported. State and local officials we interviewed also noted that such data may be underreported because of the difficulty of following up with participants and employers. DOL also concluded that states with larger training caseloads had less complete reporting on training-related employment. In the notice, DOL also described a few practices most common among states with higher reporting rates for training-related employment. For example, DOL cited the practice of instituting a data system check to ensure that training-related employment data are recorded before closing a participant’s case file. A crosswalk is not necessarily a solution for determining training-related employment. Another program DOL administers, Job Corps—a residential, educational, and career technical training program for disadvantaged youths—uses a crosswalk that links specific training codes and occupational codes to help staff determine training-related employment. However, in a September 2011 report, the DOL Inspector General found that this crosswalk included some matches that were either not related or poorly related. Moreover, DOL officials said that using a crosswalk for the WIA Adult and Dislocated Worker programs could make the training-employment link too restrictive and would require a considerable amount of resources to develop. DOL has not identified and disseminated strategies for increasing access to employment-related data or helping to minimize the subjectivity of training-related determinations, but instead has focused exclusively on increasing data reporting rates. While DOL officials maintain that manual follow-up with participants is the best approach for obtaining employment-related data, they also recognize that such data collection is resource-intensive. 
In addition, as officials in all six of our selected states also noted, DOL officials acknowledged that determining whether a participant’s employment is training-related can be a subjective decision. They noted challenges in defining training-related employment more precisely. For example, officials said some training is intended to develop broad, nonspecific skills that may help participants get jobs but are not associated with a specific occupation or industry. We recognize that utilizing professional judgment is inherent in certain tasks such as determining whether a participant’s employment is related to the training the participant received. However, minimizing the amount of subjective decision-making involved to the extent possible could help ensure better quality data on training-related employment. Reasonable approaches for improving the quality of performance data focus on aspects of completeness, accuracy, consistency, and ease of use. By identifying and sharing with states practices to increase access to employment-related data and reduce the subjectivity of some determinations, DOL could help states improve their reporting of data on training-related employment. In addition to the strategies all six selected states use to mitigate reporting challenges, workforce officials from three states and workforce experts at two national organizations said some additional considerations should be taken into account in weighing a performance measure on training-related employment (see table 5). We previously noted concerns about the level of resources required and the subjectivity of determinations, both of which could affect the data’s completeness and consistency—key aspects of performance data quality. The state and local strategies we identified may help mitigate some reporting challenges. In addition, a participant’s successful placement in a training-related job depends on both the ability and decision to pursue such a position. Some local workforce officials we contacted said that it may take participants longer to find employment in their field of training than is allowed for reporting purposes. For example, workforce officials in Rhode Island said that due to the poor economy in their state, it is not uncommon for some participants to take 2 years or more to find a job. Additionally, workforce officials in three states we contacted said participants may decide to take a job unrelated to their training if it is the only job they can find or if they simply choose not to pursue a job in the field in which they were trained. To ensure that public funds invested in WIA’s Adult and Dislocated Worker Programs are spent wisely, program managers and policymakers need performance data that are accessible, complete, accurate, and consistent. The current common performance measures—employment, retention, and earnings—provide a basis for assessing the overall value of the services the programs provide, primarily using a standardized data source (state UI wage records). Beyond these, data on outcomes such as credential attainment and training-related employment can potentially provide information more specifically on the value of training services. However, as we have noted, collecting data on these outcomes can be more resource-intensive, in part because there is no single readily available source of data. DOL has taken steps to elevate the importance of credential attainment and improve data quality for this outcome. We found credential attainment data reported to DOL to be reliable. 
In contrast, we found the data reported to DOL on training-related employment to be incomplete and inconsistent. While DOL has acknowledged challenges in collecting data on and determining training-related employment, it has taken only limited steps to address these challenges, focusing efforts exclusively on improving reporting rates. This effort alone will not improve the quality of the data being reported. Given the nature of the challenges we identified, there are no easy or complete solutions. However, we also identified strategies some states use that may help increase access to employment information and reduce the subjectivity of some training-related determinations. Sharing such strategies with other states, as well as identifying and communicating other approaches, could lead to incremental improvement in the quality of data reported. Without such action, the data states are required to report on training-related employment are likely to remain unusable. To provide policymakers and program managers with better quality information to assess the value of training provided by WIA’s Adult and Dislocated Worker Programs, we recommend that the Secretary of Labor identify and share with states strategies for collecting and reporting data on training-related employment that could either increase access to employment information or reduce the subjectivity of determining when training is related to employment. We provided a draft of this report to the Secretary of Labor and selected draft sections to the Secretary of Education. DOL and Education provided technical comments, which we incorporated as appropriate, and DOL provided a written response (see app. II). DOL agreed with our recommendation and noted that having reliable data on training-related employment is important to effectively manage and evaluate the Adult and Dislocated Worker Programs. DOL also agreed that states can benefit from learning what other states are doing to address challenges regarding access to and subjectivity of these data. Toward this end, DOL noted that it plans to conduct additional conference calls with state officials to reiterate the importance of identifying training-related employment and continue to discuss and share best practices to improve these data. DOL noted that this sharing of best practices would supplement actions it has already taken to improve data on training-related employment. These actions include coding changes to the WIASRD to capture additional information, conference calls with state workforce officials to discuss reporting on training-related employment, and a work group considering adding more data elements to the UI wage records, such as an occupational code. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Labor, the Secretary of Education, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who make key contributions to this report are listed in appendix III. 
Our objectives for this study on the Workforce Investment Act’s (WIA) Adult and Dislocated Worker Programs were to examine: 1) the extent to which training participants obtain credentials and secure training-related employment, 2) challenges states face reporting on credential attainment and what steps, if any, they and DOL are taking to address them, 3) challenges states face in reporting on training-related employment and what steps, if any, they and DOL are taking to address them. To address these objectives, we reviewed relevant federal laws, regulations, and DOL’s guidance to states for reporting select data on participants in the WIA Adult and Dislocated Worker Programs. We interviewed DOL officials from the Employment and Training Administration and the Office of Inspector General as well as experts on workforce issues (see Selection of Experts, below). We also interviewed state and local workforce officials as well as organizations that provided an employer perspective from a nongeneralizeable sample of six states (see Selection of States and Local Areas, below). To assess the reliability of the data DOL collects on credential attainment and training-related employment, we analyzed data from the Workforce Investment Act Standardized Record Data (WIASRD) system for program year 2010 and program year 2011—the most recent years for which data were available—by testing the data electronically and interviewing knowledgeable agency officials. We found the data to be sufficiently reliable for providing estimates on: 1) participants who received training, 2) the minimum number of training participants who earned a credential, and 3) the types of credentials they earned. However, we determined that the data on training-related employment were not reliable for the purposes of our report (see Analysis of DOL’s Training Outcome Data below). We interviewed experts on workforce issues representing six organizations. We identified experts by first reviewing relevant literature and asking officials from DOL for recommendations. We further developed the list by asking each expert we interviewed for additional names. Only experts who were mentioned more than once were selected. The results of these interviews are not generalizeable, but provided insights about the overall challenges states face in reporting on these outcomes and any efforts by states to overcome them. We interviewed state and local workforce officials from a nongeneralizeable sample of six states. We conducted in-person interviews with officials in Alabama, Illinois, and Texas and telephone interviews with officials in Kansas, Rhode Island, and Washington (see table 4). These results are not generalizeable, but provided insights about the challenges they face in reporting data on credential attainment and training-related employment as well as any steps they are taking to address those challenges. We selected the states to reflect a mix of those that had leading-edge data practices (as identified by experts) and those with either incomplete data or relatively high rates of reporting errors (as identified by WIA data quality reports on credential attainment). We also factored geographic diversity into state selection. In each state, we interviewed state workforce officials and also local workforce officials from at least one workforce investment board and at least one American Job Center—formerly known as a one-stop center. 
We selected a nongeneralizeable sample of local areas based on input from state workforce officials and, for states we visited, proximity to the state workforce agency. In addition, we interviewed at least one employer organization in each state. In selecting these entities, we considered states’ input on organizations that could provide us with an employer perspective on the value of credentials and certain types of training for various industries. They included statewide business associations, regional business associations, individual employers, and industry-specific representatives. In each state, we obtained information about the state and local area’s process of collecting data on credentials and employment related to training as well as any challenges they may have encountered. We also asked state officials about DOL’s related guidance. We used a semi-structured interview guide for the state and local interviews. To assess the reliability of DOL’s data on training, credential attainment, and training-related employment in the WIASRD database for participants in the WIA Adult and Dislocated Worker programs, we: (1) reviewed documentation related to reporting these data, including DOL’s Office of Inspector General reports; (2) tested the data electronically to identify potential problems with consistency, completeness, or accuracy; and (3) interviewed knowledgeable DOL officials about the data. Our electronic testing consisted of identifying inconsistencies, outliers, and missing values. In addition, we analyzed the publicly-available WIASRD data file for program years 2010 and 2011, which was produced for DOL by its data contractor, Social Policy Research Associates. As part of our analysis, we reviewed the steps the data contractor took to address data errors and, to the extent possible, compared the data DOL provided for our analysis to the publicly-available file, and found only slight discrepancies. We found the data on training and credential attainment to be sufficiently reliable for reporting estimates of: (1) participants who received training, (2) the minimum number of training participants who earned a credential, and (3) the types of credentials they attained. We were not able to reliably make state-to-state comparisons because two states are piloting a new reporting format for DOL, and their data therefore would not have been compatible with the others. For the purposes of this report, we did not find the data on training-related employment reliable. We reached this conclusion based on our analysis of the data, an Office of Inspector General report, and DOL’s data quality reports. We were not able to determine how many training participants in the Adult or Dislocated Worker Programs obtained employment related to their training in program year 2011. For the Adult Program, we found that states reported data on 48 percent of training participants, but had missing data for the remaining 52 percent. For the Dislocated Worker Program, states reported data on 74 percent of training participants, but had missing data for the remaining 26 percent. Further, an analysis of the reported data showed wide variation among states regarding the percentage of participants who obtained training-related employment, raising questions about the data’s reliability. We conducted this performance audit from October 2012 to January 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit work to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Andrew Sherrill, (202) 512-7215 or sherrilla@gao.gov. In addition to the contact named above, Laura Heald, Assistant Director, John Lack, Jeffrey G. Miller, and Kathryn O’Dea Lamas made key contributions to this report. Also contributing to this report were James Bennett, Jessica Botsford, David Chrisinger, Elizabeth Curda, Kathy Leslie, Carol Patey, Rhiannon Patterson, Jerry Sandau, and Walter Vance. Workforce Investment Act: Local Areas Face Challenges Helping Employers Fill Some Types of Skilled Jobs. GAO-14-19. Washington, D.C.: December 2, 2013. Workforce Investment Act: DOL Should Do More to Improve the Quality of Participant Data. GAO-14-4. Washington, D.C.: December 2, 2013. Postsecondary Education: Many States Collect Graduates’ Employment Information, but Clearer Guidance on Student Privacy Requirements Is Needed. GAO-10-927. Washington, D.C.: September 27, 2010. Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005. Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
As the economy recovers, some employers continue to face difficulty finding qualified workers. The WIA Adult and Dislocated Worker Programs provide services, including job training, which aims to help participants acquire skills and credentials employers need. Under WIA, states are required to report data on training participants who obtain credentials and on those who enter employment related to the training they receive. Given that a WIA reauthorization proposal would establish both of these outcomes as performance measures, GAO was asked to examine the capacity of states to report on these outcomes. This report addresses: 1) the extent to which training participants obtained credentials and training-related employment, 2) challenges states face in reporting data on credentials and what steps, if any, they and DOL are taking to address them, and 3) challenges states face in reporting data on training-related employment and what steps, if any, they and DOL are taking to address them. GAO interviewed DOL officials, workforce experts, and state and local officials and employer organizations from a nongeneralizeable sample of six states selected in part on the basis of geographic diversity. GAO also analyzed data on credential attainment and training-related employment for program years 2010 and 2011. Of the more than two million total participants in the Workforce Investment Act's (WIA) Adult and Dislocated Worker Programs, about 11 percent and 16 percent, respectively, received training in program year 2011, and about two-thirds of the training participants in each program attained a credential. Little is known, however, about how many participants got jobs related to their training. From program year 2006 through program year 2011, the percentages of training participants who earned a credential declined from about 74 percent to 58 percent for the Adult Program and from about 75 percent to 63 percent for the Dislocated Worker Program, according to data from the Department of Labor (DOL). Of those training participants who attained a credential in program year 2011, about 65 percent earned occupational credentials, such as a welding certificate, followed by lower percentages who earned occupational skill licenses and associate's degrees, among others. In contrast, GAO found training-related employment data unreliable primarily because a significant portion of the data was missing. Workforce officials in four of six selected states cited some obstacles in reporting data on credential attainment, and both DOL and states are taking steps to address challenges. Officials in four states GAO contacted said reporting such data can be resource-intensive, largely because case managers must manually track this information from various sources, including participants, training providers, and third-party organizations. To improve credential attainment and reporting, DOL clarified which credentials should be reported and began measuring credential attainment through an agency-wide goal in 2010. Officials in five states have taken similar steps, such as setting goals and tracking credential attainment, and enhancing data exchange with training providers. Officials in most of the six states GAO contacted noted some obstacles to obtaining such data. For example, officials from several states cited student privacy laws as a barrier in verifying credentials with training providers. 
Officials in three states told GAO that they ask participants to sign consent forms allowing training providers to give credential information to local officials. Workforce officials in most of the selected states identified even greater challenges reporting data on training-related employment, including the high degree of resources required and the subjective nature of determining whether employment is linked to training. DOL has taken only limited steps to address these challenges. To report such data, case managers seek participants' employment information from participants, employers, and wage records. Then they must piece it together to determine whether participants' employment is "substantially related" to their training. Officials in most of the six states described this process as resource-intensive and noted that making such determinations is subjective since one case manager's interpretation of "substantially related" may differ from another's. Given these challenges, officials in all six states have taken some steps to increase access to employment information or make decisions less subjective. DOL has recently stressed the importance of reporting data on training-related employment and shared a few practices with states to increase reporting rates; however, it has not identified and disseminated strategies to address the ongoing challenges states face regarding resource intensiveness and subjectivity, which could improve the quality of such data. GAO recommends that DOL identify and share with states strategies that may ease collection and improve the quality of training-related employment data. DOL agreed with GAO’s recommendation.
You are an expert at summarizing long articles. Proceed to summarize the following text: Greenhouse gases can affect the climate by trapping energy from the sun that would otherwise escape the earth’s atmosphere. Various human and natural activities emit greenhouse gases, with the production and burning of fossil fuels for energy contributing around two-thirds of man-made global emissions in 2005 (see fig. 1). The remaining third includes emissions from industrial processes, such as steel production and semiconductor manufacturing; agriculture, including emissions from the application of fertilizers and from ruminant farm animals; land use, such as deforestation and afforestation; and waste, such as methane emitted from landfills. Carbon dioxide is the most important of the greenhouse gases affected by human activity, accounting for about three-quarters of global emissions in 2005, the most recent year for which data were available. The 14 nations in our study differ greatly in the quantity of their greenhouse gas emissions, the sources of those emissions, and their per-capita incomes. Emissions in 2005 ranged from about 7 billion metric tons of carbon dioxide equivalent in China and 6 billion metric tons in the United States, to about 300 million metric tons in Malaysia. The contribution of various sectors to national emissions also differed across nations, with emissions from energy and industrial processes accounting for more than 70 percent of emissions in most industrialized nations and 20 percent or less of emissions in Indonesia and Brazil (see fig. 2). The Convention established a Secretariat that, among other things, supports negotiations, coordinates technical reviews of reports and inventories, and compiles greenhouse gas inventory data submitted by nations. The Secretariat has about 400 staff, located in Bonn, Germany, and its efforts related to national inventories are funded by contributions from the Parties. For the Secretariat’s core budget, Parties provided $52 million for the 2008-2009 budget cycle, of which the United States contributed $9.5 million ($3.76 million in 2008 and $5.75 million in 2009), excluding fees. The Convention requires Parties to periodically report to the Secretariat on their emissions of greenhouse gases resulting from human activities. Parties generally do not measure their emissions directly, because doing so is not feasible or cost effective; instead, they estimate their emissions. To help Parties develop estimates, the IPCC developed detailed guidelines—which have evolved over time—describing how to estimate emissions. The general approach is to use statistics on activities, known as activity data, and estimates of the rate of emissions per unit of activity, called emissions factors. For example, to estimate emissions from passenger cars, the inventory preparers could multiply the number of gallons of gasoline consumed by all cars by the estimated quantity of emissions per gallon. The IPCC guidelines allow nations to use various methods depending on their data and expertise. In some cases, with adequate data, estimates of emissions can be as accurate as direct measurements, for example for carbon dioxide emissions from the combustion of fossil fuels, which contribute the largest portion of emissions for many nations. The Parties agreed to the following five principles for inventories from Annex I nations: Transparent. 
Assumptions and methodologies should be clearly explained to facilitate replication and assessment of the inventory. Consistent. All elements should be internally consistent with inventories of other years. Inventories are considered consistent if a Party uses the same methodologies and data sets across all years. Comparable. Estimates should be comparable among Parties and use accepted methodologies and formats, including allocating emissions to the six economic sectors defined by IPCC—energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste. Complete. Inventories should cover all sources and sinks and all gases included in the guidelines. Accurate. Estimates should not systematically over- or underestimate true emissions as far as can be judged and should reduce uncertainties as far as practical. Annex I nations are to submit inventories annually consisting of two components—inventory data in a common reporting format and a national inventory report—both of which are publicly available on a Web site maintained by the Secretariat. The common reporting format calls for emissions estimates and the underlying activity data and emissions factors for each of six sectors—energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste. It also calls for data on the major sources that contribute to emissions in each sector. The inventory data are to reflect a nation’s most recent reporting year as well as all previous years back to the base year, generally 1990. The 2010 reporting format called for nearly 150,000 items of inventory data and other information from 1990 through 2008. The common format and underlying detail facilitate comparisons across nations and make it easier to review the data by, for example, enabling automated checks to ensure emissions were properly calculated and to flag inconsistencies in data reported over time. The national inventory report should explain the development of the estimates and data in the common reporting format and should enable reviewers to understand and evaluate the inventory. The report should include, among other things, descriptions of the methods used to calculate emissions estimates, the rationale for selecting the methods used, and information about the complexity of methods and the resulting precision of the estimates; information on quality assurance procedures used; discussion of any recalculations affecting previously submitted inventory data; and information on improvements planned for future inventories. The Secretariat coordinates an inventory review process that, among other things, assesses the consistency of inventories from Annex I nations with reporting guidelines. The purposes of this process are to ensure that Parties are provided with (1) objective, consistent, transparent, thorough, and comprehensive assessments of the inventories; (2) adequate and reliable information on inventories from Annex I Parties; (3) assurance that inventories are consistent with IPCC reporting guidelines; and (4) assistance to improve the quality of inventories. In supporting the inventory review process, the Secretariat provides scientific and technical guidance on inventory issues and coordinates implementation of Convention guidelines. Inventory reviews are supervised by the head of the reporting, data, and analysis program within the Secretariat. 
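As a concrete illustration of the activity-data and emissions-factor approach described earlier, the short sketch below multiplies an activity level by a factor for each source and sums the results. The activity amounts and factor values are illustrative placeholders rather than figures from the IPCC guidelines or any Party's inventory; an actual inventory would use IPCC default or country-specific factors and would convert non-carbon-dioxide gases to carbon dioxide equivalent using agreed global warming potential values.

```python
# Minimal sketch of the general estimation approach: emissions are the
# product of activity data and an emissions factor, summed across sources.
# All values below are illustrative placeholders, not inventory data.

# source: (activity amount, unit, emissions factor in kg CO2 per unit)
SOURCES = {
    "passenger cars (gasoline)": (1_000_000, "gallons", 8.9),
    "electricity generation (coal)": (500_000, "tonnes of coal", 2_400.0),
}

def total_emissions_tonnes(sources):
    """Return total CO2 in metric tons: sum of activity times emissions factor."""
    total_kg = sum(amount * factor for amount, _unit, factor in sources.values())
    return total_kg / 1_000.0  # kilograms to metric tons

if __name__ == "__main__":
    for name, (amount, unit, factor) in SOURCES.items():
        tonnes = amount * factor / 1_000.0
        print(f"{name}: {amount:,} {unit} x {factor} kg CO2/unit = {tonnes:,.0f} t CO2")
    print(f"Total: {total_emissions_tonnes(SOURCES):,.0f} t CO2")
```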
By June each year, the Secretariat checks each inventory for completeness and format, called an initial check, and conducts a preliminary assessment before submitting it to an inventory review team for examination. The Secretariat assembles inventory review teams composed of scientists and other experts from around the world to review inventories from all Annex I Parties according to the Convention’s review guidelines. The inventory review teams assess inventories in September by reviewing activity data, emissions factors, methodologies, and other elements of an inventory to determine if a nation has employed appropriate standards, methodologies, and assumptions to compute its emission estimates. From February through March, the inventory review teams develop inventory review reports outlining their findings. In accordance with the Convention’s principle of common but differentiated responsibilities, the format and frequency of non-Annex I nations’ inventories differ from those for Annex I nations. The reporting guidelines, which have evolved over time, encourage non-Annex I nations to use the IPCC methodological guidelines in developing their inventories, but do not specify that they must be used. While non-Annex I nations submit inventories to the Secretariat, their inventories are not stand-alone documents. Rather, a non-Annex I nation’s inventory is a component of its national communication, a report that discusses steps the nation is taking or plans to take to implement the Convention. Non-Annex I nations do not have to use the common reporting format or submit a national inventory report. Moreover, they do not submit an inventory each year, but instead the Parties to the Convention determine the frequency of their submissions. Parties have not agreed on a regular frequency for non-Annex I nations to submit their inventories. According to expert inventory review teams, the 2009 greenhouse gas inventories of seven Annex I nations were generally comparable and of high quality, although some of their emissions estimates have substantial uncertainty. In contrast, we found that the most recent inventories from seven non-Annex I nations, although they met reporting guidelines, were of lower quality and generally not comparable. Finally, experts identified several barriers to improving inventory comparability and quality. All of the inventories submitted in 2009 by the seven selected Annex I nations were generally comparable and of high quality, according to the most recent inventory reviews conducted by expert review teams under the Convention. The reviews found that six of the seven nations—Australia, Canada, Japan, Russia, the United Kingdom, and the United States—used appropriate methodologies and data, employed reasonable assumptions, and did not systematically either over- or underestimate emissions in their 2009 inventories (covering data from 1990 through 2007). The one exception to this was Germany’s 2009 inventory, which the review team said did not follow guidelines for its agricultural emissions, in part because of its attempt to use newer methods. The change significantly reduced estimated emissions from agriculture, though the sector is a relatively small contributor to Germany’s total emissions. One inventory reviewer familiar with Germany’s 2009 inventory said its overall quality was fairly good. 
In addition, Germany appears to have addressed the issue of its agricultural emissions in its 2010 inventory submission by returning to its previous methods, which had the effect of increasing its estimates of emissions from agriculture. Experts said that the seven selected inventories were generally comparable, which means they generally used agreed-upon formats and methods. In addition, nine experts we interviewed said they were of high quality and did not have major flaws. These findings show significant improvement in the seven nations’ inventories since our 2003 report. For example, we reported in 2003 that both Germany’s 2001 submission (covering data through 1999) and Japan’s 2000 submission (covering data through 1998) lacked a national inventory report, a critical element that explains the data and methods used to estimate emissions. Nearly all Annex I nations—including Germany and Japan—now routinely submit this report. In addition, the review team found Russia’s 2009 inventory showed major improvements. For example, Russia included a full uncertainty analysis for the first time and improved its quality assurance and quality control plan. Since our 2003 report, these 7 selected nations, and 34 other Annex I Parties, have submitted about seven inventories, which were generally on time and more comprehensive than previous inventories (see fig. 3). The inventory review reports noted several potential problems that, while relatively minor, could affect the quality of emissions estimates. For example, the review of the 2009 U.S. inventory noted that assumptions about the carbon content of coal are outdated because they are based on data collected between 1973 and 1989. The effect on emissions estimates is not clear, but the carbon content of the coal burned as fuel may change over time, according to the inventory review report. Any such change would affect emissions, since coal is the fuel for about half of all U.S. electricity generation. The U.S. inventory also used a value from a 1996 agricultural waste management handbook to estimate nitrous oxide emitted from livestock manure. The inventory review noted that livestock productivity, especially for dairy cows, has increased greatly since 1996, which would also increase each animal’s output of nitrous oxide emissions. Using the IPCC’s methodology for calculating emissions from excreted nitrogen, we estimated that this would lead to an underestimate of roughly 4.7 percent of total nitrous oxide emissions and 0.2 percent of total greenhouse gas emissions. Finally, the review of Russia’s 2009 inventory noted that it did not include carbon dioxide emissions from organic forest soils, which the inventory review report said could be significant. The inventory reviews and one expert we interviewed attributed many of the potential underestimations to a lack of data or an adequate IPCC-approved methodology and said that nations were generally working to address the issues. Even though the review teams found these seven inventories generally comparable and of high quality, the nations reported substantial uncertainty in many of the emissions estimates in their inventories. The term “uncertainty” denotes a description of the range of values that could be reasonably attributed to a quantity. All of the Annex I nations’ inventories we reviewed contained quantitative estimates of uncertainty. 
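The quantitative uncertainty figures that nations report are typically built up from category-level uncertainties. The sketch below shows the simple error-propagation approach that IPCC guidance describes for combining uncertainties across categories (often referred to as Approach 1); the category totals and percentage uncertainties are illustrative placeholders, not values from any nation's submission.

```python
import math

# Illustrative category estimates: emissions in million metric tons of CO2
# equivalent and uncertainty as a +/- percentage. Placeholder values only.
CATEGORIES = {
    "fossil fuel combustion (CO2)": (5_000.0, 2.0),
    "agricultural soils (N2O)": (200.0, 100.0),
    "land use, land-use change and forestry": (-900.0, 25.0),
}

def combined_uncertainty_percent(categories):
    """Combine uncertainties of quantities that are added together, using
    simple error propagation: sqrt(sum((u_i * x_i)^2)) / |sum(x_i)|."""
    total = sum(x for x, _ in categories.values())
    squared_half_widths = sum(((u / 100.0) * x) ** 2 for x, u in categories.values())
    return 100.0 * math.sqrt(squared_half_widths) / abs(total)

if __name__ == "__main__":
    # With these placeholder values the overall uncertainty is roughly +/- 7 percent.
    print(f"Overall uncertainty: +/- {combined_uncertainty_percent(CATEGORIES):.1f} percent")
```

As the calculation shows, a large category with modest percentage uncertainty and a small category with very large percentage uncertainty can contribute comparably to the overall figure, which is consistent with the pattern of contributions discussed below.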
As shown in table 1, six of the seven nations reported uncertainties for their overall estimates between plus or minus 1 and 13 percent, and Russia reported overall uncertainty of about plus or minus 40 percent. That equates to an uncertainty of 800 million metric tons of carbon dioxide equivalent, slightly more than Canada’s total emissions in 2007. Russia’s relatively large uncertainty estimate could stem from several factors, such as less precise national statistics. In addition, Russia generally used aggregated national data rather than data that account for variation within the nation. This would increase uncertainty because aggregated data do not account for important differences that affect emissions, such as different types of technology used in the energy sector. Japan and Australia reported very low uncertainty in 2009. The inventory review report noted that Japan’s estimate was lower than estimates from other nations, but neither the report nor Japan’s inventory provides a full explanation. The review team for Australia said that its uncertainty ranges were generally consistent with typical uncertainty ranges reported for its sectors. Despite high levels of uncertainty in some instances, the inventory review teams found the seven inventories to be generally of high quality because the teams judge quality based on consistency with guidelines rather than strictly on the precision of the estimates. The uncertainty of emissions estimates also varies among the different sectors of a nation’s economy. For example, uncertainty is relatively low for estimates of carbon dioxide emissions from the combustion of fossil fuels because the data on fuel use are generally accurate and the process that generates emissions is well understood. Uncertainty is much higher for certain categories within agriculture and land-use. For example, some nations report that the uncertainty in their estimates of nitrous oxide emissions from agricultural soils is greater than 100 percent, in some cases much greater. According to a March 2010 report by a National Research Council committee, this results from scientific uncertainty in emission factors. Table 2 shows the contribution of the most important sources of uncertainty in the U.S. inventory. The sources of uncertainty in the other six Annex I nations’ inventories follow a broadly similar pattern: the largest sources of uncertainty are either large sources of emissions—such as fossil fuel combustion and land use—or small but highly uncertain categories—such as agricultural soils. Shortcomings in inventory reporting guidelines may decrease the quality and comparability of emissions estimates for land use, according to two experts we interviewed. For example, the guidelines state that nations should report all emissions from “managed forests,” but they have broad latitude in assigning forested land to this category. This choice may have a major effect on emissions; one expert said that it would be possible for some nations with large forested areas, such as Brazil, to offset all their emissions from deforestation by designating large areas of protected forest as managed and taking credit for all of the carbon dioxide absorbed by those forests. To address this potential inconsistency, the National Research Council committee report recommended taking inventory of all land-based emissions and sinks for all lands, not just man-made emissions on managed lands. 
Others said that designating land as managed forest remains the most practical way to estimate man-made emissions and removals because other methods are not well developed. Inventories from the non-Annex I nations we reviewed met the Convention’s relevant reporting guidelines. All of the seven non-Annex I nations we reviewed—Brazil, China, India, Indonesia, Malaysia, Mexico, and South Korea—had submitted their first inventories. In addition, Mexico submitted its second, third, and fourth inventories, and South Korea submitted its second. Secretariat officials said the other selected nations could submit their second inventories, as part of their national communications, over the next few years. The reporting guidelines call for non-Annex I nations to estimate emissions for 1990 or 1994 in their first submission, and for 2000 in their second submissions, and to include estimates for carbon dioxide, methane, and nitrous oxide in all submissions. We found that all selected non-Annex I nations reported for relevant years and these three gases, but we did not assess whether nations used appropriate methodologies and assumptions to develop these estimates. However, the seven inventories were generally not comparable and were of lower quality than inventories from Annex I nations in four ways: 1. Inventories from select non-Annex I nations were outdated. The most recent inventories from selected Annex I nations estimate emissions for 1990-2008. However, except for Mexico and South Korea, the most recently submitted inventories from selected non-Annex I nations are for emissions for 1994. (See figure 4.) 2. Some selected non-Annex I nations’ inventories do not estimate emissions of all gases. As shown in figure 4, inventories from China, India, Indonesia, and Malaysia did not include estimates of the emissions of synthetic gases. Independent estimates show that while synthetic gases were only 1 percent of global emissions in 2005, the emissions of synthetic gases increased by 125 percent between 1990 and 2005. Their emissions have also grown substantially in some non-Annex I nations, such as China, which had the largest absolute increase in synthetic gas emissions among all non-Annex I nations between 1990 and 2005, according to information from the International Energy Agency (IEA). 3. Select non-Annex I nations’ inventories, to varying degrees, lacked critical elements. We assessed inventories for several elements that, according to reporting guidelines, can improve the quality and transparency of inventories. First, only Brazil and Mexico provided a quantitative analysis of the uncertainty of their estimates. Second, we found that all inventories lacked adequate documentation of methodologies, emission factors, and assumptions and that most lacked descriptions of quality assurance and quality control measures. Third, none of the select nations reported in a comparable format, instead using different formats and levels of aggregation. For example, China estimated some methane emissions from various agricultural subsectors but grouped some of these estimates into only one category. In contrast, South Korea estimated these same emissions but reported them in separate categories. Overall, the lack of documentation and of a common reporting format limited our ability to identify and compare estimates across nations. Finally, only Mexico included an analysis of its key categories of emissions. 4. National statistics from some select non-Annex I nations are less reliable. 
According to three experts we interviewed and literature, some non-Annex I nations have less reliable national statistics systems than most Annex I nations. These systems are the basis for emissions estimates, and experts noted that the estimates are only as good as the underlying data. For example, researchers estimated that the uncertainty of carbon dioxide emissions from China's energy sector was as high as 20 percent. In contrast, reported uncertainties in estimates of carbon dioxide emissions from fossil fuel use in many developed nations are less than 5 percent. In addition, the International Energy Agency noted a relatively large gap between its energy statistics and those used in the national inventories of some non-Annex I nations, highlighting a need for better collection of data and reporting of energy statistics by some non-Annex I nations. According to IEA data, emissions from non-Annex I nations grew between 1990 and 2005 by about the annual emissions of Canada, Germany, Japan, and Russia in 2005 combined. Recognizing the importance of information from non-Annex I nations, in March 2010, a National Research Council committee recommended that Framework Convention Parties extend regular, rigorous inventory reporting and review to developing nations. Experts we interviewed identified several barriers to improving the comparability and quality of inventories. First, 10 of the 12 experts who provided views about barriers said that a lack of data and scientific knowledge makes some types of emissions difficult to estimate for both Annex I and non-Annex I nations. For example, current estimates of emissions related to biological processes, such as those from agriculture and land use, can be uncertain because of limited data. Specifically, nations do not always collect data on livestock nutrition, which can affect methane emissions. In addition, emissions related to some biological processes are difficult to estimate because they are not fully understood or are inherently variable. Emissions related to agriculture, for example, depend on the local climate, topography, soil, and vegetation. In March 2010, a National Research Council committee recommended further scientific research and data collection to reduce the uncertainties in estimates of agriculture, forestry, and land-use emissions. Such emissions are important, contributing about one quarter of total global emissions in 2005, the most recent year for which global data were available. They are particularly important for some non-Annex I nations, where they can be the largest sources of emissions. In Brazil and Indonesia, for example, agriculture and land-use emissions accounted for about 80 percent of total emissions in 2005. Second, 11 experts said that non-Annex I nations have limited incentives to produce better inventories. The current international system encourages Annex I nations with commitments under the Kyoto Protocol to improve their inventories. This is because their ability to participate in the Kyoto Protocol's flexibility mechanisms—which provide a cost-effective way to reduce emissions—is linked to, among other things, the quality of certain aspects of their inventories. Late submissions, omissions of estimates, or other shortcomings can all affect nations' eligibility to use these mechanisms. 
Therefore, low-quality inventories can affect nations' ability to lower the costs of achieving their emissions targets. While four experts we interviewed said that this linkage between inventories and the flexibility mechanisms in the Kyoto Protocol has driven improvements in many Annex I nations' inventories, incentives for non-Annex I nations are limited. Furthermore, four experts said that some non-Annex I nations may avoid additional international reporting because they see it as a first step toward adopting commitments to limit emissions. In addition, experts and the national communications of selected non-Annex I nations identified several other barriers to improving the quality and comparability of inventories from non-Annex I nations, including: Less stringent reporting guidelines and lack of review. Reporting guidelines differ between Annex I and non-Annex I nations. Non-Annex I nations do not need to annually submit inventories or to report on as many gases, for as many years, with as much detail, or in the same format as Annex I nations. They also do not have to follow all IPCC methodological guidelines, although they are encouraged to do so. Six experts said that this less stringent reporting regime has contributed to the lack of quality and comparability in inventories from non-Annex I nations. In addition, non-Annex I nations have not benefited from the feedback of technical reviews of their inventories, according to one expert. Financial and other resource constraints. Though eight experts generally said that many non-Annex I nations may lack needed financial and other resources, they differed on the magnitude and importance of additional international support. Non-Annex I nations may lack resources to improve data collection efforts, conduct additional research, or establish national inventory offices. The developed nations of Annex I provided the majority of about $80 million that has been approved for the latest set of national communications, which include inventories, from non-Annex I nations. However, one expert said that this has not been sufficient to fully support the activities needed. In their national communications, China and India indicated needing funding to, for example, improve data collection. Two experts said that improving non-Annex I nations' inventories may require significant resources. On the other hand, others said that the funds involved may be relatively small, or that financial constraints may not be significant, at least for major non-Annex I nations. For example, according to a report from a National Research Council committee, significant improvements in inventories from 10 of the largest emitting developing nations could be achieved for about $11 million over 5 years. While experts disagreed about the importance of additional funding, three said that international funding should support capacity development in each nation. They said that more continuous support would improve on the current, project-based method of funding, which encourages nations to assemble ad hoc teams that collect data, write a report, and then disband. Lack of data and nation-specific estimates of emissions factors. According to four experts and the Convention Secretariat's summary of constraints identified by non-Annex I nations in their initial national communications, the lack or poor quality of data or a reliance on default emissions factors limits the quality of inventories. 
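Because several of these barriers come down to the data and emission factors that feed the basic inventory calculation, a minimal sketch of that calculation may help: emissions for a source category are essentially activity data multiplied by an emission factor. The Python example below uses entirely hypothetical livestock numbers and factors (they are not taken from any inventory) to show how relying on a default factor rather than a nation-specific one shifts the estimate; it is the same arithmetic behind the Denmark example in the following paragraph.

    # Minimal sketch of the basic inventory calculation:
    #     emissions = activity data x emission factor
    # All values below are hypothetical and for illustration only.
    sheep_population = 100000          # head of sheep (activity data, hypothetical)
    default_ef_kg_ch4 = 8.0            # kg CH4 per head per year (hypothetical default value)
    national_ef_kg_ch4 = 16.0          # kg CH4 per head per year (hypothetical nation-specific value)

    emissions_with_default = sheep_population * default_ef_kg_ch4 / 1000.0    # metric tons CH4
    emissions_with_national = sheep_population * national_ef_kg_ch4 / 1000.0  # metric tons CH4

    # If the nation-specific factor is twice the default, using the default
    # understates this category by half.
    print(emissions_with_default, emissions_with_national)   # 800.0 1600.0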
Most non-Annex I nations identified missing or inadequate data as a major constraint for estimating emissions in at least one sector. For example, Indonesia reported that it did not estimate carbon emissions from soils because the data required were not available. Though inventory guidelines encourage the use of nation-specific emissions factors that reflect national circumstances, most non-Annex I nations use default values provided by the IPCC. The reliance on default values can increase uncertainties of estimates because national circumstances can differ significantly from the defaults. For example, Denmark's nation-specific emission factor for methane emissions from sheep is twice as large as the default. Thus, if Denmark had used the default value, it would have underestimated its emissions from sheep by half. Experts said that the process for reviewing inventories from Annex I nations has several notable strengths. They also identified three limitations, which may present challenges in the future. Moreover, we found that although the review process includes steps to help ensure the quality of reviews, there is no independent assessment of the process' operations. Finally, there is no review process for inventories from non-Annex I nations. Eight of the experts we interviewed said the process for reviewing inventories from Annex I nations has several notable strengths that enable it to generally meet its goals of providing accurate information on the quality of inventories and helping nations improve their inventories. (Figure 6 below depicts the inventory review process.) Experts identified four broad categories of strengths: Rigorous review process. Five experts said the rigorous review process gives them confidence that review teams can identify major problems with inventory estimates. For example, the Secretariat and review teams compare data, emission factors, and estimates from each inventory (1) from year to year, (2) with comparable figures in other inventories, and (3) with data from alternative sources, such as the International Energy Agency (IEA) and the United Nations Food and Agriculture Organization. Reviewers also ensure methods used to estimate emissions are appropriate and meet accepted guidelines. In addition, IEA officials inform the inventory review process by reviewing energy data in inventories and independently identifying issues for review teams to consider further. Qualified and respected reviewers. Three experts we interviewed said that well-qualified and widely respected inventory reviewers give the process credibility. Secretariat officials told us that a relatively small number of people in the world have the expertise to evaluate inventories without further training. Parties nominate reviewers, including leading scientists and analysts, many of whom are also inventory developers in their home nations. Reviewers must take training courses and pass examinations that ensure they understand inventory guidelines and appropriate methodologies before serving on a review team. Two experts said reviewers' experience and qualifications allow them to assess the strengths and weaknesses in inventories, including whether nations use appropriate methodologies. This is particularly important because some nations use advanced or nation-specific approaches, which can be difficult to assess. Capacity building. Three experts said the inventory review process builds expertise among reviewers from developed and developing nations. 
Specifically, they said the review process brings inventory specialists together from around the world, where they learn from each other and observe how various nations tackle challenges in compiling their inventories. Two experts said that reviewers return home and can use the knowledge and contacts gained from their review team experiences to improve their national inventories. Constructive feedback. Two experts said that the inventory reviews provide constructive feedback to improve inventories from Annex I nations. This feedback includes identifying both major and minor shortcomings in inventories. Secretariat officials said that review teams, when they identify issues, must also offer recommendations for addressing them. For example, reviewers noted Russia’s 2009 use of default assumptions for much of its uncertainty analysis, and recommended that Russia develop values that better match the methods and data used in making the emissions estimates. For these and other reasons, three experts we interviewed said that the review process has helped improve the quality of inventories from Annex I nations. Secretariat officials said that when review teams point out discrepancies or errors, many nations revise and resubmit estimates to correct problems. For example, Australia revised its estimates of carbon dioxide emissions from croplands after a review team pointed out that changes in croplands management affect emissions. Australia’s revisions decreased estimated emissions from croplands in 1990 by 138 percent, meaning the revisions had the effect of moving croplands from an estimated source of greenhouse gas emissions to a sink removing greenhouse gases from the atmosphere. For nations with Kyoto Protocol commitments, review teams may adjust estimates if they are not satisfied with a response to their findings. For example, the team reviewing Greece’s 2006 inventory concluded that estimates in several categories were based on methods, data, and emissions factors that did not adhere to reporting guidelines. The review team was not satisfied with Greece’s response, and recommended six adjustments to Greece’s estimates. These adjustments lowered Greece’s official baseline energy sector emissions by 5 percent, from 82 million to 78 million metric tons of carbon dioxide equivalent. Experts, literature, and several nations identified some limitations of the review process, which may present challenges in the future if, for example, the process is expanded to incorporate non-Annex I nations. First, six experts we interviewed said the process does not independently verify emissions estimates or the quality of the underlying data. Review teams primarily ensure the consistency of inventories with accepted standards but do not check underlying activity data, such as the amount of fuel burned. Review teams do compare underlying data with those reported in other sources, but these other sources are not fully independent because they also come from the nations that supply the inventories. Two experts said that more thorough verification might involve comparing estimates to observed measurements or independently constructing estimates from raw data. However, such approaches may be costly and, as a National Research Council committee reported, the other methods currently available do not allow independent verification of estimates. 
Furthermore, one expert said that the review of emissions estimates from agricultural soils and land-use sectors may be especially limited because of a lack of data and the inherent difficulty in measuring these emissions. The inability to more thoroughly assess inventories may reduce the reliability of review findings. For example, the inventory review process may have overlooked a significant shortcoming in at least one review. Specifically, in 2009, the national audit office of one Annex I nation found that its national inventory estimates may understate actual emissions by about a third because the inventory preparers used questionable statistics. The relevant agencies in that nation generally agreed with the audit office’s recommendations based on its assessment. The review for that inventory, however, did not identify this issue. Second, four experts we interviewed and several nations have expressed concerns about inconsistency across reviews, though the magnitude of this potential problem is unclear. The concerns relate to the potential for review teams to inconsistently apply standards when assessing an inventory. Secretariat officials said the process of reviewing inventories involves some degree of subjectivity, since reviewers use professional judgment in applying inventory review guidelines to a specific inventory. As a result, review teams might interpret and apply the guidelines differently across nations or over time. Four experts we spoke with, as well as several nations, have raised such concerns. For example, the European Community reported that some nations have received, on occasion, contradictory recommendations from inventory review teams. Secretariat officials said lead reviewers are ultimately responsible for consistent reviews but that Secretariat staff assist the review teams during the process, and two Secretariat staff read through all draft inventory reports, in part to identify and resolve possible inconsistencies. In addition, lead reviewers develop guidance on consistency issues at annual meetings. The magnitude of this potential problem is unclear, in part because it has not been evaluated by an independent third party. Third, three experts and officials we interviewed said there are not enough well-qualified reviewers to sustain the process. Three experts and Secretariat officials said that they did not know whether this shortage of available experts has affected the overall quality of reviews. The Secretariat has, in the past, reassigned staff and reviewers from work on national communications to the review of inventory reports, and it provides training to all reviewers to increase capacity and retain qualified reviewers. However, Secretariat officials said it may be difficult to sustain the quality of reviews in the future if the inventory review process is expanded to include inventories from non-Annex I nations without receiving additional resources, since this would substantially increase the demands on the review process. The review process includes steps to help ensure the quality of reviews, but we found that its quality assurance framework does not independently assess the process. Secretariat officials said that lead reviewers oversee the drafting of review reports, and review officers, lead reviewers, and review teams maintain a review transcript to keep track of potential issues they have identified with inventories, of nations’ responses to those issues, and of their resolution. 
However, lead reviewers, in the report of their 2009 meeting, expressed concern that these review transcripts are sometimes incomplete and are not always submitted to the Secretariat. In providing information on their experience with the review process and recommendations for improvements, the nations of the European Community suggested in late 2008 that the review process would benefit from establishing clear quality assurance and quality control procedures as well as from an annual analysis of its performance in relation to its objectives. Secretariat officials said they designated a Quality Control Officer who, along with the supervisor of the review process, reads all draft review reports and may identify problems and check underlying information in reports. Furthermore, Secretariat officials said that lead reviewers meet annually to discuss the review process, assess and prepare guidance about specific issues or concerns about the review process, and develop summary papers to report to Parties. Nonetheless, the review process lacks an independent assessment of its operation. We examined several other review processes and found that periodic external assessments by independent entities can provide useful feedback to management and greater assurance that the review processes are working as intended. Inventory guidelines call for Annex I nations to carry out quality assurance activities for their own inventories, including a planned system of reviews by personnel not directly involved in the process. Though some United Nations and Framework Convention oversight bodies have the ability to assess the inventory review process, none have done so. The Secretariat has internal auditors, but they have not audited the inventory review process and Secretariat officials said they did not know of any plans to do so. Although the Compliance Committee of the Kyoto Protocol has reviewed aspects of the review process, issuing a report with information on consistency issues, this report was not a systematic review and was not developed by people independent of the review process. As stated earlier, inventories from non-Annex I nations do not undergo formal reviews. The Secretariat compiled a set of reports summarizing inventory information reported by non-Annex I nations, such as inventory estimates, national circumstances, and measures to address climate change. However, Secretariat officials said they had not assessed the consistency of non-Annex I nations’ inventories with accepted guidelines. These officials also said that they did not plan to compile another report covering non-Annex I nations’ second inventories because the Parties have not agreed to this. An expert we interviewed said that the quality of inventories from non-Annex I nations is unknown because their inventories have not been formally reviewed. Two experts said that some non-Annex I nations have resisted increased scrutiny of their inventories because of sovereignty concerns, meaning that nations do not want to disclose potentially sensitive information or data to other political bodies. The growth in greenhouse gas emissions along with lower quality inventories in some non-Annex I nations is likely to increase the pressure for a public review of their inventories in the future. Most experts we interviewed said that the inventory system for Annex I and non-Annex I nations is generally sufficient for monitoring compliance with current agreements. 
However, they said that the system may not be sufficient for monitoring non-Annex I nations’ compliance with future agreements that include commitments for them to reduce emissions. Eleven of the experts we interviewed said the inventory system— inventories and the process for reviewing them—is generally sufficient for monitoring compliance with current agreements, though five raised some concerns. All 11 of the experts who provided their views on the implications of the inventory system expressed confidence that inventories and the Convention’s inventory review process are suitable for monitoring Annex I nations’ compliance with existing commitments to limit emissions. In part, this is because emissions in many Annex I nations primarily relate to energy and industrial activity, which can be more straightforward to estimate and monitor than emissions from land use and agriculture. Nevertheless, five experts raised at least one of two potential challenges facing the current system. First, three said they were cautious until they see how the system performs under the more demanding conditions of submitting and reviewing inventories that will show whether nations have met their binding emission targets under the Kyoto Protocol. When inventories are for years included in the Protocol’s commitment period, nations may be more concerned about meeting emissions targets, and review teams may face pressure to avoid negative findings. Second, three experts said that flexibilities in the current inventory system or difficulties in measuring and verifying emissions from some agriculture and land-use segments could create complications for international emissions trading under the Kyoto Protocol. Emissions trading under the Kyoto Protocol allows nations with emissions lower than their Kyoto targets to sell excess allowances to nations with emissions exceeding their targets. Though Parties to the Kyoto Protocol developed and agreed to the current system, three experts indicated that ensuring greater comparability of estimates between nations and types of emissions might be useful for emissions trading. For non-Annex I nations, eight experts said that their lower quality inventories and lack of review do not present a current problem since these nations do not have international commitments to limit their emissions. Seven of the experts said that the inventory system is sufficient to support international negotiations. To develop agreements, two experts said, negotiators need information on current and historic emissions from the nations involved. Annex I nations submit this information in their annual emissions inventories, the most recent of which cover emissions from 1990 to 2008. Although emissions estimates in most non-Annex I nations’ inventories are outdated, seven experts said that there are enough independent estimates to provide negotiators with adequate information. State officials said that independent estimates are useful, but official national inventories would be preferable because they can lead to more constructive discussions and can help create capacity in nations to better measure emissions. In international negotiations, State has emphasized the need for better information on emissions from all high-emitting nations, including non-Annex I nations. Different types of commitments would place different demands on the inventory system. Thus, the implications of the state of the inventory system for a future agreement will largely depend on the nature of that agreement. 
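The trading complication noted above rests on simple surplus arithmetic: a nation can sell allowances only to the extent that its reported emissions fall below its target, so any uncertainty in the inventory carries over into uncertainty about whether a surplus exists at all. The Python sketch below uses hypothetical figures; the target, reported total, and uncertainty are assumptions for illustration, not any nation's actual numbers.

    # Hypothetical illustration of how inventory uncertainty affects the
    # surplus that could be traded under an absolute emissions target.
    target_mmt = 500.0                # assumed target, million metric tons CO2 equivalent
    reported_mmt = 470.0              # assumed reported inventory total
    uncertainty_fraction = 0.10       # assumed +/- 10 percent inventory uncertainty

    nominal_surplus_mmt = target_mmt - reported_mmt              # 30.0 on paper
    low_estimate = reported_mmt * (1 - uncertainty_fraction)     # 423.0
    high_estimate = reported_mmt * (1 + uncertainty_fraction)    # 517.0

    # At the high end of the uncertainty range the nation would actually
    # exceed its target, so the nominal 30-unit surplus may not exist.
    print(nominal_surplus_mmt, low_estimate, high_estimate)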
For Annex I nations, eight experts said that future commitments were likely to resemble current commitments and therefore the inventory system is likely to be sufficient. However, for non-Annex I nations, if future agreements include commitments to limit emissions, the current system is not sufficient for monitoring their compliance, according to nine experts. This is because non-Annex I nations do not submit inventories frequently, the quality of their inventories varies, and they do not undergo an independent technical review. Additional reporting and review could pose challenges since it could take time for non-Annex I nations to improve their inventories and Secretariat officials said that adding non-Annex I nations to the current inventory review process could strain the capacity of that system. Some types of commitments by non-Annex I nations could be especially difficult to monitor and verify, according to experts. In the nonbinding 2009 Copenhagen Accord, many nations submitted the actions they intended to take to limit their greenhouse gas emissions, with Annex I nations committing to emissions targets for 2020 and non-Annex I nations announcing various actions to reduce emissions. Experts identified several challenges with monitoring the implementation of some of the actions proposed by non-Annex I nations (see table 3). For example, two experts said that monitoring emissions reductions from estimates of future business-as-usual emissions may prove challenging. They said this is because such actions may require Parties to estimate reductions from a highly uncertain projection of emissions that would have otherwise occurred. Parties would also have to develop and agree on guidelines to estimate and review business-as-usual emissions in addition to actual emissions. Similarly, monitoring reductions in the intensity of greenhouse gas emissions—emissions per unit of economic output, or gross domestic product—could pose challenges because of uncertainties in estimates of gross domestic product. One expert said that these challenges arise because the Parties to the Convention created the current inventory system to monitor compliance and evaluate progress among Annex I nations with national targets. This expert added that Parties to a new agreement may need to supplement the system to support the types of actions under consideration by non-Annex I nations. Eight of the experts we interviewed said that Parties to a future agreement could overcome or mitigate many of the challenges related to inventories. For example, two experts said that Parties could design agreements that rely less on emissions estimates that are inherently uncertain or difficult to verify. For example, quantitative targets could apply only to sectors or gases that are relatively easy to measure and verify, such as carbon dioxide emissions from the burning of fossil fuels. Three experts said that barriers other than the inventory system pose greater challenges to designing and reaching agreements on climate change. For example, nations disagree on the appropriate emissions limits for developed and developing nations. According to three experts, such disagreements were more of an obstacle to a comprehensive agreement in the latest round of negotiations in Copenhagen than were inventory issues. In addition, one expert pointed out that Parties to international agreements generally have limited ability to get other Parties to comply. 
For example, at least one nation with a binding emissions target under the Kyoto Protocol is unlikely to meet its target based on current inventory estimates and policies, according to this expert. Nations may be reluctant to agree to an international agreement until they have some assurance that other nations will follow through on their commitments. High quality and comparable information on national greenhouse gas emissions is critical to designing and implementing international responses to climate change. The nations we reviewed meet their inventory reporting obligations, and review reports indicate this has resulted in generally high quality inventories from the seven highest emitting Annex I nations. However, the current inventory system does not request high quality emissions information from non-Annex I nations, which account for the largest and fastest growing share of global emissions. We found that the inventories from seven selected high emitting non-Annex I nations were generally outdated, not comparable, and of lower quality than inventories from Annex I nations. The existing gap in quality and comparability of inventories across developed and developing nations makes it more difficult to establish and monitor international agreements, since actions by both developed and developing nations will be necessary to address climate change under future international agreements. As a recent National Research Council committee study pointed out, extending regular reporting and review to more nations may require external funding and training, but the resources needed for the largest emitting developing nations to produce better inventories are relatively modest. While our work suggests that the current inventory review process has notable strengths, we identified limitations that may present challenges in the future. For example, some experts and nations have reported concerns that reviews may be inconsistent and that resources may not be sufficient in the future. Stresses on the review process are likely to increase as review teams begin to review inventories that cover years in which some nations have binding emissions targets and if inventories from non-Annex I nations are subjected to inventory review under a future agreement. The Convention Secretariat has internal processes in place to help ensure quality reviews, but no systematic independent review to assess the merits of concerns about the consistency of reviews or to assess the need for additional qualified reviewers in the future. Addressing these issues could benefit the Secretariat by further enhancing confidence in its processes and ensuring that it has the resources necessary to maintain high quality reviews. We are making two recommendations to the Secretary of State: 1. Recognizing the importance of high quality and comparable data on emissions from Annex I and non-Annex I Parties to the Convention in developing and monitoring international climate change agreements, we recommend that the Secretary of State continue to work with other Parties to the Convention in international negotiations to encourage non-Annex I Parties, especially high-emitting nations, to enhance their inventories, including by reporting in a more timely, comprehensive, and comparable manner, and possibly establishing a process for reviewing their inventories. 2. To provide greater assurance that the review process has an adequate supply of reviewers and provides consistent reviews, we recommend that the Secretary of State, as the U.S. 
representative to the Framework Convention, work with other Parties to the Convention to explore strengthening the quality assurance framework for the inventory review process. A stronger framework could include, for example, having an independent reviewer periodically assess the consistency of inventory reviews and whether the Secretariat has sufficient resources and inventory reviewers to maintain its ability to perform high quality inventory reviews. We provided State, the Convention Secretariat, and EPA with a draft of this report for review and comment. State agreed with our findings and recommendations and said that the department has been working with international partners in negotiations and through bilateral and multilateral partnerships to support and promote improved inventory reporting and review. State’s comments are reproduced in appendix III. The Convention Secretariat provided informal comments and said that it appreciated our findings and conclusions. The Secretariat said that the report provided a comprehensive overview of the existing system for reporting and reviewing inventories under the Convention and the Kyoto Protocol, as well as very useful recommendations on how this system could evolve in the future and steps to be taken to that end. The Secretariat noted our acknowledgement of the strengths of the inventory review process for Annex I nations. In addition, the Secretariat commented on our discussion of the limited availability of statistics against which to compare inventory data, saying that this lack of data does not imply that its review process lacks independent verification and that its review teams rely on available statistics in conducting their reviews. The Secretariat also said that the disparities in inventory quality across Annex I and non-Annex I nations should be viewed in the context of the “common but differentiated responsibilities” of developed and developing nations under the Convention. In addition, EPA and the Convention Secretariat provided technical comments and clarifications, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretary of State, Administrator of EPA, Executive Secretary of the Convention Secretariat, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our review provides information on: (1) the comparability, quality, and barriers to improving inventories submitted by developed and developing nations to the United Nations Framework Convention on Climate Change (the Convention); (2) the strengths and limitations of the Convention’s inventory review process; and (3) the views of experts on the implications for agreements to reduce greenhouse gas emissions. 
To address all of these objectives, we reviewed relevant literature and Convention documents; met with officials from the Environmental Protection Agency (EPA), Department of State (State), the Convention Secretariat, and others to understand inventories, the inventory review process, and international negotiations; and summarized the views of experts on these issues. Specifically, to address the first objective, we selected a nonprobability sample of 14 nations: seven Annex I nations—Australia, Canada, Germany, Japan, Russia, the United Kingdom, and the United States—and seven non-Annex I nations—Brazil, China, India, Indonesia, Malaysia, Mexico, and South Korea—based on the size of their emissions (including emissions from land-use and land-use change and forestry). We selected the largest emitting Annex I nations. For non-Annex I nations, we selected the largest emitting nations that had submitted inventories, based on data available at the time. We omitted Myanmar because it did not submit an inventory to the Convention. We also ensured coverage of major variations in the selected nations' income levels and the sectoral structure of their economies. To illustrate this variation, we used the World Bank's data on per capita income levels, and data from the World Resources Institute and Convention Secretariat on emissions from the energy and industrial processes sectors. The selected 14 nations represented about two-thirds of the world's greenhouse gas emissions not related to land use and forestry in 2005. Our findings are not generalizable to other nations because the selected nations are not necessarily representative. To assess the comparability and quality of inventories from Annex I nations, we summarized the results of the Convention's 2009 reviews of inventories from selected Annex I nations, the most recent reviews available. We did not independently assess the validity of data, assumptions, or methodologies underlying the inventories we reviewed. Though we identified some limitations with the inventory review process, we believe that reviews provide reasonable assessments of the comparability and quality of inventories from selected Annex I nations. For non-Annex I nations, we assessed whether the latest inventories from selected nations included estimates for all major greenhouse gases (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, sulfur hexafluoride, and perfluorocarbons), for all sectors (energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste) and various years, and checked for inclusion of key inventory characteristics, including descriptions of uncertainty and quality assurance and quality control measures, adequate documentation to support estimates, a comparable format, and analysis to identify emissions from key sources. Though inventory guidelines do not call for all of these from non-Annex I nations, we believe they are indicative of the quality and comparability of inventories. We did not independently assess emissions estimates from non-Annex I nations. We used the quality principles agreed to by Parties for Annex I nations—transparency, consistency, comparability, completeness, and accuracy—as the basis of our review of all inventories and in our discussions with experts. We also provide information on the reported uncertainty of emissions estimates, a more objective indicator of their precision, and on the timeliness of inventory submissions. 
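One way to picture the assessment just described is as a simple completeness checklist applied to each non-Annex I inventory. The Python sketch below is only a schematic of that logic with a made-up inventory record; the data structure and field names are assumptions for illustration and do not reproduce the actual assessment instrument.

    # Schematic of the completeness checks described above; the record
    # below is hypothetical and the field names are illustrative only.
    MAJOR_GASES = {"CO2", "CH4", "N2O", "HFCs", "PFCs", "SF6"}
    KEY_ELEMENTS = {"uncertainty analysis", "QA/QC description",
                    "methodology documentation", "comparable format",
                    "key category analysis"}

    example_inventory = {                       # hypothetical non-Annex I submission
        "gases_reported": {"CO2", "CH4", "N2O"},
        "elements_present": {"uncertainty analysis"},
    }

    missing_gases = MAJOR_GASES - example_inventory["gases_reported"]
    missing_elements = KEY_ELEMENTS - example_inventory["elements_present"]
    print(sorted(missing_gases))       # ['HFCs', 'PFCs', 'SF6']
    print(sorted(missing_elements))    # the elements this hypothetical inventory lacks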
To identify barriers to improving inventories, we reviewed relevant literature, including national communications from the seven selected non-Annex I nations, and summarized the views of our expert group. To address the second objective, we summarized the results of semi-structured interviews with experts and Secretariat officials. We reviewed Convention documentation about the inventory review process, including Compliance Committee and Subsidiary Body for Implementation reports. To address all three objectives, we summarized findings in the literature and the results of semi-structured interviews with experts. First, we identified 285 experts from our review of the literature and recommendations from U.S. and international government officials and researchers. From this list, we selected 15 experts based on (1) the relevance and extent of their publications, (2) recommendations from others in the inventory field, and (3) the extent to which experts served in the Consultative Group of Experts (a group assembled by the Convention to assist non-Annex I nations in improving their national communications), as lead reviewers in the Convention's inventory review process, or were members of the National Research Council's committee on verifying greenhouse gas emissions. Finally, to ensure coverage and range of perspectives, we selected experts who had information about key sectors, like the agriculture and energy sectors, came from both Annex I and non-Annex I nations and key institutions, and provided perspectives from both those who were involved in the inventory review process and from those not directly involved in preparing or reviewing inventories. Appendix II lists the experts we interviewed, who included agency and international officials, researchers, and members of inventory review teams. We conducted a content analysis to assess experts' responses and grouped responses into overall themes. The views expressed by experts do not necessarily represent the views of GAO. Not all of the experts provided their views on all issues. We identify the number of experts providing views where relevant. During the course of our review, we interviewed officials, researchers, and members of inventory review teams from State, EPA, and the Department of Energy in Washington, D.C.; the Convention Secretariat's office in Bonn, Germany; and from various think tanks, nongovernmental organizations, and international organizations. We conducted this performance audit from September 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michael Hix (Assistant Director), Russell Burnett, Colleen Candrl, Kendall Childers, Quindi Franco, Cindy Gilbert, Jack Hufnagle, Michael Kendix, Thomas Melito, Kim Raheb, Ben Shouse, Jeanette Soares, Kiki Theodoropoulos, Rajneesh Verma, and Loren Yager made key contributions to this report. Climate Change: Observations on Options for Selling Emissions Allowances in a Cap-and-Trade Program. GAO-10-377. Washington, D.C.: February 24, 2010. Climate Change Policy: Preliminary Observations on Options for Distributing Emissions Allowances and Revenue under a Cap-and-Trade Program. GAO-09-950T. 
Washington, D.C.: August 4, 2009. Climate Change Trade Measures: Estimating Industry Effects. GAO-09-875T. Washington, D.C.: July 8, 2009. Climate Change Trade Measures: Considerations for U.S. Policy Makers. GAO-09-724R. Washington, D.C.: July 8, 2009. Climate Change: Observations on the Potential Role of Carbon Offsets in Climate Change Legislation. GAO-09-456T. Washington, D.C.: March 5, 2009. Climate Change Science: High Quality Greenhouse Gas Emissions Data are a Cornerstone of Programs to Address Climate Change. GAO-09-423T. Washington, D.C.: February 24, 2009. International Climate Change Programs: Lessons Learned from the European Union's Emissions Trading Scheme and the Kyoto Protocol's Clean Development Mechanism. GAO-09-151. Washington, D.C.: November 18, 2008. Carbon Offsets: The U.S. Voluntary Market is Growing, but Quality Assurance Poses Challenges for Market Participants. GAO-08-1048. Washington, D.C.: August 29, 2008. Climate Change: Expert Opinion on the Economics of Policy Options to Address Climate Change. GAO-08-605. Washington, D.C.: May 9, 2008. International Energy: International Forums Contribute to Energy Cooperation within Constraints. GAO-07-170. Washington, D.C.: December 19, 2006. Climate Change: Selected Nations' Reports on Greenhouse Gas Emissions Varied in Their Adherence to Standards. GAO-04-98. Washington, D.C.: December 23, 2003. Climate Change: Information on Three Air Pollutants' Climate Effects and Emissions Trends. GAO-03-25. Washington, D.C.: April 28, 2003. International Environment: Expert's Observations on Enhancing Compliance With a Climate Change Agreement. GAO/RCED-99-248. Washington, D.C.: August 23, 1999. International Environment: Literature on the Effectiveness of International Environmental Agreements. GAO/RCED-99-148. Washington, D.C.: May 1999. Global Warming: Difficulties Assessing Countries' Progress Stabilizing Emissions of Greenhouse Gases. GAO/RCED-96-188. Washington, D.C.: September 4, 1996.
Nations that are Parties to the United Nations Framework Convention on Climate Change periodically submit inventories estimating their greenhouse gas emissions. The Convention Secretariat runs a review process to evaluate inventories from 41 "Annex I" nations, which are mostly economically developed nations. The 153 "non-Annex I" nations are generally less economically developed and have less stringent inventory reporting guidelines. The Department of State (State) represents the United States in international climate change negotiations. GAO was asked to report on (1) what is known about the comparability and quality of inventories and barriers, if any, to improvement; (2) what is known about the strengths and limits of the inventory review process; and (3) views of experts on implications for current and future international agreements to reduce emissions. GAO analyzed inventory reviews and inventories from the seven highest-emitting Annex I nations and seven of the highest emitting non-Annex I nations. GAO also selected and interviewed experts. Recent reviews by expert teams convened by the Secretariat found that the 2009 inventories from the selected Annex I nations--Australia, Canada, Germany, Japan, Russia, the United Kingdom, and the United States--were generally comparable and of high quality. For selected non-Annex I nations--Brazil, China, India, Indonesia, Malaysia, Mexico, and South Korea--GAO found most inventories were dated and of lower comparability and quality. Experts GAO interviewed said data availability, scientific uncertainties, limited incentives, and different guidelines for non-Annex I nations were barriers to improving their inventories. The lack of comparable, high quality inventories from non-Annex I nations is important because they are the largest and fastest growing source of emissions, and information about their emissions is important to efforts to address climate change. There are no inventory reviews for non-Annex I nations. Experts said the inventory review process has notable strengths for Annex I nations as well as some limitations. The review process, which aims to ensure nations have accurate information on inventories, is rigorous, involves well-qualified reviewers, and provides feedback to improve inventories, according to experts. Among the limitations experts identified is a lack of independent verification of estimates due to the limited availability of independent statistics against which to compare inventories' data. Also, GAO found that the review process's quality assurance framework does not independently assess concerns about a limited supply of reviewers and inconsistent reviews, which could pose challenges in the future. Experts said Annex I nations' inventories and the inventory review process are generally sufficient for monitoring compliance with current agreements to reduce emissions. For non-Annex I nations, however, experts said the current system may be insufficient for monitoring compliance with future agreements, which may require more reporting. As part of ongoing negotiations to develop a new climate change agreement, State has emphasized the need for better information on emissions from high-emitting non-Annex I nations. While improving the inventory system is important to negotiations, some experts said disagreements about emissions limits for developed and developing nations pose a greater challenge. 
GAO recommends that the Secretary of State work with other Parties to the Convention to (1) continue encouraging non-Annex I Parties to improve their inventories and (2) strengthen the inventory review process's quality assurance framework. State agreed with GAO's findings and recommendations.
You are an expert at summarizing long articles. Proceed to summarize the following text: The size of the health care sector and sheer volume of money involved make it an attractive target for fraud. Expected to total over $1 trillion in fiscal year 1995, health care spending will consume almost 15 percent of the gross national product, an increase from just over 12 percent in 1990. The amount of fraud within the health care system is, by its nature, impossible to accurately determine. We have previously reported that 10 percent of all health care expenditures may be lost to fraud and abuse.Similarly, industry estimates have placed the annual losses due solely to fraud at somewhere between 3 and 10 percent of all health care expenditures (between $30 billion and $100 billion based on estimated fiscal 1995 expenditures). By whatever estimate, this represents a significant monetary drain on our health care system. Health care fraud can take many forms, reach all facets of the industry, and be perpetrated by persons both within and outside the health care industry. A recent report by the Federal Bureau of Investigation (FBI), for example, notes that vulnerabilities to fraud exist throughout the entire health care system, and patterns of fraud are so pervasive that systemic criminal activity is accepted as a “way of doing business” in many segments of the health care industry. A 1995 Department of Justice (DOJ) report on health care fraud states that fraud is being perpetrated not only by individual physicians, but also by public corporations, medical equipment dealers, laboratories, hospitals, nursing homes, and individuals who provide no health care at all but prey upon the system with fraudulent scams. As the DOJ report goes on to note, everyone pays the price for health care fraud, as reflected by higher insurance premiums, increased costs for medical services and equipment, and greater expenditures for Medicare and other public health care programs. Successful health care fraud prosecutions illustrate the types of fraudulent activities taking place. Ranging from simple schemes to complex conspiracies, some frauds have even put lives at risk. As described in the 1995 DOJ report, examples of fraudulent activities that have been federally prosecuted include the following: • An optometrist defrauded Medicare and private insurance companies of over $1.5 million simply by billing for services that were unnecessary or not rendered. • A husband and wife set up a fraudulent network of offshore corporations and entities, which they used to defraud a private insurance company as well as employer insurance networks in several states. This scheme left policyholders with approximately $6 million in unpaid medical and reinsurance claims. • A medical supplier fraudulently submitted false statements to the Food and Drug Administration about the efficacy of heart catheters. Three persons died, and 22 others required emergency heart bypass surgery when these devices were distributed to hospitals and physicians. We have previously reported that several serious fraud problems are facing public and private payers. First, large financial losses to the health care system can occur as a result of even a single scheme. Second, fraudulent providers can bill insurers with relative ease. Third, efforts to prosecute and recover losses from those involved in the schemes are costly; even convictions often do not result in the recovery of losses. 
Finally, fraudulent schemes can be quickly replicated throughout the health care system. Moreover, as discussed below, a multiplicity of health care payers—each with its own operating policies and subject to various enforcement agencies—further complicates health care fraud enforcement efforts. Of the estimated $884 billion spent on health care in 1993, about 44 percent was paid with public sector dollars, and 56 percent was paid with private sector dollars. Federal health insurance programs—such as Medicare, Medicaid, the Veterans Health Administration, the Federal Employees’ Health Benefit Program, and the Civilian Health and Medical Program of the Uniformed Services—collectively accounted for almost three-quarters of all public health care expenditures. Private health insurance—which includes the various Blue Cross/Blue Shield plans, a host of other private health insurance companies, and many employers who self-insure—accounted for about 60 percent of all private sector expenditures. Public and private payers in the current health care system number over 1,000. Generally, each payer has its own system of processing health care claims and reimbursing providers. Payers may have different rules, reimbursement policies, claim forms, multiple identification numbers, coding systems, and billing procedures. When combined with the sheer size of the health care industry—an estimated 4 billion health claims are processed annually—this complex system of payers presents considerable challenges for those organizations responsible for detecting and pursuing health care fraud. Within this complex system, various federal enforcement agencies have responsibility for investigating health care fraud. For example, the Department of Health and Human Services (HHS) Office of Inspector General has primary responsibility for the Medicare and Medicaid programs, the Department of Veterans Affairs Office of Inspector General has primary responsibility for the Veterans Health Administration, the Department of Defense Criminal Investigative Service has primary responsibility for the Civilian Health and Medical Program of the Uniformed Services, and the Office of Personnel Management Office of Inspector General has primary responsibility for the Federal Employees’ Health Benefits Program. The FBI and the U.S. Postal Inspection Service, under existing federal criminal statutes, have broader authority to investigate fraud in any public or private program. Other agencies that are involved in health care fraud enforcement include the Drug Enforcement Administration, Internal Revenue Service, and Department of Labor. In addition to federal enforcement agencies, states also have health care fraud enforcement responsibility. Regarding Medicaid, for example, while HHS is responsible for oversight of the program, the agency has largely delegated primary investigative enforcement responsibility to state Medicaid Fraud Control Units, which are predominately funded through federal grants. Regarding private insurance, some states have established insurance fraud bureaus that investigate health care fraud; in other states, the department of insurance has a fraud unit that investigates fraud. While these state agencies can often pursue administrative and civil penalties for health care fraud, most criminal enforcement authority is in the hands of local prosecutors and attorneys general. The private sector is also active in health care fraud enforcement. 
Private insurers have established active anti-fraud programs and special investigative units that work with a wide range of public law enforcement agencies to investigate fraud. These units may report fraud cases to federal or state agencies with health care fraud enforcement responsibility. In addition, a group of private sector health insurers and public sector law enforcement agencies has established the National Health Care Anti-Fraud Association (NHCAA), which represents a cooperative effort to prevent health care fraud and improve capabilities to detect, investigate, and prosecute such fraud. The NHCAA conducts anti-fraud education seminars, provides a forum for members to share information on fraudulent schemes, and assists law enforcement in the investigation and prosecution of health care fraud. As shown in figure 1.1, while public sector health care fraud is primarily the responsibility of federal enforcement and program agencies, private sector fraud can be pursued by both state and federal enforcement agencies. In some states, private insurers are required by law to report suspected fraud to state enforcement agencies, while reports to federal enforcement agencies are optional at the discretion of the insurer. In addition to the reporting of suspected fraud from insurers to federal and state authorities, fraud-related information can also be shared between federal and state authorities. In commenting on a draft of this report, DOJ officials told us that the Department has placed increasing emphasis on working with the Medicaid Fraud Control Units and NHCAA. “In the 1970s, we found that we were largely dealing with individual providers who were involved in relatively uncomplicated schemes, such as filing false claims which resulted in a few thousand dollars of damage to the Medicare program. Today, it is more common to see cases involving groups of people who defraud the Government. Some of the schemes are relatively complex, often involving the use of sophisticated computer techniques, complicated business arrangements, and multiple locations across state lines. These crimes can cause losses in the tens of millions of dollars to Medicare and Medicaid, as well as to other public and private health insurance programs.” Along with the increasing complexity of health care fraud, law enforcement and regulatory agencies and insurers have recognized the importance of coordinating their enforcement efforts and exploring methods for sharing health care fraud information. A health care reform bill introduced before the U.S. Senate in November of 1993 identified a “national need” to coordinate health care fraud-related information and went on to state that control of fraud and abuse in health care services warrants greater efforts of coordination than those that can be undertaken by individual states or the various federal, state, and local law enforcement programs. A coordinated enforcement effort has to involve not only public law enforcement agencies, but also the private sector. Given their position of having daily interaction with health providers, private insurers often possess more information about a provider’s activities than state and federal agencies. This position, coupled with their own incentive to reduce costs, has made insurers another source of information for government investigators and prosecutors. 
Because many insurers have established special investigative units that pursue fraud, these private sector resources can be used to leverage existing public sector investigative resources. The insurance industry has developed sophisticated methods for detecting fraud, and development of contacts with the industry can provide a valuable source of fraud case referrals to the federal government. For example, the FBI recently reported that on the basis of fraud referrals by several private insurance company investigative units, government investigators were able to identify a medical billing company that defrauded insurance companies across the country of $1.5 million. Moreover, because fraudulent schemes often target public and private programs simultaneously, an active anti-fraud enforcement effort involving private insurers may lead to the discovery of additional fraud involving public sector health programs. In recent years, there have been various proposals designed to enhance information sharing among federal, state, and private entities involved in health care fraud enforcement. Some proposals, for example, have called for federal immunity legislation to provide protection for persons who report suspected fraud. The purpose of such immunity laws is to encourage insurers and private individuals to report suspected fraud to law enforcement agencies by protecting the individuals from subsequent civil actions. Other proposals have called for establishment of a national, centralized database of health care fraud-related information. The purpose of a centralized database would be to provide public and private sector fraud investigators with easy access to information about health care fraud activity nationwide and to enhance coordination of investigative efforts among insurers and law enforcement agencies. In an October 6, 1993, letter, the former Chairman of the House Subcommittee on Information, Justice, Transportation, and Agriculture, Committee on Government Operations, asked us to broadly examine health care fraud enforcement issues. On the basis of this request and subsequent discussions with the new Subcommittee's Ranking Minority Member, we agreed to focus our work on questions about information-sharing issues that may affect health care fraud enforcement efforts, specifically: • What is the extent of federal and state immunity laws protecting persons who report health care fraud-related information (see ch. 2)? • What evidence exists for and against establishing a centralized health care fraud database to enhance information sharing and support enforcement efforts (see ch. 3)? To address these two questions, we first reviewed relevant literature to obtain a broad understanding of the importance of information sharing in anti-fraud enforcement efforts. This literature included reports issued by government and private sector organizations—such as DOJ, HHS, and NHCAA—that are responsible for managing and/or overseeing health care anti-fraud activities. We also reviewed the provisions of relevant proposals presented in recent years by administration and congressional sponsors that would enhance information sharing among the various federal, state, and private entities responsible for health care fraud enforcement. These included a 1992 proposal by the Bush administration, as well as proposals introduced in House and Senate bills during 1993 and 1995, respectively.
To obtain a broad understanding of both the immunity and the centralized database issues, we contacted key governmental and private organizations that could provide nationwide perspectives. Federal government contacts included officials at DOJ, FBI, the U.S. Postal Inspection Service, and the Executive Office for U.S. Attorneys, as well as Office of Inspector General officials at the principal agencies responsible for managing major federal health care programs—HHS, the Office of Personnel Management, and the Department of Defense. Private sector contacts—representing industry, professional, and special interest organizations—included the American Medical Association, the Health Insurance Association of America, the National Insurance Crime Bureau, and NHCAA. In meetings with knowledgeable officials at several of these governmental and private organizations, we also obtained perspectives on S. 1088 (the Health Care Fraud and Abuse Prevention Act of 1995), which contains immunity and centralized database provisions and was pending further congressional consideration at the time of our review. To obtain additional perspectives on the immunity and centralized database issues, we contacted relevant public and private sector officials in four states—Florida, Maryland, Massachusetts, and Texas (see app. I). In judgmentally choosing these four states, we considered various factors, including (1) the status or scope of the state's immunity law, (2) the extent of anti-fraud activities undertaken by applicable enforcement agencies, and (3) selection suggestions made to us by law enforcement officials and insurance industry organizations. To the extent practical with just four states, we wanted the selections to reflect a range of immunity and/or anti-fraud enforcement environments. For example, at the time we made the selections: • Maryland had no state immunity law protecting persons who reported suspected health care fraud, while Massachusetts' immunity law protected only reports made to the state fraud bureau. Florida and Texas both had broader immunity laws that protected disclosures made to both federal and state agencies. • Florida and Texas both had state fraud units within the Department of Insurance; Maryland and Massachusetts had independent or stand-alone fraud bureaus. Further, an FBI official told us that FBI field offices in these four states were among the most active in private sector health care fraud investigations. In visiting each state, we met with officials from various federal and state prosecutive, investigative, and regulatory agencies—U.S. Attorneys offices, FBI and U.S. Postal Inspection Service field offices, state departments of insurance and fraud bureaus, and state attorneys general. We also met with general counsel and special investigative unit officials at selected private insurance companies in these four states. As appendix I shows, our total number of contacts in these 4 states included 12 federal offices, 6 state offices, and 8 private insurance companies. To obtain additional insurance company perspectives, we also visited five national insurers at their headquarters located in Connecticut and Illinois. Thus, in total, we visited 13 private insurance companies. At each organization visited, we interviewed those officials responsible for anti-fraud enforcement efforts.
Regarding the immunity issue, we obtained information and views about (1) how fraud-related information is being shared between investigative and prosecutive entities, (2) what impact immunity laws (or the absence of such laws) have had on the willingness of individuals to report suspected fraud, and (3) whether a federal immunity law is needed to enhance information sharing. Regarding the centralized database issue, we obtained information about (1) how computerized databases are being used in investigating fraud; (2) whether a national, centralized database is needed to enhance enforcement efforts; and (3) what factors should be considered in establishing such a database. Our direct observations about health care fraud enforcement issues are limited to the locations visited and may not reflect circumstances or conditions in other locations. To obtain information on the immunity and centralized database issues from state officials responsible for insurance regulation, we mailed a questionnaire to all 50 state insurance commissioners (or to an equivalent state insurance regulatory official). We developed and pretested the questionnaire with input from officials with the National Association of Insurance Commissioners and an official with the survey population. We mailed the questionnaire in July 1995, and we received responses from all 50 states between July and November 1995. A copy of the questionnaire, with a tabulation of responses to each applicable question, is presented in appendix III. Although we surveyed and received responses from the population of state insurance commissioners, the practical difficulties of conducting any survey may, nonetheless, introduce unintended nonsampling errors. For example, variations in the wording of questions or the sources of information available to the respondents can introduce variability into the survey results. However, as noted above, in order to minimize these errors, we pretested our survey. Also, all survey data were verified during data entry and all computer analyses were reviewed by a second independent analyst. We did our work from September 1994 through December 1995 in accordance with generally accepted government auditing standards. NHCAA and the American Medical Association provided written comments on a draft of this report. These comments are included in appendixes V and VI and are summarized and evaluated at the end of chapters 2 and 3. DOJ provided technical and clarifying comments, which we incorporated where appropriate in this report. The purpose of immunity statutes is to encourage insurers and private individuals to report suspected fraud by protecting them from civil claims subsequently arising from insurance fraud investigations. We identified no federal law that protects persons providing health care fraud-related information to law enforcement agencies. However, there are some related immunity provisions on the federal level. Regarding Medicare and Medicaid, for example, current federal statutory law providing immunity from liability is limited to persons reporting information to peer review contractors about Medicare and Medicaid health care services. This immunity protection is further limited in that it does not apply to persons reporting fraud-related information to federal authorities, such as HHS, the FBI, and the U.S. Postal Inspection Service. It also does not apply to persons who report suspected health care fraud involving private sector insurance, even if the suspected fraud is reported to a federal agency. 
While most states have enacted immunity laws that protect persons who report suspected health care fraud more broadly than current federal law, the laws vary considerably. For example, some state laws protect sharing of suspected fraud information with any federal or state law enforcement authority, whereas some states protect information sharing only with certain state authorities. In recent years, various health care anti-fraud proposals (some included in health care reform bills) have been introduced by the administration and Congress to, among other matters, provide a broader federal immunity statute. The health care reform bills were not enacted, however, and at the time of our review, one health care anti-fraud bill (S. 1088)—which was awaiting congressional consideration—would provide immunity protection more broadly than current federal law. The health insurance and medical industry associations we contacted supported the concept of a federal immunity law. Additionally, nearly two-thirds of the federal and state government fraud investigators and prosecutors and 12 of 13 insurance company representatives we interviewed supported a federal immunity law. In fact, many of the individuals we spoke with thought that federal immunity protection should be broader than the immunity proposed under S. 1088. Broadly viewed, public policy supports both encouraging private entities to participate in the investigation and prosecution of fraud and providing protection to innocent people against unsubstantiated allegations made in bad faith or with malice. That is, given the public interest in crime prevention, reasonable private participation in the investigation and prosecution of crime is a desirable objective. Immunity statutes are one way of encouraging this objective. On the other hand, concerns have been raised about the need to incorporate safeguards to provide individuals with protection against bad faith allegations. Safeguards that have been considered include requirements governing the specificity and credibility of reported information and provisions giving individuals legal recourse against bad faith allegations that could seriously damage an individual’s life and livelihood if publicly disclosed. Immunity statutes represent part of the general public policy to encourage private involvement in the prosecution of crime by protecting persons against civil claims subsequently arising from insurance fraud investigations. For instance, the reporting of an individual suspected of fraud may result in the named, suspected party filing a civil suit against the reporter claiming defamation of character. Immunity statutes typically include limiting language, such as “in the absence of malice or bad faith,” which allows an individual claiming defamation the opportunity to show that the reporting party intended to harm the individual. However, where there is an applicable immunity statute and the individual claiming defamation is not able to show malice or bad faith, the reporting party is provided protection from these types of lawsuits. It is important to note that a number of civil claims can be raised against insurers stemming from fraud investigations and, in most cases, there is no way to prevent an aggrieved party from filing a civil lawsuit against an individual who reports suspected fraud. An immunity statute does, however, make it more difficult for a claimant to prevail. 
For example, in a 1992 civil action in Ohio, an individual brought an action for damages against a health insurer for reporting suspected fraud to the state insurance department. The focus of the case was whether the plaintiff could recover from the insurer for defamation. The federal district court held that the insurer faced no civil liability for reporting suspected fraud because Ohio law provides immunity to persons who furnish information—in good faith and without malice or fraud—to the Ohio Department of Insurance. Since the court found nothing in the record suggesting bad faith, the insurer prevailed in its motion for summary judgment. The Medicare/Medicaid peer review statute, 42 U.S.C. 1320c-6(a), provides: "Notwithstanding any other provision of law, no person providing information to any [peer review] organization having a contract with the Secretary [of HHS] under this part shall be held, by reason of having provided such information, to have violated any criminal law, or to be civilly liable under any law of the United States or of any State (or political subdivision thereof) unless—(1) such information is unrelated to the performance of the contract of such organization; or (2) such information is false and the person providing it knew, or had reason to believe, that such information was false." Our review did not identify any similar federal statutory immunity provisions applicable to other government health care programs, such as the Federal Employees' Health Benefits Program (managed by the Office of Personnel Management) and the Civilian Health and Medical Program of the Uniformed Services. As indicated, the Medicare/Medicaid statute specifically protects only information disclosures made to a peer review organization under contract with the Secretary of HHS. Under federal law, the Secretary enters into contracts with peer review organizations for the purpose of promoting the effective, efficient, and economical delivery of quality health care services under Medicare. Composed primarily of health care practitioners from within a geographical area, these organizations perform quality assurance and utilization reviews of health care providers seeking reimbursement for their Medicare services. If a peer review organization determines that a practitioner or provider has persisted in violating his obligation to provide services that (1) are medically necessary, (2) meet professionally recognized standards of care, and (3) are cost-effective, the reviewer may recommend that the practitioner or provider be excluded from the Medicare program. In addition, states can also choose to use peer review organizations to review care received by Medicaid patients. "We believe that Medicare contractors who are carrying out official functions related to administration of the Medicare program, particularly those who are engaged in efforts to detect, prevent, or prosecute program fraud and abuse, should be entitled to protections similar to those enjoyed by Federal employees engaged in those activities.
For that reason, the Department of Health and Human Services will ordinarily request, and the Department of Justice will ordinarily agree, that the Department of Justice will defend, at its own expense, any Medicare contractor or employee of a contractor, who is sued in connection with activities undertaken within the scope of the Medicare contract.” Further, the federal statutory immunity provision does not protect persons who report suspected fraud involving private sector insurance plans, even if the suspected fraud is reported to a federal agency such as the FBI or the U.S. Postal Inspection Service. Existing state immunity laws would provide some statutory immunity protection under these circumstances. Many states have enacted immunity laws to protect individuals who report suspected insurance fraud—including health care fraud. The responses to our survey of the 50 state insurance commissioners indicate that state immunity laws vary considerably in terms of protection provided to private insurers who disclose fraud-related information. As appendix II shows, at the time of our survey, 38 states had enacted immunity laws protecting the sharing of health care fraud-related information, while 12 states had no immunity laws applicable to health care fraud. In the 38 states with immunity laws, typically only specific reporting channels are covered. For example, 24 states provide immunity to insurers for sharing fraud-related information with state and federal law enforcement authorities, as well as with the state insurance commissioner. However, 10 states provide immunity for sharing fraud-related information only with the state insurance commissioner. Eight states provide immunity to insurers for sharing fraud-related information with other insurers. All 38 states with immunity laws place certain qualifications on the provision of immunity. In each of these states, for instance, the immunity is contingent on the absence of “bad faith” or “malice.” However, the meaning of these qualifications and the ultimate protection provided under the law are subject to interpretation by individual court systems. Much like existing federal immunity law, state immunity laws are also limited in the level of protection they provide. Due to variances in state laws, the immunity protection provided can be different from state to state. These differences can present concerns for private insurers that operate nationally, especially if a suspected fraudulent scheme involves more than one state. An insurer investigating a multistate fraud scheme, for example, may have concerns about which state’s immunity law applies to the sharing of case information. Also, as discussed above, 12 states have no immunity laws applicable to health care fraud. Anyone reporting suspected private insurer fraud in these states has no specific statutory protection from subsequent civil lawsuits. In recent years, federal proposals have been introduced that would broaden existing immunity protection. Ranging from a 1993 executive branch task force proposal to a 1995 anti-fraud bill currently awaiting consideration by Congress, these proposals would provide some federal immunity protection to persons who report suspected health care fraud—regardless of whether the fraud involves public or private payers within the health care system. 
In 1993, to enhance health care fraud enforcement efforts, a Bush administration task force recommended, among other things, providing immunity for reporting information to a national database, which the task force also recommended be created. Immunity from both federal and state claims would have been provided to database participants reporting fraud-related information to (or obtaining such information from) this database in good faith. Also, the proposed immunity provision contained a requirement that any complainant alleging malice or bad faith must plead with specificity the facts that constitute malice or bad faith in order to invoke an exception to immunity. The recommendations of the task force were not adopted, although, as noted below, the immunity proposal was reintroduced in subsequent congressional legislation. During 1993 and 1994, some form of federal immunity provision was included in various health reform bills introduced in Congress. In November 1993, for example, the Clinton administration introduced H.R. 3600 (the “Health Security Act”), which would have established federal immunity for reporting suspected health care fraud to HHS and DOJ. Because the proposed Health Security Act would have created a national framework for the delivery of health care, this immunity provision would have applied to the reporting of any suspected health care fraud throughout the health care system. By late summer 1994, the original Clinton reform bill had been essentially dropped by both the House and the Senate. Neither the Health Security Act nor any of the other health reform bills were enacted. In July 1995, Senator William Cohen introduced S. 1088 (the “Health Care Fraud and Abuse Prevention Act of 1995”), which contains various provisions designed to enhance anti-fraud enforcement efforts. One of the bill’s provisions would extend the protection provided in 42 U.S.C. 1320c-6(a) to include persons providing information about any public or private health plan to either HHS or DOJ. This expansion would address some of the limitations of current federal law. For example, a person reporting suspected Medicare fraud to HHS and a person reporting private insurance fraud to DOJ would be protected under this bill. Further, the bill’s immunity provision would provide some statutory protection in the 12 states that have no immunity law and also may provide, in those states with immunity laws, an additional channel for reporting suspected fraud. Although S. 1088 would provide immunity protection more broadly than current law, the provision does not address the role of all entities involved in anti-fraud enforcement. For example, under S. 1088, immunity protection would not be provided to persons who report fraud information to law enforcement entities other than DOJ and HHS. The proposal does not address the role of other federal agencies (such as the U.S. Postal Inspection Service) and state agencies (such as Attorneys General and state fraud bureaus) that also conduct health care fraud investigations. Also, the proposal does not address sharing from one insurer to another, a provision already included in some state immunity laws. One recent federal anti-fraud proposal does address insurer-to-insurer information sharing. H.R. 2408, introduced September 27, 1995, would extend the provisions of 42 U.S.C. 
1320c-6(a) to provide immunity for, among other things, "health plans sharing information in good faith and without malice with any other health plan with respect to matters relating to health care fraud detection, investigation and prosecution." Industry organizations, such as NHCAA and the Health Insurance Association of America, have stated that due to the potential for civil lawsuits, private insurers are concerned about sharing fraud-related information with law enforcement agencies. These industry organizations contend that a federal immunity law would facilitate the flow of information between insurers and law enforcement agencies and enhance investigation and prosecution of health care fraud cases. This contention includes a general recognition of the need for appropriate safeguards against bad faith allegations. Medical associations also told us they generally support immunity protection for individuals who report suspected fraud. Most of the investigative and prosecutive officials we interviewed—which included investigators and prosecutors at federal and state agencies and special investigative unit personnel at various insurance companies—also told us that a federal immunity law would enhance health care anti-fraud enforcement efforts. "The need for a concerted anti-fraud effort involving the sharing of information among private payers and with law enforcement is being widely acknowledged. However, while many states provide some immunity protection for those engaging in good faith fraud investigations, this protection varies tremendously by state; many states have no immunity statute. . . . This piecemeal state legislation simply does not protect insurers and other payers in many states or in multi-state investigations. "Therefore Congress should consider enacting an immunity statute that would immunize payers' good-faith efforts to fight fraud and provide immunity from state tort liability. Such a statute would preempt the inconsistent, vague and often ill-considered state law jeopardy faced by insurers and other payers . . . and would create a standardized and effective tool to encourage fraud fighting. "Like many state statutes, this immunity protection would not be absolute, and reasonably would be limited to those investigations conducted with good faith or the absence of malice. However, to make this protection effective, Congress should consider the addition of a provision, modeled on Rule 9(b) of the Federal Rules of Civil Procedure, that requires a person to plead with specificity the facts that constitute malice or bad faith in order to invoke this exception to immunity." In 1993 testimony before Congress, the NHCAA Executive Director stated that before forwarding a case for investigation and prosecution, insurers always have to consider the probability of lawsuits for defamation, slander, and malicious prosecution, and that these lawsuits, even if they are completely without merit, are at best very costly to the insurer. During our review, the NHCAA Executive Director told us that encouraging insurers to report fraud is important because the extent of health care fraud is increasing. He noted that, historically, insurers would simply write off fraud and pass the losses on to policyholders because fraud was not deemed to be that significant a problem. He added that some insurers saw no alternative to such write-offs because health care fraud was not considered to be a priority of law enforcement.
However, the Executive Director said that insurers now have increased their anti-fraud efforts because they recognize that health care fraud is widespread and because law enforcement (federal and state) has taken a significant interest in investigating and prosecuting such fraud. Accordingly, NHCAA still supports federal immunity legislation. In 1993, the Health Insurance Association of America conducted a survey to determine the extent to which its member companies engaged in health care anti-fraud activities. The survey asked for the number and types of cases the companies investigated each year from 1990 through 1992. During each year of the survey period, member companies referred only 9 to 11 percent of their cases to law enforcement agencies. In 1992, for example, the companies investigated a total of 26,755 health insurance fraud cases but referred only 2,645 cases (or about 10 percent) to law enforcement agencies. According to an Association official, this relatively low referral percentage is due, in part, to insurers’ concerns about potential civil liability. This official further commented that the survey results indicate that federal legislation (preempting a “hodgepodge” of inadequate state statutes) is needed to provide immunity for insurers and others who provide fraud-related information to law enforcement authorities. An attorney in the American Medical Association’s Health Law Division told us that although the Association has not formally commented on the immunity provisions contained in the various health care anti-fraud and abuse bills introduced in Congress, the Association generally supports immunity for reporting suspected health care fraud because it recognizes the benefits to anti-fraud enforcement efforts. Therefore, the Association supports immunity for insurers that report suspected fraud to law enforcement entities, as well as for insurers sharing fraud-related information with each other. The attorney said, however, that since medical information is sensitive and private in nature, any legislation that grants immunity for insurer-to-insurer information sharing should include controls to ensure that the information is not used inappropriately. He explained that the legislation could include, for example, a requirement that shared information must have a certain level of specificity or credibility—such as confirmed (rather than unsubstantiated) fraud allegations. Representatives from several other medical associations, including the American Hospital Association, the American Health Care Association, and the National Association for Home Care, also indicated that their groups supported immunity protection. These representatives told us that their respective associations had not formally commented on the immunity provisions in any of the health care anti-fraud bills introduced in Congress, but they generally support the concept of immunity to protect individuals who report suspected fraud, as long as the immunity is qualified. That is, an immunity provision should include qualifications, such as absence of bad faith or malice, so that an individual who is actually harmed by a report of suspected fraud has a basis for filing a lawsuit. As noted in chapter 1, to better understand whether broad federal immunity is needed, we interviewed individuals responsible for investigating and prosecuting health care fraud cases—officials at 12 federal and 6 state investigative and prosecutive offices and investigators and general counsel at 13 insurance companies (see app. 
I). At 8 of the 12 federal investigative and prosecutive offices we visited, officials told us they believed a federal immunity law is needed to enhance anti-fraud enforcement efforts. The officials generally commented that a federal law may increase insurers’ willingness to report fraud. In one location we visited, an FBI supervisory agent told us that none of his office’s active health care fraud investigations were initiated on the basis of reports from private insurers. The supervisory agent said that on the basis of informal discussions between insurers and FBI agents, insurers indicated they considered reporting health care fraud to his office but have not due, in part, to concerns about possible exposure to civil liability. The supervisory agent also commented that federal immunity legislation may encourage insurers to report suspected health care fraud. As support for this opinion, he cited his experience in investigating bank fraud cases and explained that leads or referrals from banks increased after federal immunity legislation was strengthened. Federal officials at several of the field offices we visited also said that a federal law would be beneficial because it would provide a standard level of protection in all states. A U.S. Postal Inspection Service investigator told us that a federal immunity law would make it easier to investigate multistate fraud schemes because there would be no concerns about which state immunity statutes applied in specific cases. Assistant U.S. Attorneys at one office we visited told us that the effects of a federal law would be difficult to predict. These attorneys also noted, however, that their district does not receive many health care fraud case referrals from private insurers, partially due to the insurers’ concerns about being sued civilly for sharing health care fraud-related information with federal prosecutors. They further commented that since most fraud schemes affect both private and public health care plans, the government would benefit from increased private insurer fraud reports because more public program fraud could be identified. Federal officials at four field offices told us a federal immunity law would not enhance anti-fraud enforcement efforts in their respective jurisdictions because a high level of information sharing was already occurring. They added, however, that other jurisdictions might benefit from a federal law, and they cited increased information sharing as a possible benefit. In one of the states we visited, an FBI supervisory agent told us that the need for a federal immunity law possibly was being overstated by insurance company executives. The supervisory agent noted that at the field level, fraud-related information was being reported by insurers’ special investigative units to FBI agents—either informally on the basis of established working relationships or formally in response to government-issued subpoenas. Regarding the immunity provision in S. 1088, federal officials at four of the field offices we visited told us that the scope of the provision should be expanded beyond HHS and DOJ. The officials commented that immunity should be provided for reporting to other federal entities with health care fraud enforcement responsibilities, such as the U.S. Postal Inspection Service and the Internal Revenue Service, as well as with state law enforcement authorities or state insurance departments. Also, federal officials at four of the field offices said that S. 
1088 should be expanded to provide immunity for insurers sharing fraud-related information with other insurers. The officials said this type of information sharing would be beneficial because insurers would be able to work together and identify the extent to which a fraud scheme is affecting more than one insurance company. One FBI supervisory agent we spoke with commented that the FBI would benefit from insurers sharing information with each other because his office would receive from the insurers more fully developed cases that are more likely to be accepted. Further, an Assistant U.S. Attorney told us that fraud perpetrators would be more reluctant to routinely defraud multiple insurers if they knew insurers routinely shared fraud-related information. Several of the officials also told us, however, that a possible disadvantage to allowing insurer-to-insurer disclosures is the potential for insurers to use the information for other than legitimate anti-fraud purposes. An insurer might, for example, disallow a health care provider from providing services under its health plan because of reports of suspected fraud by another insurer, even though the allegations have not been substantiated. The responses to our survey of the 50 state insurance commissioners (see app. III) indicated broad support for both state and federal immunity laws. For those 38 states that provided immunity at the time of the survey, 35 responded to our question about the positive effects from their respective states’ immunity laws. Twenty-four of the 35 respondents believed their states’ laws had positive effects on anti-fraud enforcement efforts. Almost all of these respondents (21) indicated the state immunity law increased reporting of suspected fraud. Two-thirds (16) indicated the immunity law increased information sharing, and over half (14) answered that the number of fraud cases investigated increased. Of the 35 respondents answering our question about the negative effects from their states’ immunity laws, 32 indicated there were no negative effects stemming from such legislation. The other respondents cited excessive sharing of questionable intelligence and increased workloads as possible negative effects. Our survey also asked for opinions about the effectiveness of a federal immunity law in facilitating the sharing of fraud-related information between (1) private health insurers and federal/state law enforcement authorities, (2) private health insurers and federal/state regulatory authorities, and (3) two or more private health insurers. Of the respondents who answered these questions, 33 (of 39) indicated that a federal law would be very or somewhat effective in facilitating fraud-related information sharing between private health insurers and federal/state law enforcement authorities, and 32 (of 38) answered that it would be very or somewhat effective in facilitating information sharing between private health insurers and federal/state regulatory authorities. Twenty-seven (of 36) respondents answered that a federal law would be very or somewhat effective in facilitating fraud-related information sharing between two or more private health insurers. At three of the six state investigative and prosecutive offices we visited, officials told us they believe a federal immunity law is needed to enhance anti-fraud enforcement efforts. The officials cited several potential positive effects of such a law, including increased information sharing among insurers and law enforcement agencies. 
Also, the officials said that a federal law would provide a minimum level of immunity protection to private insurers located or otherwise doing business in states without immunity statutes. An official from one state fraud bureau told us that a federal law might encourage insurers to become more actively involved in identifying and reporting fraud at its earliest stages, thereby improving the likelihood of effective case development. On the other hand, state officials at the offices that did not see a need for a federal immunity law cited various reasons for their viewpoints, such as: • A federal law is not needed because many health care fraud cases are handled at the state level. • Regulation of the insurance industry historically has been a state responsibility, and a federal law would be seen as encroaching on that responsibility. • A federal law would not have much effect because insurers have business-related reasons for not reporting fraud-related information. Regarding the latter opinion, a state fraud bureau official told us that insurers’ willingness to report suspected fraud generally depended more upon corporate policy than upon the existence or scope of immunity statutes. This official noted that some companies aggressively pursue fraud and seek prosecutions, while other companies prefer to settle matters internally. Regarding the federal immunity provision in S. 1088, state officials at five offices told us that the proposal should be more comprehensive. They generally said that the bill overlooks the important role the states play in health care fraud detection and prosecution and should be expanded to provide immunity for reporting information to nonfederal government entities. One state Department of Insurance official told us that the bill may result in insurers sharing health care fraud-related information with only the federal government. He believes this would reduce information sharing between private insurers and the states. Also, because the federal government would not have the resources to investigate all case referrals, many of the health care fraud cases that normally could be addressed at the state level would go unaddressed at the federal level. State officials at two offices told us that a federal immunity law should provide immunity for insurer-to-insurer sharing of fraud-related information. As a supporting example for this opinion, one official said that passage of a state statute allowing automobile insurance companies’ special investigative units to exchange information about suspected fraudulent claims helped to decrease automobile insurance fraud in that state. Further, this official commented that since passage of the state statute, automobile insurance companies’ special investigative units have been able to coordinate and report to law enforcement agencies more fully developed cases that are more likely to be accepted for prosecution. As with the federal investigators and prosecutors, however, a few of the state officials also told us that a possible disadvantage of allowing insurer-to-insurer disclosures is the potential for insurers to use the information for other than legitimate anti-fraud purposes. At 12 of the 13 insurance companies we visited, representatives told us a federal immunity law would help anti-fraud enforcement efforts. They generally said that a federal law would be beneficial because it would provide immunity protection uniformly applicable in every state. 
The Director of Investigations at one insurance company told us that fraud schemes tend to cut across state lines, typically affect more than one payer, and usually involve both public and private insurance programs. He added that a federal law would provide consistent protection during multistate or national fraud investigations, something that is not provided under the various state laws. Further, representatives of another insurance company said that a federal law would likely solve the problem of deciding which state's immunity laws take precedence in multistate investigations. Representatives of several insurance companies also told us a federal law might result in more health care fraud case referrals to law enforcement agencies. They generally said the weight a federal law carries may encourage some insurers, who might not otherwise come forward under a state law, to report health care fraud cases to law enforcement agencies. Representatives from one national insurance company told us that they are aware of instances of fraud being committed against their company, but due to such factors as the lack of state immunity laws or poorly written state immunity laws, the company may internally address this fraud rather than refer it to law enforcement agencies. An attorney from another insurance company told us that a federal law could even help reduce the costs of defending against reactionary civil lawsuits because such legislation may provide a basis for summary judgments. One insurance company investigator told us a federal law is not needed because insurance regulation is a state responsibility, and states are addressing this issue by passing legislation to provide immunity. He further commented that some insurers, as a business decision, will be reluctant to get involved in reporting fraud no matter what immunity protection, federal or state, is provided. The investigator added, however, that a federal law would still be beneficial to national insurers because it would provide some consistency in immunity protection, which is not now the situation under the various state laws. Regarding the federal immunity provision in S. 1088, representatives of seven insurance companies told us that the immunity provision should be expanded to cover reporting to entities other than just HHS and DOJ. They said that immunity should be provided for sharing fraud-related information with other federal, as well as state, government entities that investigate or prosecute health care fraud. One company's investigator said that S. 1088 might be problematic because the FBI does not have enough resources to address the additional referrals the agency would likely receive. A Director of Investigations at another insurance company told us the bill overlooks the large number of cases that are prosecuted at the state level of government because they do not involve large enough dollar losses to be of interest to the federal government. Representatives of nine insurance companies told us a federal law should provide immunity for insurer-to-insurer sharing of fraud-related information. These individuals generally said that allowing insurer-to-insurer information sharing would enable the companies to develop more significant cases—that is, cases involving larger dollar amounts and/or fraud schemes of wider scope—for referral to law enforcement agencies.
Representatives from one insurance company told us that the quality of evidence will improve, resulting in more criminal prosecutions for health care fraud offenses. These representatives explained that by working cooperatively together, insurers will be able to show that multiple insurers were defrauded under the same scheme, and this will make it easier to prove the criminal element of intent (i.e., the suspects knowingly defrauded the insurers). To demonstrate the drawbacks of insurance companies not sharing fraud-related information with each other, the representatives noted the California “rolling labs” case. They told us that the rolling labs fraud scheme was able to continue for years without being detected and reported to law enforcement agencies because the insurers were reluctant to work together to identify and determine the full scope of the fraud scheme. The representatives further commented that the rolling labs case is what prompted the state of California to enact strong immunity laws. Similar to the concerns voiced by some of the federal and state investigators and prosecutors we interviewed, a few of the insurance company representatives told us that a possible disadvantage to allowing insurer-to-insurer disclosures is the potential for the information to be used for purposes other than fraud detection and prevention. One insurance company investigator said that to ensure against unfair or conspiratorial practices by insurers, a federal law allowing insurer-to-insurer sharing of fraud-related information should include parameters covering what information can be shared (e.g., only information that clearly shows fraud occurred) and specifically who in the insurance companies can have access to the information (e.g., only special investigative units). Immunity laws are designed to encourage insurers and other individuals to report suspected fraud by providing them protection against subsequent civil lawsuits related to such information sharing. Currently, there is no federal immunity protection for persons who report fraud-related information to law enforcement agencies. Statutory protection is provided for reports made about Medicare and Medicaid health care services to peer review organizations. However, insurers—the primary processors of health care claims—are not provided federal immunity protection for sharing fraud-related information concerning other public and private health plans. While most states have enacted immunity laws that provide some immunity protection to insurers, these laws vary from state to state. The law enforcement, regulatory, and industry officials we queried expressed widespread support for the concept of a broad federal immunity law that includes adequate safeguards against bad faith allegations. As benefits of a federal law, these officials cited increased information sharing by insurers and uniformity of coverage in every state. Many of the officials, however, told us that to be most useful, a federal immunity law should provide broader protection than the immunity proposed under pending congressional bill S. 1088. The officials favored immunity for insurers, not just for sharing fraud-related information with DOJ and HHS, but for sharing such information with any federal or state entity with health care fraud enforcement responsibilities. They also favored expanding the immunity provision to protect insurers for sharing information with other insurers. 
One potential drawback to the latter approach is the possibility that insurers would inappropriately use information obtained from other insurers. While this is a potential risk of allowing insurer-to-insurer information sharing, the risks may be decreased through precise statutory language that specifies the reasons and procedures by which insurers may share fraud-related information with other insurers. DOJ provided technical and clarifying comments, which we incorporated where appropriate. In its written comments on the draft report, NHCAA wholeheartedly endorsed the need for a federal immunity statute and commented that such a statute could play a significant role in expanding the private sector's ability to initiate investigations and cooperate with law enforcement. To be fully effective, NHCAA suggested that the federal statute should • provide immunity protection with respect to all health care anti-fraud activities; • extend to all law enforcement officers, not just those connected to the administration of the health care system; • apply to exchanges of information between private sector fraud investigators, i.e., information sharing between or among insurers; • require that any allegation of sharing false information be "pled with particularity," a term of art under the Federal Rules of Civil Procedure; and • allow the recovery of attorney fees to a payer that is sued and subsequently found to be entitled to immunity. In its written comments on the draft report, the American Medical Association generally supported immunity for reporting fraudulent practices because it assists law enforcement efforts to bring perpetrators to justice. The Association added that any legislation granting immunity for insurers and any other entities to share information regarding suspected fraudulent behavior should include the following safeguards to ensure that such information is not used inappropriately: • The shared information must be related to specific conduct, and the conduct must be outside the realm of legitimate disagreements on what care is medically necessary. • There must be some substantiation of the information, so that its credibility is not in question. • There must be an opportunity for one who is harmed by "bad faith" sharing of information to seek legal recourse. Currently, there is no centralized national database in law enforcement to track patterns of criminal activity in the health care system. As a result, investigators and prosecutors use a variety of federal, state, and private industry databases to investigate health care fraud. To enhance access to health care fraud-related information and to help coordinate enforcement efforts, the proposed Health Care Fraud and Abuse Prevention Act of 1995 (S. 1088) calls for establishing a national, centralized database of health care fraud information. The proposed database would contain information about final adverse actions—such as license revocations, administrative sanctions, civil judgments, and criminal convictions—involving health care system participants. Such a database could be widely accessible, and it would assist investigators in developing background profiles on providers and other individuals under investigation. Many law enforcement and industry officials told us they support the establishment of a database of final adverse actions but said that it is not essential to their enforcement efforts.
Although this type of database could be widely accessible to federal, state, and private investigators, the benefits of such a database may not justify the largely unknown costs involved to operate it. These officials suggested two alternative databases—one including ongoing investigative information and another including suspected fraud referrals—that might provide more investigative benefits than a database of final adverse actions. But, in addition to unknown costs, such databases would be much riskier in terms of the need to protect against unauthorized disclosure and use of the information. Currently, there is no centralized national database in law enforcement to track patterns of criminal activity in the health care system. In an effort to obtain information on potential suspects and fraudulent schemes, health care fraud investigators can query various federal, state, and industry databases and other information resources (see app. IV). Data obtained from these systems can provide investigators with information needed to develop a comprehensive background profile on health care fraud suspects. Such data can also be useful to prosecutors in their efforts to obtain harsher sentences for recidivists. Although the databases and other information resources identified in appendix IV can be useful to investigators, each has certain limitations or disadvantages, as discussed below. Although the FBI’s National Crime Information Center is the nation’s most extensive criminal justice information system, the Center’s criminal history records are accessible only by authorized federal, state, and local criminal justice agencies. Private sector entities, such as insurance company investigative units, do not have direct access to these records. In addition, the records are not easily identifiable as relating to health care fraud. For example, because there is no federal health care fraud statute, federal criminal convictions for health care fraud could have been obtained under any of several general statutes involving mail fraud, false statements, or conspiracy. Finally, the Center does not have records of noncriminal actions—such as federal and state civil judgments—taken against health care providers. The HHS sanctions information, although nationwide in scope, covers only program exclusions taken against health care providers and practitioners for two public programs, Medicare and Medicaid. Although HHS recently began making its sanctions information more widely available, HHS does not identify individuals and entities sanctioned by Social Security or tax identification number. According to one insurance company official we interviewed, these identifiers are needed in order to make the sanctions data more useful for investigative purposes. This investigator explained that his company first had to crossmatch the names from the HHS sanctions report against another computer software program that contained names and Social Security/tax identification numbers; once that was done, the data were then downloaded into the company’s computer system for future use. Another federally sponsored information resource, the National Practitioner Data Bank, contains information on some, but not all, adverse actions taken against licensed health care practitioners. For example, it does not contain information about criminal convictions or civil judgments involving health care fraud. 
In addition, neither law enforcement agencies nor insurance company investigative units currently have access to the information in the Data Bank. While concentrating on investigations that are national or international in scope, the Financial Crimes Enforcement Network (FinCEN) uses the majority of its resources to assist law enforcement agencies in their investigations of financial aspects of the illegal narcotics trade. However, for other financial crimes (such as health care fraud) that may involve money laundering, investigators can use FinCEN for intelligence and analytical support to help identify and trace assets for seizure and forfeiture purposes. The National Association of Insurance Commissioners databases maintain nationwide information on regulatory and disciplinary actions taken against insurance agents and companies. This information is similar to that in the National Practitioner Data Bank, but it focuses on insurers and their agents rather than on health care providers. The Association’s information on regulatory and disciplinary actions is publicly accessible and focuses on all lines of insurance, including health. However, the information on adverse actions taken—for example, an insurance agent’s license revocation—is not necessarily related to fraudulent activity. NHCAA’s Provider Indexing Network System is available to member companies and participating law enforcement agencies, all of whom agree to abide by established procedures governing when and what type of information can be submitted, how data are to be updated, and what limited uses can be made of System data. On-line access to this database is limited to NHCAA member insurance companies and law enforcement members, such as HHS’ Office of Inspector General and the U.S. Postal Inspection Service. Nonmember law enforcement agencies can query the database through written request to NHCAA. Another limitation is that the System is not comprehensive, containing only about 1,984 entries as of March 1996. Many private insurers are not members of the Association. Further, even NHCAA members are not required to report fraud-related information to the Association’s database. Generally, in and of itself, information in the database is not “evidence” of any kind of fraudulent activity; rather, the information represents merely a means of focusing—in each member organization’s independent discretion—limited investigative resources. Every private payer that participates in the System agrees to indemnify the other participants if liability results from misuse of database information. However, one NHCAA member company’s officials told us that their company does not provide information to the database due to concerns about how other members might access and use the information. Another industry information resource potentially useful to health care investigators is the National Insurance Crime Bureau. Although the Bureau was established to coordinate the insurance industry’s efforts to address fraudulent claims involving automobile and other property/casualty insurance, its information systems may contain information relevant to certain health care fraud cases. For example, schemes involving staged automobile accidents or fraudulent workers compensation claims may entail fraudulent medical claims, sometimes involving corrupt health care providers in the scheme as well. Thus, while not all-inclusive, the Bureau’s information systems may contain some information about individuals involved in suspected health care fraud. 
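The name-to-identifier crossmatching described above for the HHS sanctions data, in which an insurance company investigator said his company first matched names from the sanctions report against a file containing Social Security and tax identification numbers before loading the data for future use, can be illustrated with a minimal sketch. The field names, helper functions (normalize, crossmatch), and sample records below are hypothetical assumptions for illustration only; real matching would need to be far more robust, and this is a sketch of the general technique, not any company's actual system.

```python
# Minimal sketch of the crossmatch described above: sanction records carry
# provider names but no Social Security or tax identification numbers, so
# identifiers are attached by matching names against an insurer's own
# provider file. All field names and sample records are hypothetical.

def normalize(name: str) -> str:
    """Crude comparable form: uppercase, drop commas and periods, and sort
    the name tokens so 'Doe, John A.' and 'John A Doe' compare equal."""
    tokens = name.upper().replace(",", " ").replace(".", " ").split()
    return " ".join(sorted(tokens))

def crossmatch(sanctions, provider_file):
    """Attach an SSN/tax ID from the provider file to each sanction record."""
    index = {normalize(p["name"]): p for p in provider_file}
    enriched = []
    for record in sanctions:
        match = index.get(normalize(record["provider_name"]))
        enriched.append({
            **record,
            "ssn_or_tin": match["ssn_or_tin"] if match else None,  # None = unmatched
        })
    return enriched

if __name__ == "__main__":
    sanctions = [{"provider_name": "Doe, John A.", "action": "Medicare exclusion"}]
    provider_file = [{"name": "JOHN A DOE", "ssn_or_tin": "000-00-0000"}]
    print(crossmatch(sanctions, provider_file))
```

Even in this simplified form, the sketch shows why the missing identifiers matter: any sanction record whose name does not match cleanly is left without an identifier and must be resolved by hand.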
To address the issue of access to health care fraud-related information, recent proposals (see table 3.1) have supported the establishment of a centralized repository for health care fraud information. In January 1993, a Bush administration task force on health care fraud and abuse recommended the establishment of two national databases—one for the reporting of final adverse actions and one for active fraud investigations. Access to the final adverse actions database would include not only law enforcement agencies, but also insurers and private individuals; access to the active investigations database would be restricted to law enforcement agencies, state licensing agencies, and insurance company investigative units. Regarding the final adverse actions database, the task force suggested—as an alternative to establishing a new database—expanding the National Practitioner Data Bank to require reporting of all final adverse actions involving practitioners and to include similar information about health care entities other than practitioners. Regarding the active investigations database, the task force also recommended that participants be provided with good faith immunity for reporting to and obtaining information from the database as an incentive to encourage participation. Neither of the task force’s database recommendations was implemented; however, the concept of a centralized health care fraud database has continued to be included in subsequent proposed federal legislation. In November 1993, for example, the Clinton administration’s proposed Health Security Act (H.R. 3600) advocated establishing a health information database containing, among other things, “information necessary to determine compliance with fraud statutes.” However, because this act would have substantially reformed the nation’s entire health care system, the proposed database was expected to contain much more than just information related to fraud. For instance, the database would have included information about clinical encounters and other health services provided, administrative and financial transactions of participants, utilization management by health plans or providers, and other nonenforcement-related activities and services. By late summer 1994, the original Clinton health reform bill had been essentially dropped by both the House and the Senate. Although proposals to establish a centralized health care fraud database appeared in several other health reform bills introduced during 1993 and 1994, none of these proposals were enacted. In July 1995, Senator William Cohen introduced S. 1088, which includes a proposal to establish a centralized repository for the reporting of final adverse actions against health care providers, suppliers, or practitioners. As defined in this bill, the term “final adverse action” includes (1) civil judgments against a health care provider in federal or state court related to the delivery of a health care item or service; (2) federal or state criminal convictions related to the delivery of a health care item or service; (3) actions by federal or state agencies responsible for the licensing and certification of health care providers, suppliers, and licensed health care practitioners; (4) exclusions from participation in federal or state health care programs; and (5) any other adjudicated actions or decisions that the HHS Secretary establishes by regulation.
In October 1995, this legislative proposal was incorporated into the Senate’s proposed 1996 Budget Reconciliation Act, which was still pending at the time of our review. Although the proposals noted above advocated establishing a centralized health care fraud database, none of them clearly identified the database’s expected operating parameters—such as how many data records would be maintained, how many information queries were expected, and how much the system might cost to develop and operate. Current systems that might be useful in evaluating the recent health care fraud database proposals are the National Practitioner Data Bank, a system containing data on certain final adverse actions; and the Provider Indexing Network System, a system containing data on active investigations. As a large, national repository containing certain information on adverse actions taken against health care practitioners, the National Practitioner Data Bank illustrates how a centralized health care fraud database might be expected to operate. As of December 1994, the Data Bank contained over 97,000 records. The Data Bank has received over 4.5 million inquiries since it became operational in 1990, with the number of annual inquiries increasing from about 800,000 in 1991 to just over 1.5 million in 1994. The original 5-year contract (awarded in December 1988) to develop and operate the Data Bank was expected to cost $15.8 million. According to HHS officials, this contract was subsequently extended through June 1995, and the estimated cost was expected to be $24 million. Total costs to operate the Data Bank—including contract and HHS administrative costs—averaged almost $5.8 million annually for the period 1991 through 1994. The next operating contract—for the so-called second generation Data Bank—is expected to be less costly, about $12 million over 6 years. By law, Data Bank inquiry processing costs can be recovered through user fees, which currently range from $4.00 to $10.00 per inquiry. Although much smaller in scope and concept than the National Practitioner Data Bank, the NHCAA’s Provider Indexing Network System is a centralized repository of investigative information dealing specifically and solely with health care fraud. Because it is a personal computer-based system, the costs to develop and operate the system—about $30,000 to develop and about $35,000 (fiscal year 1995 costs) to operate—are much less than those incurred by the mainframe computer-based National Practitioner Data Bank. However, the relatively low costs are also reflected in the size of the database, which included only 1,984 entries as of March 1996. Although the size of the Provider Indexing Network System might limit its usefulness as a national information resource, it does demonstrate a less expensive alternative approach to health care fraud information sharing. As noted in chapter 1, to better understand the advantages and disadvantages of a centralized health care fraud database, in 4 states we interviewed individuals responsible for investigating and prosecuting health care fraud cases at 12 offices of federal agencies, 6 offices of state agencies, and 8 insurance companies; and in 2 other states, we interviewed investigators and general counsel at 5 national insurance companies (see app. I). As shown in table 3.1, recent legislative proposals have supported the establishment of a centralized health care fraud database of final adverse actions.
The 1993 Bush Administration Health Care Fraud Task Force and the proposed 1995 Health Care Fraud and Abuse Prevention Act both specifically supported the establishment of a centralized repository for the reporting of final adverse actions against practitioners, providers, and other health care entities. In general, final adverse actions have been adjudicated in some federal or state public forum (for example, before courts or health care licensing and certification agencies) and are considered to be generally available to the public. Officials at 5 of the 12 federal investigative and prosecutive offices we visited told us they believe a centralized database of final adverse actions would be useful to health care fraud enforcement efforts. At three of these offices, officials told us that the database would make it easier for health care fraud investigators to do the background work necessary to establish a suspect’s past history of fraudulent activity. A U.S. Postal Inspection Service investigator noted that even though this information is already publicly available, having it all located in one repository would make the investigative process more timely. Officials at four of these offices indicated that knowing whether a suspect has been found to have committed past fraudulent acts would make it easier for prosecutors to demonstrate the individual’s intent to defraud. One Assistant U.S. Attorney noted that having easy access to past histories of fraudulent activity not only helps to prove an individual’s intent to defraud, but also can be used to demonstrate prior relevant conduct that would support an increased criminal sentence. At 6 of the 12 offices we visited, officials noted that although establishment of such a database is not critical to enforcement efforts, there could still be some benefits. These officials generally noted that the information was already publicly available from other sources and other information was more useful. Officials at one office we visited told us they did not believe it necessary to establish a final adverse actions database. According to an Assistant U.S. Attorney, the information that would be in the database is already publicly available from other sources and, given the current government budget environment, he questioned the feasibility of funding a database that would provide only marginal enforcement utility. Most of the officials we spoke with expressed some concerns about the establishment of a final adverse actions database. Most notably, at eight offices, officials indicated that the potential would exist for the information to be misused—for example, by insurance companies to deny a provider’s insurance claims or by the government in targeting persons for investigation. One FBI supervisory agent told us that to ensure the security of the database and prevent misuse, access should be restricted to law enforcement agencies only. At six offices, officials stated that providers and the public would likely object to the establishment of such a database as an unwarranted intrusion by the federal government into the privacy of citizens’ lives. One Assistant U.S. Attorney noted that having the federal government operate the database might also result in the database becoming too bureaucratic and entangled with rules and regulations about access, thereby making the database less efficient to operate. The responses to our survey of the 50 state insurance commissioners indicated broad support for a centralized health care fraud database. 
Of the 29 respondents who said their offices investigated health care fraud during 1994, 26 believed a centralized health care fraud database would facilitate enforcement efforts. Twenty-three of the respondents indicated that a database would expedite the enforcement process, with about half indicating that it would either strengthen prosecution efforts or lead to harsher penalties. The respondents were split on who should operate the database, with about 10 favoring the federal government, 8 favoring state government, and 7 favoring the private sector. Twenty-seven of the respondents indicated that final adverse actions should be included in the database. Eighteen of the respondents also believed there might be some negative effects of a centralized health care fraud database, most notably the lack of security and confidentiality of the information (12) and the possibility that the database would contain inaccurate information (13). At three of the six state investigative and prosecutive offices we visited, officials we interviewed told us they believe a centralized health care fraud database of final adverse actions would be useful in facilitating health care fraud enforcement efforts. At the other three offices, officials noted that such a database is not essential but could be another tool to assist health care fraud investigators. However, all officials saw certain advantages to a centralized database. The state officials noted, for example, that a centralized database would (1) provide investigators with easy access to information about individuals being investigated, thus making routine background investigative work more efficient; and (2) help investigators to better identify fraudulent schemes and potential suspects. One state fraud bureau official told us that because of the mobility of fraud perpetrators, a national database would help investigators to identify individuals within their jurisdictions who have been previously involved in fraudulent schemes in other locations. At the five offices that identified possible negative effects of a final adverse actions database, officials said they were concerned that the information might be misused. One Insurance Department official stated that insurers might use the information, independently or in concert with other insurers, to unfairly restrict the ability of certain providers to participate in their health plans. At 9 of the 13 insurance companies we visited, representatives told us a centralized health care fraud database of final adverse actions would facilitate health care fraud enforcement efforts. At three of the companies, representatives thought the database might be beneficial, but they did not consider it an essential resource. Most of the officials told us that a centralized repository of final adverse actions would make the investigative process more efficient by providing a single location for background information about health care providers who previously have been involved in fraudulent activity. One insurance company investigator noted that such a database could help insurers to easily identify providers with a previous record of fraud, which would allow insurers to more closely monitor future claims submitted by these providers. Another investigator mentioned that a centralized database would help insurers to better screen providers who have applied to join their health care network. 
The officials who believed a final adverse actions database was not essential generally said that the information would be useful for confirming suspicions or getting cases accepted for prosecution, but not for identifying and initiating investigations. The most commonly voiced concern about establishing a final adverse actions database was potential misuse of the information. Officials at five insurance companies mentioned this as a potential problem. One insurance company official stated that creation of such a database might, from the providers’ perspective, lead to the inappropriate identification and targeting of innocent individuals by investigators. One other official noted the possibility that inaccurate information could be included in the database and could have adverse consequences if inadvertently disclosed. Only one of the recent database proposals—the 1993 Bush administration’s proposal—would include active investigative information as part of a centralized health care fraud database. As described in the recommendations of the task force, this database would have been accessible to law enforcement entities, state licensing agencies, and accredited insurer special investigative units. The task force specifically defined active investigations as any ongoing investigation of potentially fraudulent activity. Ongoing investigative information is naturally more sensitive than information about final adverse actions, since at this stage of an investigation there has not yet been a public adjudication of the matter. At 6 of the 12 investigative and prosecutive agencies we visited, officials told us a database of ongoing, or active, investigative information would be useful to health care fraud enforcement efforts. Officials at two of the six offices noted that the database would provide a means to identify multiple agencies investigating the same subject, thus helping to eliminate investigative duplication and allowing investigators to combine efforts. More often, however, the officials cited concerns about an ongoing investigative database. At five of the six offices, officials cited the sensitivity of the information as a significant concern that would result in restricted access to the database. In general, the officials noted that for security reasons, such a database would probably have to be restricted to law enforcement agencies only. For example, according to one Assistant U.S. Attorney, sensitive investigative information (unlike final adverse actions) is, by its nature, less certain, sometimes inaccurate, and may never end up being adjudicated in a public forum. This official added that if such information is inadvertently or deliberately disclosed, it could seriously damage an individual’s life and livelihood. Equally significant, Assistant U.S. Attorneys at two offices we visited noted that where investigative information was gathered through the federal grand jury process, it would be illegal to disclose that information to anyone not designated by the court. Similarly, all of the FBI officials we spoke with noted that the FBI would be very reluctant to contribute active investigative information to the database, unless the FBI controlled use of and access to the database. The responses to our survey of the 50 state insurance commissioners indicated support for including ongoing investigative information in a centralized health care fraud database. 
Of the 29 respondents who said they investigated health care fraud during 1994, 22 believed a database of ongoing investigative information would facilitate enforcement efforts. Similarly, at three of the six state investigative and prosecutive agencies we visited, officials told us a database of ongoing investigations would be useful to health care fraud enforcement efforts. However, one state fraud bureau official pointed out that because of the sensitivity of the information and the need for security, access to the database probably would have to be restricted to law enforcement agencies only. In this official’s opinion, if insurance company investigators are cut off from this valuable source of intelligence, they will not be as effective in their own anti-fraud efforts. At 4 of the 13 insurance companies we visited, officials we spoke with identified an ongoing investigations database as being useful to health care fraud enforcement efforts. According to one insurance company investigator, having access to a database of ongoing investigations would provide investigators a means to combine efforts across jurisdictions. This investigator further commented that in many instances, any one insurer may have incurred only minimal dollar losses due to the fraud committed; however, fraud schemes are often perpetrated simultaneously in multiple jurisdictions. He said that if an investigator can identify other ongoing investigations targeting the same individuals, these investigations may be combined into a larger investigation. This would potentially allow investigators to develop larger, more significant fraud cases that are more attractive to prosecutors. One indication of the potential positive effect of sharing ongoing investigative information can be found in recent statistics developed by NHCAA with regard to its Provider Indexing Network System. As of April 1995, NHCAA reported that 7.2 percent of the known or suspected fraud perpetrators listed in its computerized database had been entered by more than one member organization. These duplicate listings illustrate a potential opportunity for investigators in different organizations to share investigative information and possibly combine their enforcement efforts. With regard to potential drawbacks, one insurance company investigator told us that insurers might be unwilling to report ongoing investigative information to the database if they are not granted access to it. Another insurance company official stated that insurers would have to be provided immunity for reporting such information because of the potential liability if the information were disclosed. Demonstrating the reality of this concern, one investigator noted that her company will not place ongoing investigative information in NHCAA’s Provider Indexing Network System because the company could not be sure that another member insurer would not misuse the information, thus exposing the reporting company to potential civil liability. Although not identified in any of the recent health care fraud database proposals, one alternative to the two database approaches noted above is a database of suspected fraud referrals. Most health care fraud cases begin as fraud referrals to investigative agencies. These referrals come from both formal sources (e.g., government agencies, insurers) and informal sources (e.g., fraud hotlines, beneficiaries).
The investigative agencies review these referrals and, on the basis of relatively limited information, select the most promising leads for assignment to an investigator. Because suspected fraud referrals typically have not yet been thoroughly investigated, they involve information that is less certain than information about either ongoing investigations or final adverse actions. There is a federal precedent for the creation of a fraud referral database. Such a database has been established at the Financial Crimes Enforcement Network to obtain and track information about suspected financial institution fraud. Officials at 8 of the 31 federal, state, and insurance company offices we visited suggested the creation of a health care fraud referral database as a useful tool to enhance health care fraud enforcement efforts. Many of the respondents to our survey of state insurance commissioners also supported creation of a fraud referral database. Specifically, 22 of the 29 survey respondents who indicated they investigated health care fraud during 1994 favored creation of a fraud referral database. In addition, the FBI has recently suggested that Congress pass legislation to create a criminal referral system, whereby all health benefit programs would be required to report suspected fraud to a federal government database to be used to track patterns of criminal activity throughout the health care system. Regarding potential benefits, one FBI supervisory agent told us that a database of suspected fraud referrals would expedite the early stages of an investigation by possibly helping to determine the extent and amount of fraud involved. An insurance company investigator also noted that encouraging private insurers to report suspected fraud to a national database would allow government investigators to better identify fraudulent schemes involving multiple private insurers and—since these schemes tend to involve both private and public sector insurers simultaneously—would very likely also lead to the discovery of more public sector fraud. According to an Assistant U.S. Attorney, in order to be most useful, a suspected fraud referral database would have to include (1) a requirement for all insurers to report suspected fraud, along with a grant of immunity for doing so; (2) a specified reporting format; and (3) a designated entity to centrally collect and maintain the information. Concerns were raised, however, about the feasibility of establishing a fraud referral database. For example, unlike the banking and savings and loan industries, the insurance industry is subject principally to state, rather than federal, regulation. One FBI supervisory agent noted that access to suspected fraud referrals was helpful in fighting bank fraud, and a database of health care fraud referrals could help investigators initiate health care fraud cases. The agent said that to encourage private insurers to actively refer suspected fraud, ideally federal law should require mandatory reporting of such fraud and provide immunity for doing so. However, the agent believes that because there is no federal regulatory entity governing private insurers, such a law may not be possible. According to an insurance company official, absent such a reporting requirement, private insurers may not feel compelled to report suspected fraud to a national referral database, thus making it less comprehensive in scope and, therefore, less useful to investigators. 
Also, many states already require insurers to report suspected fraud to state agencies (see app. II). According to a U.S. Postal Service investigator, in those states with suspected fraud reporting requirements, an additional federal reporting requirement might be viewed by some private insurers as being unnecessary. In addition, because of the sensitive nature of suspected fraud referral information, several officials noted that security for and access to a database of suspected fraud referrals would likely be a critical issue. According to one FBI supervisory agent, some of the information in a suspected referral database might be nothing more than unsupported allegations, and the release or misuse of such information might ruin an innocent doctor’s reputation or career. Therefore, access to and use of the information in the database would have to be tightly controlled. And, as pointed out by one Assistant U.S. Attorney, access to information of this nature would likely have to be restricted to law enforcement agencies only, in order to best protect against misuse and inappropriate disclosures. Recent proposals to establish a centralized health care fraud database, if implemented, would provide investigators and prosecutors an additional tool to enhance anti-fraud enforcement efforts. Senate Bill 1088 would establish a health care fraud database of final adverse actions, accessible by law enforcement and regulatory agencies and insurers. Law enforcement and industry officials identified certain other types of information—ongoing investigative information and reports of suspected fraud—that might also be useful to include in a health care fraud database. However, although these types of information would be potentially beneficial, they would also pose increased risks of inappropriate disclosure and misuse. Many law enforcement, regulatory, and industry officials we spoke with agreed that a database comprising final adverse actions may benefit investigators only marginally. Although they said this type of information would be useful in compiling general background information on suspects, they added that it is already publicly available from other sources. However, the officials noted that disclosure of such information would likely pose minimal risks of civil lawsuits for violation of individuals’ privacy rights. Ongoing investigation information has a lesser degree of credibility and a higher degree of sensitivity than final adverse actions information. The officials said that this type of information can be used to help build prosecutable cases; however, such information has not yet been adjudicated and would therefore have to be protected against inappropriate disclosures. A database of suspected fraud referrals also poses risks from inappropriate disclosures. In many instances, minimal investigative time has been spent to verify the validity of fraud referral information. However, according to the officials we spoke with, such information can be useful to investigators in identifying previously undiscovered fraud. In addition to the issues noted above, centralized databases also pose uncertainties about development and operating costs. These costs generally have not been addressed by any of the proposals discussed above. In its written comments on the draft report, NHCAA commented that a centralized database, if properly created, would be a useful additional tool in fighting health care fraud. 
As an analogy, NHCAA referred to the Provider Indexing Network System and said that a database is most useful when it (a) includes information on active investigations, (b) has safeguards and procedures that are carefully outlined, and (c) has modest costs. Further, NHCAA commented that a centralized database would be particularly helpful if there is disclosure of information by both law enforcement agencies and private payers on a regular basis. Finally, NHCAA made some technical and clarifying comments, which we incorporated where appropriate. In its written comments on the draft report, the American Medical Association supported the sharing of information related to fraud and abuse but said that creating a national database may not be the best use of limited enforcement dollars. The Association commented that databases can be exceedingly expensive to establish and maintain, have great potential for problems with inappropriate use and disclosure of information, and also may not sufficiently protect the confidentiality of patient records.
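The cost figures cited earlier in this chapter for the two existing systems can be set side by side. What follows is a minimal, illustrative Python calculation using only the rounded numbers reported above (average annual Data Bank operating costs of about $5.8 million, roughly 1.5 million inquiries in 1994, user fees of $4.00 to $10.00 per inquiry, and Provider Indexing Network System costs of about $30,000 to develop and about $35,000 to operate); it is a back-of-the-envelope sketch, not an estimate from the report.

```python
# Illustrative back-of-the-envelope comparison using rounded figures cited in this chapter.
# These are not official estimates; they simply restate the report's numbers.

DATA_BANK_ANNUAL_COST = 5_800_000      # average annual operating cost, 1991-1994
DATA_BANK_INQUIRIES_1994 = 1_500_000   # approximate number of inquiries in 1994
USER_FEE_RANGE = (4.00, 10.00)         # per-inquiry user fee range

PINS_DEVELOPMENT_COST = 30_000         # Provider Indexing Network System development cost
PINS_ANNUAL_COST = 35_000              # Provider Indexing Network System fiscal year 1995 operating cost

# Potential annual fee recovery at the 1994 inquiry volume
low = DATA_BANK_INQUIRIES_1994 * USER_FEE_RANGE[0]
high = DATA_BANK_INQUIRIES_1994 * USER_FEE_RANGE[1]
print(f"Potential fee recovery: ${low:,.0f} to ${high:,.0f} per year "
      f"versus about ${DATA_BANK_ANNUAL_COST:,.0f} in average annual operating costs")

# Rough scale difference between the two systems' annual operating costs
ratio = DATA_BANK_ANNUAL_COST / PINS_ANNUAL_COST
print(f"Data Bank annual cost is roughly {ratio:.0f} times the Provider Indexing Network System's annual cost")
```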
Pursuant to a congressional request, GAO discussed: (1) the extent of federal and state immunity laws protecting persons who report health care fraud; and (2) evidence for and against establishing a centralized health care fraud database. GAO found that: (1) there are no federal immunity protections for persons who report alleged health care fraud to law enforcement agencies; (2) the only federal provision that could protect such persons applies to persons reporting Medicare and Medicaid fraud; (3) private insurers and health care claims processors are also not provided any federal immunity protection if they report suspected fraud; (4) many states have enacted immunity protection, but the amount of protection varies by state; (5) Congress is considering legislation that would protect persons providing health care fraud information to the Departments of Health and Human Services or Justice; (6) most federal and state officials support the proposed immunity provisions and safeguards to protect against persons who make unsubstantiated allegations in bad faith; (7) many federal officials believe that the legislation should be expanded to provide immunity to persons sharing fraud-related information with any state or federal enforcement entity and insurers sharing information with other insurers; (8) the proposed legislation would create a central database to track criminal activity in the health care system; (9) the database would provide information on criminal convictions, civil judgments, and negative licensing actions and be accessible to federal and state agencies and health insurers; and (10) most law enforcement officials support the database's establishment and believe that enforcement benefits would accrue, but many are concerned about the potential for unauthorized disclosure of information and high development and operating costs.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Ebola outbreak was first identified in the remote Guinean village of Meliandou in December 2013 and subsequently spread to neighboring Sierra Leone and Liberia and later to other African countries, including Mali, Nigeria, and Senegal (see fig. 1). Guinea, Liberia, and Sierra Leone experienced the largest number of confirmed, probable, or suspected cases and deaths. As of June 2016, WHO reported 28,616 confirmed, probable, or suspected Ebola cases and 11,310 deaths in these three countries since the onset of the outbreak (see fig. 2). WHO reported the majority of these cases and deaths between August 2014 and December 2014. After 2014, the number of new cases began to decline, and on March 29, 2016, the WHO Director-General terminated the Public Health Emergency of International Concern designation related to Ebola in West Africa and recommended lifting the temporary restrictions to travel and trade with Guinea, Liberia, and Sierra Leone. However, seven confirmed and three probable cases were reported in Guinea, and three confirmed cases of Ebola were reported in Liberia in March 2016 and April 2016, respectively. Figure 3 shows the numbers of confirmed, probable, and suspected Ebola cases and deaths in Guinea, Liberia, and Sierra Leone by month from August 2014 to December 2015. USAID noted that the Ebola outbreak caused disruptions to health systems and adverse economic impacts in Guinea, Liberia, and Sierra Leone. Health-care resources in these countries were diverted from other programs to Ebola response efforts, and some patients and health-care workers avoided health facilities for fear of contracting the disease. USAID also noted that these countries experienced job loss, disruptions to trade, reduced agricultural production, decreased household purchasing power, and increased food insecurity as a consequence of the outbreak. The closure of international borders in response to the outbreak hampered economic activity by limiting trade and restricting the movement of people, goods, and services. In addition, the Ebola outbreak occurred at the beginning of the planting season, which affected agricultural markets and food supplies. U.S. efforts to respond to the Ebola outbreak in West Africa began in March 2014 when the Centers for Disease Control and Prevention (CDC) deployed personnel to help with initial response efforts. In August 2014, the U.S. ambassador to Liberia and the U.S. chiefs of mission in Guinea and Sierra Leone declared disasters in each country. Following the declarations, USAID and CDC deployed medical experts to West Africa to augment existing staff in each of the affected countries. In August 2014, a USAID-led Disaster Assistance Response Team (DART) deployed to West Africa. USAID-funded organizations began to procure and distribute relief commodities and establish Ebola treatment units, and CDC began laboratory testing in West Africa. In September 2014, the Department of Defense (DOD) began direct support to civilian-led response efforts under Operation United Assistance. In September 2014, the President announced the U.S. government’s strategy to address the Ebola outbreak in the three primarily affected West African countries. The National Security Council, the Office of Management and Budget, USAID, and State developed an interagency strategy that has been revised over time to reflect new information about the outbreak and changes in international response efforts. 
The strategy is organized around four pillars and their associated activities (see table 1). Multiple U.S. agencies have played a role in the U.S. government’s response to the Ebola outbreak in West Africa. USAID managed and coordinated the overall U.S. effort to respond to the Ebola outbreak overseas, and State led diplomatic efforts related to the outbreak. CDC led the medical and public health component of the U.S. government’s response efforts. For example, CDC provided technical assistance and disease control activities with other U.S. agencies and international organizations. In addition, CDC assisted in disease surveillance and contact tracing to identify and monitor individuals who had come into contact with an Ebola patient. DOD coordinated U.S. military efforts with other agencies, international organizations, and nongovernmental organizations. DOD also constructed Ebola treatment facilities, transported medical supplies and medical personnel, trained health-care workers, and deployed mobile laboratories to enhance diagnostic capabilities in the field. Other federal agencies, including the Department of Agriculture, also contributed to the overall U.S. response. Although U.S. response efforts have focused primarily on Liberia, the United States has been engaged in all three primarily affected West African countries. The U.S. government also undertook response efforts in Mali following an Ebola outbreak in that country in October 2014. Currently, U.S. response efforts include regional activities that encompass multiple countries, including activities to build the capacity of West African countries to prepare for and respond to infectious disease outbreaks. Prior to the fiscal year 2015 appropriation, USAID and State funded activities in response to the Ebola outbreak using funds already appropriated. On December 16, 2014, Congress appropriated approximately $5.4 billion in emergency funding for Ebola preparedness and response to multiple U.S. agencies as part of the Consolidated and Further Continuing Appropriations Act, 2015 (the fiscal year 2015 consolidated appropriations act). Within the fiscal year 2015 consolidated appropriations act, Congress provided approximately $2.5 billion to USAID and State for necessary expenses to assist countries affected by, or at the risk of being affected by, the Ebola outbreak and for efforts to mitigate the risk of illicit acquisition of the Ebola virus and to promote biosecurity practices associated with Ebola outbreak response efforts. These funds were made available from different accounts and are subject to different periods of availability (see table 2). The Act requires State, in consultation with USAID, to submit reports to the Senate and House Committees on Appropriations on the proposed uses of the funds on a country and project basis, for which the obligation of funds is anticipated, no later than 30 days after enactment. These reports are to be updated and submitted every 30 days until September 30, 2016, and every 180 days thereafter until all funds have been fully expended. The reports are to include (1) information detailing how the estimates and assumptions contained in the previous reports have changed and (2) obligations and expenditures on a country and project basis. 
In addition, Section 9002 of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, authorized USAID and State to use funds appropriated for the Global Health Programs (GHP), International Disaster Assistance (IDA), and Economic Support Fund (ESF) accounts to reimburse appropriation accounts administered by USAID and State for obligations incurred to prevent, prepare for, and respond to the Ebola outbreak prior to enactment of the Act. USAID reports that it has made reimbursements totaling $401 million for obligations made prior to the Act, using $371 million from the IDA account and $30 million from the ESF account of the fiscal year 2015 appropriation. USAID reports that these obligations were incurred by the Office of U.S. Foreign Disaster Assistance (OFDA); the Office of Food for Peace (FFP); the Bureau for Global Health; and missions in Liberia, Guinea, and Senegal. Figure 4 shows USAID’s reported obligations by office and bureau. State reported that it did not reimburse funding for obligations made prior to the act. As of July 1, 2016, USAID and State had obligated 58 percent and disbursed more than one-third of the $2.5 billion appropriated by the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 for Ebola response and preparedness activities. USAID and State obligated funds from various appropriation accounts for Ebola activities during different stages of the response, with the largest shares of funding going to Pillar 1 activities to control the outbreak and to support regional Ebola activities in multiple countries. In response to the Ebola outbreak, USAID obligated funds from the IDA account, and State obligated funds from the Diplomatic & Consular Programs (D&CP) account. As the U.S. government shifted its focus to mitigating second-order impacts and strengthening global health security, USAID obligated funds from the ESF and GHP accounts, and State obligated Nonproliferation, Antiterrorism, Demining, and Related Programs (NADR) account funds for biosecurity activities. As of July 1, 2016, USAID and State had obligated a total of almost $1.5 billion (58 percent) and disbursed $875 million (35 percent) of the $2.5 billion appropriated for Ebola response and preparedness activities (see fig. 5). USAID and State used different appropriation accounts to fund Ebola activities. As shown in table 3 below, USAID allocated the greatest amount of funding for Ebola activities from the IDA account, followed by the ESF and GHP accounts. State allocated funding from the NADR account and D&CP account for Ebola activities. USAID notified the relevant congressional committees on April 8, 2016, that it intended to obligate $295 million in ESF Ebola funds, of which USAID and State would reprogram $215 million for international response activities related to Zika, including $78 million to the CDC for Zika response activities, and $80 million for the CDC’s international coordination of response efforts related to Ebola. As USAID obligates funding for Zika activities, the unobligated balances in the ESF account will continue to decrease. ESF funds appropriated in the Act must have been obligated by September 30, 2016. See figure 6 below for obligations and disbursements by appropriation account, as of July 1, 2016. USAID and State used different appropriation accounts to fund Ebola activities across the four pillars of the U.S. government’s Ebola strategy. 
For example, USAID primarily used IDA account funds to control the outbreak (Pillar 1), ESF account funds to mitigate second-order impacts of Ebola (Pillar 2), and GHP account funds to strengthen global health security (Pillar 4). USAID allocated all of the funding from the Operating Expenses account to building coherent leadership and operations (Pillar 3). As of July 1, 2016, USAID had obligated more than 60 percent of funds allocated for Pillars 1, 2, and 4 and obligated just under 50 percent of the funds allocated for Pillar 3; USAID and State had disbursed 85 percent of Pillar 1 obligations. See table 4 for the status of allocated funding by Ebola strategy pillar. USAID allocated the greatest amount of funding to activities to control the Ebola outbreak (Pillar 1) and allocated approximately similar amounts of funding to mitigate the second-order impacts of Ebola (Pillar 2) and to strengthen global health security (Pillar 4). Figure 7 shows allocations, obligations, and disbursements by Ebola strategy pillar, as of July 1, 2016. See appendix II for trends in obligations and disbursements by Ebola strategy pillar, as of July 1, 2016. USAID allocated the largest share of its funding to support regional Ebola activities. Ebola funding for regional activities includes all five appropriation accounts administered by USAID from the fiscal year 2015 appropriation (IDA, ESF, GHP, USAID Operating Expenses, and USAID’s Office of Inspector General) and State’s NADR account. The GHP account represents 42 percent of obligated funding for regional activities, while ESF and IDA represent approximately 29 percent and 21 percent of such funding, respectively. See table 5 below for the status of allocated Ebola funding by geographic area. Of the three countries most affected by the Ebola virus, USAID obligated and disbursed the most funding for Liberia, as the U.S. government took a lead role in Ebola response efforts in this country. Figure 8 shows allocations, obligations, and disbursements for Liberia, compared with Sierra Leone and Guinea. See appendix III for trends in obligations and disbursements for each country, as of July 1, 2016. Prior to the enactment of the Act in December 2014, USAID had obligated $401 million to respond to the Ebola outbreak. USAID obligated the majority of this funding from the IDA account for Pillar 1 activities to control the outbreak in Liberia. State did not obligate any funding prior to the Act but began obligating D&CP funding to respond to the Ebola outbreak in April 2015. During the early stages of the response to the Ebola outbreak, USAID obligated IDA funds for the majority of its Ebola activities, both before and after it received funding for Ebola activities from the Act. By February 2015, USAID had obligated more than $550 million in IDA funding. USAID obligated IDA account funding at a lower rate after July 1, 2015, as USAID began to shift its focus to recovery activities. As of July 1, 2016, USAID had obligated more than 60 percent of IDA account funding and disbursed more than 80 percent of obligations. USAID used the majority of IDA funds to control the outbreak in Liberia (Pillar 1). Activities to control the outbreak included providing care to Ebola patients; supporting safe burials; promoting infection prevention and control at health facilities; distributing Ebola protection and treatment kits; and conducting Ebola awareness, communication, and training programs. 
USAID obligated approximately 60 percent of IDA funds for Liberia as of July 1, 2016, which reflects the U.S. government’s lead role in responding to the Ebola outbreak in that country. USAID plans to use the remaining approximately 40 percent of unobligated IDA funds for future Ebola response activities as needed. According to USAID officials, since April 2016, OFDA has programmed IDA funding to provide support for maintaining a residual response capacity and for transitioning to long-term development efforts. According to OFDA officials, these activities will require a modest amount of funding in fiscal year 2017. OFDA officials also noted that the unobligated IDA balance would remain available to address any new cases or outbreaks of Ebola that spread beyond the ability of the host governments to contain them. State obligated most of its D&CP account funding to respond to the Ebola outbreak in early 2015. In February 2015, State notified the relevant congressional committees of its intention to allocate $33 million of $36 million in D&CP account funding to the Office of Medical Services. By the end of May 2015, the Office of Medical Services had obligated 97 percent of its allocated funding and disbursed all of its obligated funding for medical evacuations; overseas hospitalization expenses; and personal protective equipment, among other things. State’s Bureau of Administration, which provided grants to international schools affected by the Ebola outbreak, obligated 95 percent of its allocated D&CP funding by the end of April 2015. State’s Africa Bureau also obligated funding in the early stages of the Ebola response, as it provided support for State’s Ebola Coordination Unit and U.S. embassies affected by Ebola. The Africa Bureau obligated more than half of its D&CP funding for Ebola activities by the end of July 2015. As of June 30, 2016, State had obligated 94 percent of the D&CP account funding and disbursed 99 percent of its obligated funding. USAID started to focus its efforts on mitigating second-order impacts of the outbreak and strengthening global health security between June 1, 2015, and October 1, 2015, using funding from the ESF and GHP accounts. As of May 1, 2015, State began obligating NADR funds for biosecurity activities. USAID increased obligations of ESF by approximately $68 million between June 1, 2015, and October 1, 2015, as USAID offices such as the Bureau for Global Health and the U.S. Global Development Lab obligated ESF funds for activities to mitigate the second-order impacts of the outbreak (Pillar 2). USAID disbursements of ESF funding were less than $5 million until September 1, 2015. As of July 1, 2016, USAID had obligated $251 million from the ESF account for Ebola activities, which represented 35 percent of its total allocated funding from the ESF account. Of the ESF funding that USAID obligated to mitigate the second-order impacts of Ebola, USAID obligated the majority of such funding for activities in Liberia, Sierra Leone, and Guinea. Such activities included the development of facilities to decontaminate health-care workers and equipment, restoring routine health services, and strengthening health information systems to better respond to future disease outbreaks. Approximately 45 percent of obligated ESF funds supported such activities in Liberia, while 22 percent and 17 percent funded activities in Sierra Leone and Guinea, respectively. 
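To illustrate how the funding-status figures reported above relate to one another, the following is a minimal Python sketch that computes obligation and disbursement rates from allocation, obligation, and disbursement totals. The dollar amounts are rounded placeholders drawn loosely from totals cited in this report, not actual account-level data, and the function name is our own.

```python
# Minimal sketch (not agency code): computing obligation and disbursement rates
# from allocation, obligation, and disbursement totals for an account, pillar, or country.
# Inputs below are rounded placeholders based on totals cited in this report.

def funding_status(allocated: float, obligated: float, disbursed: float) -> dict:
    """Return obligation and disbursement rates for one funding category."""
    return {
        "obligated_pct_of_allocated": 100 * obligated / allocated,
        "disbursed_pct_of_allocated": 100 * disbursed / allocated,
        "disbursed_pct_of_obligated": 100 * disbursed / obligated if obligated else 0.0,
    }

# Overall USAID/State Ebola funding as of July 1, 2016 (rounded, in billions of dollars)
overall = funding_status(allocated=2.5, obligated=1.5, disbursed=0.875)
print(overall)
# With these rounded inputs: 60% obligated and 35% disbursed; the report's precise
# figures are 58 percent obligated and about 35 percent disbursed.
```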
As the numbers of new Ebola cases declined, USAID also began to focus on longer-term efforts to strengthen global health security, using funding from the GHP account. Accordingly, USAID reported no obligations of GHP funding before June 1, 2015 and no disbursements until August 1, 2015. As of July 1, 2016, USAID had obligated 59 percent and disbursed 10 percent of its GHP account funds for Ebola response activities. The Bureau for Global Health, which programs the majority of GHP funding, added all of its GHP funding for Ebola activities to existing awards. Because of the method of payment used for these awards, funding from older appropriations is used first. As a result, the percentage of GHP funding that USAID has disbursed from the fiscal year 2015 appropriation for Ebola activities is the lowest (10 percent) of any account. GHP funding appropriated in the Act does not expire, so USAID can obligate and disburse these funds in future fiscal years. USAID has obligated all of the funding in the GHP account to strengthen global health security (Pillar 4). Such activities included building laboratory capacity, strengthening community surveillance systems to detect and monitor Ebola and other diseases, and building the capacity of West African governments to prepare for and respond to infectious disease outbreaks. Nearly all GHP funding is regional funding rather than funding for a specific Ebola-affected country. For trends in obligations and disbursements of IDA, ESF, and GHP funds for Ebola from February 3, 2015 to July 1, 2016, see appendix IV. State’s Bureau of International Security and Nonproliferation did not begin obligating funding from the NADR account until May 1, 2015, and did not disburse any NADR funds until September 1, 2015, for biosecurity activities. Such activities included training on biosecurity best practices and strategies for Ebola sample management as well as conducting biosecurity risk assessments. All NADR funding supports regional activities in West Africa rather than activities in a single country. As of July 1, 2016, State had obligated all of its NADR funding and disbursed 40 percent of obligated funding from this account. Twenty-one of USAID’s 271 reimbursements for obligations incurred prior to the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, were not in accordance with the reimbursement provisions of the Act. These 21 transactions account for over $60 million, or roughly 15 percent, of the approximately $401 million reimbursed. Because USAID did not have the legal authority to make the reimbursements that were not in accordance with the reimbursement provisions in the Act, these reimbursements represent unauthorized transfers. As of October 2016, USAID had not determined whether it had budget authority to support the obligations against the accounts that were improperly reimbursed, possibly resulting in a violation of the Antideficiency Act. In addition, USAID has not developed written policies or procedures to guide staff on appropriate steps to take for the reimbursement process. Such policies or procedures could provide USAID with reasonable assurance that it will comply with reimbursement provisions and mitigate the risk of noncompliance. Of 271 reimbursements that USAID made for obligations incurred prior to the enactment of the Act, USAID made 21 reimbursements (totaling over $60 million) that were not in accordance with the Act. 
These 21 reimbursements represent roughly 15 percent of the approximately $401 million that USAID obligated for reimbursements, which is part of the almost $1.5 billion that had been obligated as of July 1, 2016. USAID made the other 250 reimbursements (totaling about $341 million) in accordance with the reimbursement provisions of the Act for funds that OFDA and FFP obligated prior to enactment of the Act. To assess whether USAID’s reimbursements were in accordance with the Act, we reviewed USAID documentation to determine whether each reimbursement met the following four provisions of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015: (1) the reimbursement was made to the same appropriation account as the account from which the U.S. Agency for International Development originally obligated the funds; (2) the obligation was incurred prior to the December 16, 2014, enactment of the Act; (3) the obligation was incurred to prevent, prepare for, and respond to the Ebola outbreak; and (4) the reimbursement was made from the International Disaster Assistance, Economic Support Fund, and Global Health Programs accounts of the fiscal year 2015 appropriation. For the 21 reimbursements (totaling over $60 million), USAID did not reimburse the same appropriation accounts as the accounts from which USAID originally obligated the funds. Moreover, for 4 of the 21 reimbursements, USAID reimbursed obligations for which it did not document that it incurred the obligation to prevent, prepare for, or respond to the Ebola outbreak. First, USAID did not reimburse the same appropriation account as the account from which it originally obligated the funds. The Bureau for Global Health; FFP; and the missions in Guinea, Liberia, and Senegal incurred over $60 million in obligations from the GHP, Title II of the Food for Peace Act (Title II), and Development Assistance accounts. Although USAID used funding from the ESF and IDA accounts of the fiscal year 2015 appropriation to make 21 reimbursements for these obligations, the agency did not reimburse the GHP, Title II, and Development Assistance accounts from which it originally obligated the funds, as the Act required (see table 6). Instead, USAID reimbursed the office, bureau, or mission that originally obligated the funds. These reimbursements were made to different accounts, with different spending authorities, than those from which the original obligation was made. For example, FFP obligated approximately $30 million in Title II account funds for the World Food Program’s regional program to address the urgent food needs of populations affected by the Ebola outbreak, but USAID used approximately $30 million in IDA account funds from the fiscal year 2015 appropriation to reimburse FFP and credited the funds to FFP’s IDA account. Second, USAID reimbursed obligations for which it did not document that it incurred the obligation to prevent, prepare for, and respond to the Ebola outbreak. USAID made four reimbursements, totaling about $3.3 million, for which USAID was unable to provide documentation showing that the original obligations funded activities to prevent, prepare for, and respond to the Ebola outbreak. For example, USAID reimbursed approximately $1.3 million for an obligation incurred by the Bureau for Global Health in September 2014. However, USAID officials noted that they did not document the amount or purpose of the obligation at the time the funds were obligated.
In July 2016, in response to our inquiry, USAID prepared a memorandum to attest that the original obligation was approximately $1.3 million and funded technical assistance for Ebola preparedness to 13 African countries. In another instance, USAID reimbursed $500,000 for an obligation incurred for the preparation of laboratories to detect Middle East Respiratory Syndrome (MERS) and other acute respiratory infections in the Africa region. However, USAID was unable to provide us documentation showing how the obligation funded activities related to the Ebola outbreak. Because USAID did not have the legal authority to make the reimbursements that were not in accordance with the reimbursement provisions in the Act, these 21 reimbursements represent unauthorized transfers. Further, the Antideficiency Act prohibits an agency from obligating or expending more than has been appropriated to it. As of October 2016, USAID had not determined whether it had the budget authority to support the obligations against the accounts that were improperly reimbursed. See appendix V for a complete list of USAID’s reimbursements for obligations incurred prior to the fiscal year 2015 appropriation. The concerns we identified with the 21 reimbursements could partly be attributed to the fact that USAID has not developed written policies or procedures to guide staff on appropriate steps to take to make reimbursements and document them. Internal control standards require that management design control activities—actions established through policies and procedures—to achieve objectives and respond to risks and implement such activities through policies. By developing written policies and procedures, offices and bureaus would know what steps they should take to ensure that reimbursements are made in accordance with an appropriation act and other applicable appropriations laws and are recorded in a manner that allows documentation to be readily available for examination. Such written policies and procedures could provide USAID with reasonable assurance that it is complying with reimbursement provisions and help mitigate the risk of noncompliance. Without written policies and procedures, USAID does not have a process in place for making reimbursements and maintaining documentation to show that it complied with the reimbursement provisions of the Act. Our review found that USAID made 21 reimbursements that it did not have the legal authority to make, possibly resulting in the obligation or expenditure of funds in excess of appropriations, which would violate the Antideficiency Act. In addition, because USAID did not have written policies and procedures for maintaining documentation of the reimbursements, USAID officials spent several months locating records to demonstrate to us that USAID had reimbursed 250 obligations in a manner consistent with the provisions of the Act. USAID officials also noted that because they did not document the purpose of some obligations at the time the funds were obligated, USAID officials had to create a memorandum to explain how two reimbursed obligations funded USAID activities to prevent, prepare for, and respond to the Ebola outbreak. West Africa has experienced the largest and most complex Ebola outbreak since the virus was first discovered, resulting in more than 11,000 deaths. The outbreak not only disrupted the already weak health- care systems in Guinea, Liberia, and Sierra Leone but also contributed to reduced economic activity, job loss, and food insecurity in these countries. 
USAID and State have funded a range of activities to control the outbreak and will continue to fund longer-term efforts to mitigate the second-order effects and strengthen global health security. The Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, authorized the reimbursement of obligations incurred prior to enactment to prevent, prepare for, and respond to the Ebola outbreak. While most of USAID’s reimbursements were made in accordance with the reimbursement provisions of the Act, roughly 15 percent of the funds that USAID reimbursed were not. In particular, USAID did not in all cases reimburse the same appropriation accounts from which it obligated funds as required by the Act, and in several instances it made reimbursements for obligations for which it did not document that it incurred the obligation to prevent, prepare for, and respond to the Ebola outbreak. As a result, USAID may not have the budget authority to support the obligations against the accounts that were erroneously reimbursed, which may result in a violation of the Antideficiency Act. Furthermore, without written policies and procedures, USAID risks the possibility of noncompliance recurring should USAID be granted reimbursement authority in the future. To ensure that USAID reimburses funds in accordance with section 9002 of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, we recommend that the Administrator of USAID take the following three actions: 1. Reverse reimbursements that were not made to the same appropriation account as the account from which USAID obligated the funds. 2. Reverse reimbursements for which there is no documentary evidence that the obligation was incurred to prevent, prepare for, and respond to the Ebola outbreak. 3. Determine whether reversing any of these reimbursements results in the obligation of funds in excess of appropriations in violation of the Antideficiency Act and, if so, report any violations in accordance with law. To help ensure that USAID complies with reimbursement provisions that may arise in future appropriations laws, we recommend that the Administrator of USAID develop written policies and procedures for the agency’s reimbursement process. We provided a draft of this report to USAID and State for comment. In its written comments, reproduced in appendix VI, USAID agreed with our findings and recommendations. USAID noted that it would reverse reimbursements that were not made to the same appropriation account as the account from which USAID originally obligated funds, and that it is reviewing whether it has violated the Antideficiency Act. State did not provide formal comments. USAID and State also provided technical comments, which we incorporated throughout the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the U.S. Agency for International Development, the Department of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. 
The Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act), included a provision for us to conduct oversight of the U.S. Agency for International Development’s (USAID) and the Department of State’s (State) activities to prevent, prepare for, and respond to the 2014 Ebola outbreak and reimbursements made. We examined (1) USAID’s and State’s obligations and disbursements for Ebola activities and (2) the extent to which USAID made reimbursements in accordance with the requirements of the fiscal year 2015 appropriations act. To examine USAID’s and State’s obligations and disbursements for Ebola activities, we analyzed each agency’s allocation, obligation, and disbursement data for Ebola response and preparedness activities reported in USAID’s and State’s reports to the Senate and House Committees on Appropriations mandated by the Act. We analyzed USAID’s and State’s reported data on allocations, obligations, and disbursements that USAID and State reported from February 3, 2015, to July 1, 2016. We obtained and analyzed the data from USAID for the International Disaster Assistance; Global Health Programs; Economic Support Fund; USAID Operating Expenses; Nonproliferation, Antiterrorism, Demining, and Related Programs; and USAID Office of Inspector General appropriation accounts. We obtained and analyzed the data for the Diplomatic & Consular Programs account from State, which State reported separately. We reviewed each agency’s reports to congressional committees to determine the activities funded and reasons for any changes in the agencies’ cost estimates and use of the funds. We interviewed USAID and State officials in Washington, D.C., from each bureau and office that administers funding from accounts of the fiscal year 2015 appropriation to discuss the status of the agencies’ obligations and disbursements for activities to prevent, prepare for, and respond to the Ebola outbreak in West Africa, methods for reporting the funding data, and plans for obligating and disbursing the funds. We also interviewed officials from each of USAID’s and State’s offices that report on the funds to obtain information about their budgeting process and terms to determine the best method for analyzing the data across accounts. We then reviewed the data and information reported and consulted with USAID and State officials on the accuracy and completeness of the data and information. When we found data discrepancies, we contacted relevant agency officials and obtained information from them necessary to resolve the discrepancies. To assess the reliability of the data provided, we requested and reviewed information from agency officials regarding the underlying financial data systems and the checks and reviews used to generate the data and ensure its accuracy and reliability. To further ensure the reliability of the data, we obtained from USAID officials data from the financial data system that USAID used to report obligation and disbursement data and compared them to a sample of the data that we analyzed. We determined that the data we used were sufficiently reliable for our purposes of examining USAID’s and State’s obligations and disbursements of the funds. To examine the extent to which USAID made reimbursements in accordance with the Act, we reviewed USAID’s reports to the Senate and House Committees on Appropriations mandated by the Act to determine the obligations that the agency incurred prior to enactment and the reimbursements that USAID made for the obligations. 
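The data reliability step described above, comparing reported obligation and disbursement figures against records pulled from the underlying financial data system, can be pictured as a simple reconciliation. The sketch below is illustrative only; the award identifiers, amounts, and the 1 percent tolerance are assumptions, not GAO's actual test.

```python
# Illustrative reconciliation of reported figures against system-of-record data;
# the awards, amounts, and tolerance are hypothetical, not GAO's actual test.
reported = {          # obligation amounts taken from the agency's reports (dollars)
    "award-001": 1_250_000,
    "award-002": 480_000,
    "award-003": 2_100_000,
}
system_of_record = {  # the same awards as sampled from the financial data system
    "award-001": 1_250_000,
    "award-002": 475_000,
    "award-003": 2_100_000,
}
TOLERANCE = 0.01      # flag differences greater than 1 percent

discrepancies = []
for award, reported_amt in reported.items():
    system_amt = system_of_record.get(award)
    if system_amt is None:
        discrepancies.append((award, reported_amt, "missing from system"))
    elif abs(reported_amt - system_amt) > TOLERANCE * system_amt:
        discrepancies.append((award, reported_amt, f"system shows {system_amt:,}"))

for award, reported_amt, note in discrepancies:
    print(f"{award}: reported {reported_amt:,}; {note}")   # award-002 is flagged
```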
We obtained data and documentation from each USAID bureau and office that administered the appropriation accounts for the obligations incurred prior to the fiscal year 2015 appropriation. To assess the reliability of the data provided, we requested and reviewed information from USAID officials regarding the underlying financial data systems and the checks and reviews used to generate the data and ensure its accuracy and reliability. We determined that the data we used were sufficiently reliable for our purposes of examining USAID’s reimbursements. We analyzed these data and reviewed documents to determine the dates and amounts of the obligations, the activities that the obligations funded, and the appropriation accounts used for the obligations. We also analyzed the data and reviewed documents to determine the amounts of the reimbursements, the appropriation accounts from the fiscal year 2015 appropriation that USAID used for the reimbursements, and the appropriation accounts to which USAID reimbursed the funds. We reviewed USAID’s congressional notifications and award documentation to determine the amounts, purposes, and appropriation accounts for the obligations that USAID incurred prior to the enactment of the Act as well as the amounts and appropriation accounts from which USAID reimbursed the funds. We interviewed USAID and State officials about the extent to which the agencies incurred obligations prior to the appropriation and reimbursed the funds. We also interviewed USAID officials and reviewed USAID’s policies for the management of funds to determine the extent to which the agency had developed and implemented written policies or procedures for the reimbursement process. We assessed the extent to which the obligations and reimbursements were consistent with the provisions of the Act as well as the extent to which the procedures USAID used met relevant internal control standards. We conducted this performance audit from June 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As of July 1, 2016, the U.S. Agency for International Development (USAID) and the Department of State (State) had obligated the largest share of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act) Ebola funding for activities to control the outbreak (Pillar 1). USAID had obligated lesser amounts of funding for activities to mitigate second-order impacts of Ebola (Pillar 2) and activities to strengthen global health security (Pillar 4). USAID obligated a minimal amount of funding for building coherent leadership and operations (Pillar 3). For all strategy pillars, the largest percentage increase in obligations occurred between April 1, 2015, and July 1, 2015 (see fig. 9). The U.S. Agency for International Development (USAID) obligated funding from the International Disaster Assistance (IDA) and Economic Support Fund (ESF) accounts for almost all of its country-specific Ebola funding. More than 80 percent of the funding that USAID has obligated for Ebola activities in Guinea, Liberia, and Sierra Leone is from the IDA account. 
Since USAID began its reporting to Congress on Ebola funding in February 2015, IDA obligations for Liberia show the greatest increase between February 3, 2015, and July 1, 2015, and ESF obligations for Liberia show the greatest increase between April 1, 2015, and October 1, 2015 (see fig. 10). ESF disbursements do not begin before July 1, 2015. USAID deobligated and reprogrammed IDA funding for Liberia between January 1, 2016, and July 1, 2016, as it needed fewer funds to control the outbreak. Consistent with Liberia, since USAID began reporting to Congress on its Ebola funding in February 2015, USAID obligations of IDA funding for Sierra Leone show the greatest increases between February 3, 2015, and July 1, 2015. However, USAID made no obligations of ESF funding before April 1, 2015, and no disbursements of ESF funding until after July 1, 2015 (see fig. 11). Since USAID began reporting to Congress on its Ebola funding in February 2015, the greatest increase in obligations from the IDA account for Guinea occurred between February 3, 2015, and July 1, 2015 (see fig. 12). The International Disaster Assistance (IDA) account represents the majority of funding obligated and disbursed for Ebola activities to control the outbreak. After the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015, obligations and disbursements from the IDA account increased the most between April 1, 2015, and July 1, 2015 (see fig. 13). The U.S. Agency for International Development (USAID) deobligated and reprogrammed IDA funding between January 1, 2016, and July 1, 2016, as it needed fewer funds to control the outbreak (see fig. 13). The Office of U.S. Foreign Disaster Assistance (OFDA) programmed the majority of IDA funding. The Economic Support Fund (ESF) account represents the majority of funding obligated and disbursed for activities to mitigate the second-order impacts of Ebola. USAID increased obligations of ESF significantly between April 1, 2015, and October 1, 2015, as new cases of Ebola decreased. Disbursements of ESF funding did not increase significantly until after July 1, 2015 (see fig. 14). USAID’s Bureau for Global Health and Global Development Lab programmed the majority of ESF funding. The Global Health Programs (GHP) account represents the majority of funding obligated and disbursed for activities to strengthen global health security. USAID did not obligate any GHP funding between February 3, 2015, and April 1, 2015, and did not report disbursements until August 1, 2015, as USAID intends GHP to fund longer-term activities to respond to future outbreaks (see fig. 15). The Bureau for Global Health programs almost all GHP funding. The U.S. Agency for International Development (USAID) made 271 reimbursements for the approximately $401 million total obligations that USAID’s Bureau for Global Health; Office of Food for Peace (FFP); Office of U.S. Foreign Disaster Assistance (OFDA); and the missions in Guinea, Liberia, and Senegal incurred prior to the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act). Of the 271 reimbursements, USAID made 250 reimbursements (totaling about $341 million) in accordance with the reimbursement provisions of the Act. USAID made these 250 reimbursements for funds that OFDA and FFP obligated prior to enactment of the Act. However, USAID made 21 reimbursements (totaling over $60 million) that did not accord with the reimbursement provisions of the Act. 
USAID made these 21 reimbursements for funds that the Bureau for Global Health, FFP, and the missions obligated prior to enactment of the Act. Tables 7 through 9 provide information about these 21 reimbursements and the requirements of the Act that each of the reimbursements did not meet. In addition to the contact named above, Valérie L. Nowak (Assistant Director), Bradley Hunt (Analyst-in-Charge), Ashley Alley, Bryan Bourgault, Debbie Chung, Neil Doherty, Rachel Dunsmoor, Jill Lacey, Amanda Postiglione, and Matthew Valenta made key contributions to this report.
In March 2014, the World Health Organization reported an Ebola outbreak in West Africa and, as of June 2016, reported that the outbreak had resulted in more than 11,000 deaths in Guinea, Liberia, and Sierra Leone. USAID and State initially funded Ebola activities using funds already appropriated. In December 2014, Congress appropriated approximately $2.5 billion to USAID and State, in part, for international efforts to prevent, prepare for, and respond to an Ebola outbreak and mandated that the agencies report periodically on their use of the funds. Congress also allowed the agencies to reimburse accounts for obligations incurred for Ebola activities prior to the fiscal year 2015 appropriation. The Act also included a provision for GAO to conduct oversight of USAID and State activities to prevent, prepare for, and respond to the Ebola outbreak. This report examines (1) USAID's and State's obligations and disbursements for Ebola activities and (2) the extent to which USAID made reimbursements in accordance with the fiscal year 2015 appropriations act. GAO analyzed USAID and State funding, reviewed documents on Ebola activities, and interviewed agency officials. As of July 1, 2016, the U.S. Agency for International Development (USAID) and the Department of State (State) had obligated 58 percent and disbursed more than one-third of the $2.5 billion appropriated for Ebola activities. In the early stages of the U.S. response in West Africa, USAID obligated $883 million to control the outbreak, and State obligated $34 million for medical evacuations, among other activities. Subsequently, the United States shifted focus to mitigating second-order impacts, such as the deterioration of health services and food insecurity, and strengthening global health security. Accordingly, USAID obligated $251 million to restore health services, among other activities, and $183 million for activities such as strengthening disease surveillance, while State obligated $5 million for biosecurity activities. Of 271 reimbursements that USAID made for obligations incurred prior to the enactment of the Department of State, Foreign Operations, and Related Programs Appropriations Act, 2015 (the Act), USAID made 21 reimbursements, totaling over $60 million, that were not in accordance with the Act. These 21 reimbursements represent roughly 15 percent of the $401 million that USAID obligated for reimbursements, of the almost $1.5 billion that had been obligated as of July 1, 2016 (see fig.). For these 21 reimbursements, USAID did not reimburse the same appropriation accounts as the accounts from which it originally obligated the funds, and therefore it did not have legal authority to make these reimbursements. In addition, four reimbursements were for obligations that USAID did not document were for Ebola activities. In reviewing the reimbursements, GAO found that USAID does not have written policies or procedures for staff to follow in making and documenting reimbursements. As a result, USAID does not have a process that could provide reasonable assurance that it complies with reimbursement provisions of applicable appropriations laws, such as the reimbursement provisions in the Act. GAO is making four recommendations, including that USAID should reverse reimbursements not made in accordance with the Act and develop written policies and procedures for its reimbursement process. USAID concurred with GAO's recommendations.
The SBInet program is responsible for identifying and deploying an appropriate mix of technology (e.g., sensors, cameras, radars, communications systems, and mounted laptop computers for agent vehicles), tactical infrastructure (e.g., fencing, vehicle barriers, and roads), rapid response capability (e.g., ability to quickly relocate operational assets and personnel), and personnel (e.g., program staff and Border Patrol agents) that will enable CBP agents and officers to gain effective control of U.S. borders. SBInet technology is also intended to include the development and deployment of a common operating picture (COP) that provides uniform data through a command center environment to Border Patrol agents in the field and all DHS agencies and to be interoperable with stakeholders external to DHS, such as local law enforcement. The initial focus of SBInet is on the southwest border areas between ports of entry that CBP has designated as having the highest need for enhanced border security because of serious vulnerabilities. Through SBInet, CBP plans to complete a minimum of 387 miles of technology deployment across the southwest border by December 31, 2008. Figure 1 shows the location of select SBInet projects underway on the southwest border.

In September 2006, CBP awarded a prime contract to the Boeing Company for 3 years, with three additional 1-year options. As the prime contractor, Boeing is responsible for acquiring, deploying, and sustaining selected SBInet technology and tactical infrastructure projects. In this way, Boeing has extensive involvement in the SBInet program requirements development, design, production, integration, testing, and maintenance and support of SBInet projects. Moreover, Boeing is responsible for selecting and managing a team of subcontractors that provide individual components for Boeing to integrate into the SBInet system. The SBInet contract is largely performance-based—that is, CBP has set requirements for SBInet, and Boeing and CBP coordinate and collaborate to develop solutions to meet these requirements—and designed to maximize the use of commercial off-the-shelf technology. CBP's SBInet PMO oversees and manages the Boeing-led SBInet contractor team. The SBInet PMO workforce includes a mix of government and contractor support staff. The SBInet PMO reports to the CBP SBI Program Executive Director.

CBP is executing part of SBInet activities through a series of task orders to Boeing for individual projects. As of September 30, 2007, CBP had awarded five task orders to Boeing for SBInet projects. These include task orders for (1) Project 28, Boeing's pilot project and initial implementation of SBInet technology to achieve control of 28 miles of the border in the Tucson sector; (2) Project 37, for construction of approximately 32 miles of vehicle barriers and pedestrian fencing in the Yuma sector along the Barry M. Goldwater Range (BMGR); (3) Program Management, for engineering, facilities and infrastructure, test and evaluation, and general program management services; (4) Fence Lab, a project to evaluate the performance and cost of deploying different types of fences and vehicle barriers; and (5) a design task order for developing the plans for several technology projects to be located in the Tucson, Yuma, and El Paso sectors. 
In addition to deploying technology across the southwest border, the SBInet PMO plans to deploy 370 miles of single-layer pedestrian fencing and 200 miles of vehicle barriers by December 31, 2008. Whereas pedestrian fencing is designed to prevent people on foot from crossing the border, vehicle barriers are other physical barriers meant to stop the entry of vehicles. The SBInet PMO is utilizing the U.S. Army Corps of Engineers (USACE) to contract for fencing and supporting infrastructure (such as lights and roads), complete required environmental assessments, and acquire necessary real estate. DHS has estimated that the total cost for completing the deployment for the southwest border—the initial focus of SBInet deployment—will be $7.6 billion from fiscal years 2007 through 2011. DHS has not yet reported the estimated life cycle cost for this program, which is the total cost to the government for a program over its full life, consisting of research and development, operations, maintenance, and disposal costs. For fiscal year 2007, Congress appropriated about $1.2 billion for SBInet, of which DHS had committed or obligated about 40 percent as of September 30, 2007. For fiscal year 2008, DHS has requested an additional $1 billion.

DHS has made some progress in implementing Project 28, the first segment of technology on the southwest border, but the project has fallen behind its planned schedule. Project 28 is the first opportunity for Boeing to demonstrate that its technology system can meet SBInet performance requirements in a real-life environment. Boeing's inability thus far to resolve system integration issues has left Project 28 incomplete more than 4 months after its planned June 13 milestone to become operational—at which point Border Patrol agents were to begin using SBInet technology to support their activities, and CBP was to begin its operational test and evaluation phase. Boeing delivered and deployed the individual technology components of Project 28 on schedule. Nevertheless, CBP and Boeing officials reported that Boeing has been unable to effectively integrate the information collected from several of the newly deployed technology components, such as sensor towers, cameras, radars, and unattended ground sensors. Among several technical problems reported were that it was taking too long for radar information to display in command centers and that newly deployed radars were being activated by rain, making the system unusable. In August 2007, CBP officially notified Boeing that it would not accept Project 28 until these and other problems were corrected. In September 2007, CBP officials told us that Boeing was making progress in correcting the system integration problems; however, CBP was unable to provide us with a specific date when Boeing would complete the corrections necessary to make Project 28 operational. See figures 2 and 3 below for photographs of SBInet technology along the southwest border.

The SBInet PMO reported that it is in the early stages of planning for additional SBInet technology projects along the southwest border; however, Boeing's delay in completing Project 28 has led the PMO to change the timeline for deploying some of these projects. In August 2007, SBInet PMO officials told us they were revising the SBInet implementation plan to delay interim project milestones for the first phase of SBInet technology projects, scheduled for calendar years 2007 and 2008. 
For example, SBInet PMO officials said they were delaying the start dates for two projects that were to be modeled on the design used for Project 28 until after Project 28 is operational and can provide lessons learned for planning and deploying additional SBInet technology along the southwest border. According to the SBInet master schedule dated May 31, 2007, these projects were to become operational in December 2007 and May 2008. Despite these delays, SBInet PMO officials said they still expected to complete all of the first phase of technology projects by the end of calendar year 2008. As of October 15, 2007, the SBInet PMO had not provided us with a revised deployment schedule for this first phase.

CBP reports that it is taking steps to strengthen its contract management for Project 28. For example, citing numerous milestone slippages by Boeing during Project 28 implementation, in August 2007, CBP sought and reached an agreement with Boeing to give the government greater influence in milestone setting and planning corrective actions on the Project 28 task order. While CBP had selected a firm-fixed-price contract to limit cost overruns on Project 28, CBP officials told us that this contract type had limited the government's role in directing Boeing in its decision-making process. For example, CBP and contractor officials told us they expressed concern about the timeline for completing Project 28, but CBP chose not to modify the contract because doing so would have made CBP responsible for costs beyond the $20 million fixed-price contract. In mid-August 2007, CBP organized a meeting with Boeing representatives to discuss ways to improve the collaborative process, the submission of milestones, and Boeing's plan to correct Project 28 problems. Following this meeting, CBP and Boeing initiated a Change Control Board. In mid-September, representatives from Boeing's SBInet team and its subcontractors continued to participate on this board and vote on key issues for solving Project 28 problems. Although CBP participates on this committee as a non-voting member, a senior SBInet official said the government's experience on the Change Control Board had been positive thus far. For example, the official told us that the Change Control Board had been useful for improving coordination and integration with Boeing and for suggesting changes to the subcontractor companies working on Project 28.

Deploying SBInet's tactical infrastructure along the southwest border is on schedule, but meeting the SBInet program's goal to have 370 miles of pedestrian fence and 200 miles of vehicle barriers in place by December 31, 2008, may be challenging and more costly than planned. CBP set an intermediate goal to deploy 70 miles of new pedestrian fencing by the close of fiscal year 2007 and, having deployed 73 miles by this date, achieved its goal. Table 1 summarizes CBP's progress and plans for tactical infrastructure deployment. Costs for the 73 miles of fencing constructed in fiscal year 2007 averaged $2.9 million per mile and ranged from $700,000 per mile in San Luis, Arizona, to $4.8 million per mile in Sasabe, Arizona. CBP also deployed 11 miles of vehicle barriers and, although CBP has not yet been able to provide us with the cost of these vehicle barriers, it projects that the average per-mile cost for the first 75 miles of barriers it deploys will be $1.5 million. Figure 4 presents examples of fencing deployed. CBP estimates that costs for the deployment of fencing in the future will be similar to those thus far. 
However, according to CBP officials, costs vary due to the type of terrain, materials used, land acquisition, who performs the construction, and the need to meet an expedited schedule. Although CBP estimates that the average cost of remaining fencing will be $2.8 million per mile, actual future costs may be higher due to factors such as the greater cost of commercial labor, higher than expected property acquisition costs, and unforeseen costs associated with working in remote areas. To minimize one of the many factors that add to cost, in the past DHS has used Border Patrol agents and DOD military personnel. However, CBP officials reported that they plan to use commercial labor for future infrastructure projects to meet their deadlines. Of the 73 miles of fencing completed to date, 31 were completed by DOD military personnel and 42 were constructed through commercial contracts. While the non-commercial projects cost an average of $1.2 million per mile, the commercial projects averaged over three times more—$4 million. According to CBP officials, CBP plans to utilize exclusively commercial contracts to complete the remaining 219 miles of fencing. If contract costs for deployment of remaining miles are consistent with those to deploy tactical infrastructure to date and average $4 million per mile, the total contract cost will be $890 million, considerably more than CBP’s initial estimate of $650 million. Although deployment of tactical infrastructure is on schedule, CBP officials reported that meeting deadlines has been challenging because factors they will continue to face include conducting outreach necessary to address border community resistance, devoting time to identify and complete steps necessary to comply with environmental regulations, and addressing difficulties in acquiring rights to border lands. As of July 2007 CBP anticipated community resistance to deployment for 130 of its 370 miles of fencing. According to community leaders, communities resist fencing deployment for reasons including the adverse effect they anticipate it will have on cross-border commerce and community unity. In addition to consuming time, complying with environmental regulations, and acquiring rights to border land can also drive up costs. Although CBP officials state that they are proactively addressing these challenges, these factors will continue to pose a risk to meeting deployment targets. In an effort to identify low cost and easily deployable fencing solutions, CBP funded a project called Fence Lab. CBP plans to try to contain costs by utilizing the results of Fence Lab in the future. Fence Lab tested nine fence/barrier prototypes and evaluated them based on performance criteria such as their ability to disable a vehicle traveling at 40 miles per hour (see fig. 5), allowing animals to migrate through them, and their cost- effectiveness. Based on the results from the lab, SBInet has developed three types of vehicle barriers and one pedestrian fence that meet CBP operational requirements (see fig. 6). The pedestrian fence can be installed onto two of these vehicle barriers to create a hybrid pedestrian fence and vehicle barrier. CBP plans to include these solutions in a “toolkit” of approved fences and barriers, and plans to deploy solutions from this toolkit for all remaining vehicle barriers and for 202 of 225 miles of remaining fencing. Further, CBP officials anticipate that deploying these solutions will reduce costs because cost-effectiveness was a criterion for their inclusion in the toolkit. 
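The per-mile figures above imply the cost projections cited in this section. The following worked arithmetic uses only numbers from the text; small differences from the reported $2.9 million average and the roughly $890 million projection reflect rounding of the inputs.

```python
# Worked arithmetic for the fencing cost figures above; all inputs come from the text.
noncommercial_miles, noncommercial_per_mile = 31, 1.2e6   # built with DOD military labor
commercial_miles, commercial_per_mile = 42, 4.0e6         # built under commercial contracts

built_miles = noncommercial_miles + commercial_miles       # 73 miles in fiscal year 2007
blended_avg = (noncommercial_miles * noncommercial_per_mile
               + commercial_miles * commercial_per_mile) / built_miles
print(f"Blended fiscal year 2007 average: ${blended_avg / 1e6:.1f} million per mile")
# about $2.8 million, close to the $2.9 million average reported above

remaining_miles = 219                                      # to be built commercially
projected_cost = remaining_miles * commercial_per_mile
print(f"Projected contract cost for remaining miles: ${projected_cost / 1e6:,.0f} million")
# about $876 million, consistent with the roughly $890 million projection cited above,
# versus CBP's initial estimate of $650 million
```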
SBInet officials also told us that widely deploying a select set of vehicle barriers and fences will lower costs through enabling it to make bulk purchases of construction and maintenance materials. While SBInet Program officials expect SBInet to greatly reduce the time spent by CBP enforcement personnel in performing detection activities, a full evaluation of SBInet’s impact on the Border Patrol’s workforce needs has not been completed. The Border Patrol currently uses a mix of resources including personnel, technology, infrastructure, and rapid response capabilities to incrementally achieve its strategic goal of establishing and maintaining operational control of the border. Each year through its Operational Requirements Based Budget Program (ORBBP), the Border Patrol sectors outline the amount of resources needed to achieve a desired level of border control. Border Patrol officials state this annual planning process allows the organization to measure the impact of each type of resource on the required number of Border Patrol agents. A full evaluation of SBInet’s impact on the Border Patrol’s workforce needs is not yet included in the ORBBP process; however, the Border Patrol plans to incorporate information from Project 28 a few months after it is operational. According to agency officials, CBP is on track to meet its hiring goal of 6,000 new Border Patrol agents by December 2008, but after SBInet is deployed, CBP officials expect the number of Border Patrol agents required to meet mission needs to change from current projections, although the direction and magnitude of the change is unknown. In addition, in June 2007, we expressed concern that deploying these new agents to the southwest sectors coupled with the planned transfer of more experienced agents to the northern border will create a disproportionate ratio of new agents to supervisors within those sectors—jeopardizing the supervisors’ availability to acclimate new agents. Tucson Sector officials stated CBP is planning to hire from 650 to 700 supervisors next year. To accommodate the additional agents, the Border Patrol has taken initial steps to provide additional work space through constructing temporary and permanent facilities, at a projected cost of about $550 million from fiscal year 2007 to 2011. The SBInet PMO expects SBInet to support day-to-day border enforcement operations; however, analysis of the impact of SBInet technology on the Border Patrol’s operational procedures cannot be completed at this time because agents have not been able to fully use the system as intended. Leveraging technology is part of the National Border Patrol Strategy which identifies the objectives, tools, and initiatives the Border Patrol uses to maintain operational control of the borders. The Tucson sector, where Project 28 is being deployed, is developing a plan on how to integrate SBInet into its operating procedures. Border Patrol officials stated they intend to re-evaluate this strategy, as SBInet technology is identified and deployed, and as control of the border is achieved. According to agency officials, 22 trainers and 333 operators were trained on the current Project 28 system, but because of deployment delays and changes to the COP software, the SBInet training curriculum is to be revised by Boeing and the government. Training is continuing during this revision process with 24 operators being trained each week. 
According to CBP officials, Border Patrol agents are receiving "hands-on" training during evening and weekend shifts at the COP workstations to familiarize themselves with the recent changes made to the Project 28 system. However, training is to be stopped once a stabilized version of the COP can be used, and both trainers and operators are to be retrained using the revised curriculum. Costs associated with revising the training material and retraining the agents are to be covered by Boeing as part of the Project 28 task order; however, the government may incur indirect costs associated with taking agents offline for retraining.

The SBI PMO tripled in size in fiscal year 2007 but fell short of its staffing goal of 270 employees. As of September 30, 2007, the SBI PMO had 247 employees onboard, with 113 government employees and 134 contractor support staff. SBI PMO officials also reported that as of October 19, 2007, they had 76 additional staff awaiting background investigations. In addition, these officials said that a Human Capital Management Plan has been drafted, but as of October 22, 2007, the plan had not been approved. In February 2007, we reported that SBInet officials had planned to finalize a human capital strategy that was to include details on staffing and expertise needed for the program. At that time, SBI and SBInet officials expressed concern about difficulties in finding an adequate number of staff with the required expertise to support planned activities and cautioned that staffing shortfalls could limit government oversight efforts. Strategic human capital planning is a key component used to define the critical skills and competencies that will be needed to achieve programmatic goals and outlines ways the organization can fill gaps in knowledge, skills, and abilities. Until SBInet fully implements a comprehensive human capital strategy, it will continue to risk not having staff with the right skills and abilities to successfully execute the program.

Project 28 and other early technology and infrastructure projects are the first steps on a long journey toward SBInet implementation that will ultimately require an investment of billions of taxpayer dollars. Some of these early projects have encountered unforeseen problems that could affect DHS's ability to meet projected completion dates, expected costs, and performance goals. These issues underscore the need for both DHS and Boeing, as the prime contractor, to continue to work cooperatively to correct the problems remaining with Project 28 and to ensure that the SBInet PMO has adequate staff to effectively plan and oversee future projects. These issues also underscore Congress's need to stay closely attuned to DHS's progress in the SBInet program to make sure that performance, schedule, and cost estimates are achieved and the nation's border security needs are fully addressed.

This concludes my prepared testimony. I would be happy to respond to any questions that members of the Subcommittees may have. For questions regarding this testimony, please call Richard M. Stana at (202) 512-8777 or StanaR@gao.gov. Other key contributors to this statement were Robert E. White, Assistant Director; Rachel Beers; Jason Berman; Katherine Davis; Jeanette Espínola; Taylor Matheson; and Sean Seales. 
To determine the progress that the Department of Homeland Security (DHS) has made in implementing the Secure Border Initiative (SBI) SBInet’s technology deployment projects, we analyzed DHS documentation, including program schedules, project task orders, status reports, and expenditures. We also interviewed DHS and the U.S. Customs and Border Protection (CBP) headquarters and field officials, including representatives of the SBInet Program Management Office (PMO), Border Patrol, CBP Air and Marine, and the DHS Science and Technology Directorate, as well as SBInet contractors. We visited the Tucson Border Patrol sector—the site where SBInet technology deployment was underway at the time of our review. To determine the progress that Department of Homeland Security (DHS) has made in infrastructure project implementation, we analyzed DHS documentation, including schedules, contracts, status reports, and expenditures. In addition, we interviewed DHS and CBP headquarters and field officials, including representatives of the SBInet PMO, and Border Patrol. We also interviewed officials from the U.S. Army Corps of Engineers and the Department of the Interior. We visited the Tucson and Yuma, Arizona Border Patrol sectors—two sites where tactical infrastructure projects were underway at the time of our review. We did not review the justification for infrastructure project cost estimates or independently verify the source or validity of the cost information. To determine the extent to which CBP has determined the impact of SBInet technology and infrastructure on its workforce needs and operating procedures, we reviewed documentation of the agency’s decision to hire an additional 6,000 agents and the progress hiring these agents. We also interviewed headquarters and field officials to track if and how CBP (1) is hiring and training its target number of personnel, (2) it is planning to train new agents on SBInet technology, and (3) it will incorporate the new system into its operational procedures, and any implementation challenges it reports facing in conducting this effort. To determine how the SBInet PMO defined its human capital goals and progress it has made in achieving these goals, we reviewed the office’s documentation on its hiring efforts related to SBInet, related timelines, and compared this information with agency goals. We determined that the workforce data were sufficiently reliable for purposes of this report. We also interviewed SBI and SBInet officials to identify challenges in meeting the goals and steps taken by the agency to address those challenges. We performed our work from April 2007 through October 2007 in accordance with generally accepted government auditing standards. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2005, the Department of Homeland Security (DHS) established the Secure Border Initiative (SBI), a multiyear, multibillion dollar program to secure U.S. borders. One element of SBI is SBInet--the U.S. Customs and Border Protection (CBP) program responsible for developing a comprehensive border protection system through a mix of security infrastructure (e.g., fencing), and surveillance and communication technologies (e.g., radars, sensors, cameras, and satellite phones). The House Committee on Homeland Security asked GAO to monitor DHS progress in implementing the SBInet program. This testimony provides GAO's observations on (1) SBInet technology implementation; (2) SBInet infrastructure implementation; (3) the extent to which CBP has determined the impact of SBInet technology and infrastructure on its workforce needs and operating procedures; and (4) how the CBP SBI Program Management Office (PMO) has defined its human capital goals and the progress it has made to achieve these goals. GAO's observations are based on analysis of DHS documentation, such as program schedules, contracts, status, and reports. GAO also conducted interviews with DHS officials and contractors, and visits to sites in the southwest border where SBInet deployment is underway. GAO performed the work from April 2007 through October 2007. DHS generally agreed with GAO's findings. DHS has made some progress to implement Project 28--the first segment of SBInet technology across the southwest border, but it has fallen behind its planned schedule. The SBInet contractor delivered the components (i.e., radars, sensors and cameras) to the Project 28 site in Tucson, Arizona on schedule. However, Project 28 is incomplete more than 4 months after it was to become operational--at which point Border Patrol agents were to begin using SBInet technology to support their activities. According to DHS, the delays are primarily due to software integration problems. In September 2007, DHS officials said that the Project 28 contractor was making progress in correcting the problems, but DHS was unable to specify a date when the system would be operational. Due to the slippage in completing Project 28, DHS is revising the SBInet implementation schedule for follow-on technology projects, but still plans to deploy technology along 387 miles of the southwest border by December 31, 2008. DHS is also taking steps to strengthen its contract management for Project 28. SBInet infrastructure deployment along the southwest border is on schedule, but meeting CBP's goal to have 370 miles of pedestrian fence and 200 miles of vehicle barriers in place by December 31, 2008, may be challenging and more costly than planned. CBP met its intermediate goal to deploy 70 miles of new fencing in fiscal year 2007 and the average cost per mile was $2.9 million. The SBInet PMO estimates that deployment costs for remaining fencing will be similar to those thus far. In the past, DHS has minimized infrastructure construction labor costs by using Border Patrol agents and Department of Defense military personnel. However, CBP officials report that they plan to use commercial labor for future fencing projects. The additional cost of commercial labor and potential unforeseen increases in contract costs suggest future deployment could be more costly than planned. 
DHS officials also reported other challenging factors they will continue to face for infrastructure deployment, including community resistance, environmental considerations, and difficulties in acquiring rights to land along the border. The impact of SBInet on CBP's workforce needs and operating procedures remains unclear because the SBInet technology is not fully identified or deployed. CBP officials expect the number of Border Patrol agents required to meet mission needs to change from current projections, but until the system is fully deployed, the direction and magnitude of the change is unknown. For the Tucson sector, where Project 28 is being deployed, Border Patrol officials are developing a plan on how to integrate SBInet into their operating procedures. The SBI PMO tripled in size during fiscal year 2007, but fell short of its staffing goal of 270 employees. Agency officials expressed concerns that staffing shortfalls could affect the agency's capacity to provide adequate contractor oversight. In addition, the SBInet PMO has not yet completed long-term human capital planning.
In November 2002, the Congress passed IPIA. The major objective of IPIA is to enhance the accuracy and integrity of federal payments. The law requires executive branch agency heads to annually review all programs and activities that they administer, identify those that may be susceptible to significant improper payments, and estimate and report annually on the amount of improper payments in those programs and activities. IPIA also requires the agencies to report annually to the Congress on the actions they are taking to reduce erroneous payments for programs for which estimated improper payments exceed $10 million. IPIA further requires OMB to prescribe guidance for federal agencies to use in implementing the act. OMB issued this guidance in Memorandum M-03-13 in May 2003. It requires use of a systematic method to annually review and identify those programs and activities that are susceptible to significant improper payments. OMB guidance defines significant improper payments as annual improper payments in any particular program exceeding both 2.5 percent of program payments and $10 million. The OMB guidance then requires agencies to estimate the annual amount of improper payments using statistically valid techniques for each susceptible program or activity. For those agency programs, including state-administered programs, determined to be susceptible to significant improper payments and with estimated annual improper payments greater than $10 million, IPIA and related OMB guidance require each agency to report the results of its improper payment efforts. OMB guidance requires the reporting to be in the Management Discussion and Analysis section of the agency's PAR for each fiscal year ending on or after September 30, 2004.

IPIA requires the following information to be reported to the Congress: (1) a discussion of the causes of the improper payments identified, actions taken to correct those causes, and results of the actions taken to address those causes; (2) a statement of whether the agency has the information systems and other infrastructure it needs to reduce improper payments to minimal cost-effective levels; (3) if the agency does not have such systems and infrastructure, a description of the resources the agency has requested in its most recent budget submission to the Congress to obtain the necessary information and infrastructure; and (4) a description of the steps the agency has taken and plans to take to ensure that agency managers are held accountable for reducing improper payments. OMB's guidance in M-03-13 requires that three additional things be included in the PAR: (1) a discussion of the amount of actual erroneous payments that the agency expects to recover and how it will go about recovering them; (2) a description of any statutory or regulatory barriers that may limit the agency's corrective actions in reducing improper payments; and (3) provided the agency has estimated a baseline improper payment rate for the program, a target for the program's future improper payment rate that is lower than the agency's most recent estimated error rate.

In August 2004, OMB established Eliminating Improper Payments as a new program-specific initiative in the President's Management Agenda (PMA). The separate improper payments PMA program initiative began in the first quarter of fiscal year 2005. 
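OMB's two-part significance threshold can be stated compactly. The sketch below simply encodes the 2.5 percent and $10 million tests described above; the example program figures are hypothetical.

```python
# Minimal expression of OMB's two-part significance test described above;
# the example program figures are hypothetical.
def is_significant(improper_payments, program_outlays):
    """Significant if improper payments exceed BOTH 2.5 percent of outlays AND $10 million."""
    return improper_payments > 0.025 * program_outlays and improper_payments > 10_000_000

print(is_significant(12_000_000, 1_000_000_000))  # False: over $10 million but only 1.2 percent
print(is_significant(12_000_000, 400_000_000))    # True: over $10 million and 3.0 percent
```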
Previously, agency efforts related to improper payments were tracked along with other financial management activities as part of the Improving Financial Performance initiative. The objective of establishing a separate initiative for improper payments was to ensure that agency managers are held accountable for meeting the goals of IPIA and are therefore dedicating the necessary attention and resources to meeting IPIA requirements. This program initiative establishes an accountability framework for ensuring that federal agencies initiate all necessary financial management improvements for addressing this significant and widespread problem. Specifically, agencies are to measure their improper payments annually, develop improvement targets and corrective actions, and track the results annually to ensure the corrective actions are effective. State responses to our survey show that the number of state-administered federal programs (state programs) estimating improper payments significantly decreases if there is no federal requirement to estimate or if the states are not participating in a federally administered pilot to estimate. For the 25 major programs reviewed for fiscal years 2003 and 2004, all 51 states estimated improper payments where there was a federal requirement to do so. For the federally administered improper payment pilots, the number decreased to 29 states. Where there was no federal requirement or pilot in place, only 11 states reported estimating improper payments on their own initiative, as shown in figure 1. Only 2 of the 25 major programs in our review had federal requirements for all the states to annually estimate improper payments—the Food Stamp and UI programs. In total, 47 states reported estimating improper payments for one or more major programs, which represented 97 program surveys for fiscal year 2003, fiscal year 2004, or both. More than half of the reported estimates were for the Food Stamp and UI programs. Food Stamp and UI program outlays expended by the states totaled about $61 billion for fiscal year 2004. This constitutes about 15 percent of the total federal funds that are estimated to be annually distributed to states and other nonfederal entities for redistribution to eligible parties. Both of these programs are benefit programs, have a history of measuring improper payments through established systems, and can calculate a national error rate. The purpose of the Food Stamp Program is to help low-income individuals and families obtain a more nutritious diet by supplementing their incomes with benefits to purchase food. As reported in USDA’s fiscal year 2005 PAR, the causes of improper payments in the Food Stamp Program include client errors, such as incomplete or inaccurate reporting of income, assets, or both by participants at the time of certification or by not reporting subsequent changes. Causes can also be provider based, such as errors in determining eligibility or benefit amounts or delays in action or inaction on client reported changes. The Food Stamp quality control system measures payment accuracy and monitors how accurately states determine food stamp eligibility and calculate benefits. USDA reports a rate and dollar amount of estimated improper payments for the Food Stamp Program in its annual PAR based on the quality control system. In its fiscal year 2005 PAR, USDA reported a national improper payment error rate of 5.88 percent, or $1.4 billion, for the Food Stamp Program. 
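As a rough check on how the reported rate and dollar estimate relate, dividing the $1.4 billion estimate by the 5.88 percent rate gives the approximate benefit base to which the rate was applied. This is a back-of-the-envelope derivation, not a figure USDA reports.

```python
# Back-of-the-envelope relationship between the reported rate and dollar estimate;
# the implied benefit base is a rough derivation, not a figure reported by USDA.
error_rate = 0.0588        # 5.88 percent, fiscal year 2005 PAR
improper_dollars = 1.4e9   # $1.4 billion

implied_benefit_base = improper_dollars / error_rate
print(f"Implied benefit base: ${implied_benefit_base / 1e9:.1f} billion")  # roughly $24 billion
```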
A national error rate is calculated and incentives and penalties are applied to the states that have rates lower or higher than the national rate. Recent initiatives reported in USDA’s fiscal year 2005 PAR include the agency’s fiscal year 2004 nationwide implementation of an electronic benefit transfer (EBT) system for the delivery of food stamp benefits. The EBT card, which replaced paper coupons, creates an electronic record for each transaction that makes fraud easier to detect. Other USDA efforts include Partner Web, which is an intranet for state food stamp agencies, and the National Payment Accuracy Workgroup, which consists of representatives from USDA headquarters and regional offices who meet to discuss best practice methods and strategies. (See app. III for more details on the Food Stamp Program.) The UI Program provides temporary cash benefits to workers who lose their jobs through no fault of their own. Labor reported in its fiscal year 2005 PAR that the principal cause of improper payments was claimants who continue to claim benefits despite having returned to work. Pursuant to Part 602 of Title 20, Code of Federal Regulations, Labor implemented the Benefit Accuracy Measurement system to measure state payment accuracy in the UI Program. Labor also reports a rate and dollar amount of estimated improper payments for the UI Program in its annual PAR. In its fiscal year 2005 PAR, Labor reported an annual error rate of 10.13 percent, or $3.2 billion, for the UI Program. Labor’s initiatives to reduce improper payments in the UI Program include implementing new cross-matching technologies like the National Directory of New Hires database and funding states’ data-sharing efforts with federal agencies, such as the Social Security Administration, and other state agencies, such as the state departments of motor vehicles. Further, Labor is instilling additional performance measures for states to detect and recover overpayments of benefits and continuing analyses of the causes, costs, and benefits of improper payment prevention or establishing recovery operations. (See app. IV for more details on the UI Program.) Twenty-nine states in our review responded in our surveys or during interviews that they voluntarily participated in federally administered pilot projects to estimate improper payments. We visited the state participating in the Department of Transportation’s (DOT) Highway Planning and Construction Program and one of the states participating in the Department of Health and Human Services’ (HHS) Medicaid program and discussed the states’ efforts to measure improper payments. These pilots serve as models for the federal agencies on obtaining improper payment information and establishing a methodology for other states to estimate improper payments for those programs. Neither of the two pilots was sufficiently comprehensive to allow the responsible federal agency to project an error rate with statistical precision to all of the states. DOT provides funding to the state departments of transportation to administer the nation’s federal Highway Planning and Construction Program. During our review, DOT had a pilot in place to estimate improper payments for two construction projects in Tennessee. The sampled transactions reviewed to identify improper payments for these two projects were selected from a population of almost $35 million, which represented a small portion of DOT’s fiscal year 2005 outlays totaling $31 billion for the Highway Planning and Construction Program. 
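Labor's cross-matching initiative described above relies on comparing continued UI claims against new-hire records. The sketch below illustrates that kind of match in the simplest terms; the claimant identifiers, dates, and data layout are invented for illustration and do not reflect the National Directory of New Hires format.

```python
# Illustrative cross-match of continued UI claims against new-hire records, the kind of
# data matching Labor describes; claimant IDs, dates, and the data layout are invented.
from datetime import date

ui_claims = [                       # (claimant ID, week for which benefits were claimed)
    ("C001", date(2005, 3, 7)),
    ("C002", date(2005, 3, 7)),
    ("C003", date(2005, 3, 14)),
]
new_hires = {                       # claimant ID -> hire date reported by an employer
    "C002": date(2005, 2, 21),
    "C003": date(2005, 3, 28),
}

# Flag weeks claimed on or after a reported hire date for follow-up investigation.
flagged = [
    (claimant, week, new_hires[claimant])
    for claimant, week in ui_claims
    if claimant in new_hires and week >= new_hires[claimant]
]
print(flagged)   # only C002's claim falls after a reported hire date
```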
For one of these projects, DOT reported that the estimated improper payments amount was statistically insignificant. For the other project, DOT reported an improper payment estimate of $111,671. The methodology and testing procedures that resulted from DOT’s pilot project will be used to extend the methodology nationwide. In its fiscal year 2005 PAR, DOT reported a zero-dollar improper payment estimate for this program. However, the DOT OIG also reported that detecting improper payments for several grant programs, including the Highway Planning and Construction Program, was a top management challenge for the agency. In particular, the OIG reported that the DOT pilot project was too limited and that OIG investigators continue to identify instances of improper payments. The OIG cited two improper payment examples totaling over $1.3 million, which was reimbursed to DOT as a result of OIG investigations. In response, DOT is reorganizing and redesigning its procedures to better improve oversight of research agreements. This includes creating a new division within DOT’s Office of Acquisition Management devoted to the award and administration of cooperative agreements. (See app. V for more details on the improper payment pilot for the Highway Planning and Construction Program.) In coordination with the states, HHS finances health care services to low-income individuals and families through the Medicaid program. Medicaid improper payments are caused by medical review, eligibility review, or data-processing review errors. In fiscal year 2002, HHS began a pilot to estimate improper payments for its Medicaid program. The number of states voluntarily participating in the pilot has increased each year, and in the second year of the pilot, fiscal year 2003, 12 states participated. In the third year, fiscal year 2004, 24 states participated in the pilot. Because HHS had not fully implemented a statistically valid methodology, the agency did not report an improper payment estimate for the Medicaid program in its fiscal year 2005 PAR. According to agency officials, HHS is in the process of implementing a methodology for estimating payment error rates for Medicaid in all states. HHS stated that it expects to be fully compliant with the IPIA requirements for the Medicaid program by fiscal year 2008. Other initiatives HHS is undertaking for the Medicaid program are the hiring of additional staff to do prospective reviews of state Medicaid operations and the Medicare/Medicaid data match program designed to identify improper payments and areas in need of improved payment accuracy. (See app. VI for more details on the Medicaid program.) We identified other improper payment pilot initiatives during our review of agencies’ fiscal year 2005 PARs. Specifically, HHS reported that improper payment pilots are being conducted for three other state-administered programs to assist HHS in its efforts to report a national improper payment estimate in the future. For HHS’s State Children’s Health Insurance Program (SCHIP), 15 states participated in a payment accuracy measurement pilot in fiscal year 2004. The states performed a combination of medical, eligibility, or data-processing reviews of claims and applicable payments for the period October 1, 2003, to December 31, 2003. Using a standard methodology, those states computed a payment accuracy error rate for their programs. 
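Pilots such as DOT's estimate improper payments by projecting results from a sample of transactions to the larger population of outlays. The sketch below uses a simple ratio projection with made-up sample values; it is not DOT's actual estimation methodology, which the report does not detail.

```python
# Simple ratio projection of improper payments from a transaction sample to total outlays;
# sample values and the estimator choice are illustrative, not DOT's actual methodology.
population_outlays = 35_000_000        # dollars in the sampled project population

sampled_transactions = [               # (sampled payment, improper portion), in dollars
    (120_000, 0), (80_000, 1_500), (200_000, 0), (150_000, 0), (95_000, 800),
]
sample_total = sum(amount for amount, _ in sampled_transactions)
sample_improper = sum(error for _, error in sampled_transactions)

estimated_improper = population_outlays * sample_improper / sample_total
print(f"Estimated improper payments: ${estimated_improper:,.0f}")
```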
Based on these results, HHS has adopted a national strategy using federal contractors to obtain a national error rate for SCHIP with expected implementation in fiscal year 2006. In fiscal year 2007, HHS expects to begin measuring SCHIP error rates nationwide for its fee-for-service component. HHS expects to report SCHIP error rates for its fee-for-service, managed care, and eligibility components in its fiscal year 2008 PAR. For HHS's Child Care and Development Fund (CCDF) Program, 11 states participated in an improper payment pilot in fiscal year 2004 to assess states' efforts to prevent and reduce improper payments. The states worked with HHS to assess the adequacy of state systems, databases, policy, and administrative structures. In fiscal year 2005, HHS expanded pilot participation to 18 states. HHS also conducted an error rate study in 4 states to assess those states' ability to verify information received from clients during the initial eligibility process or to establish eligibility correctly. In addition, HHS conducted interviews in 5 other states to gather information about improper payment activities. HHS reported that it will continue to work with states during fiscal year 2006 to identify an appropriate strategy for determining estimates of payment errors in the CCDF Program. For HHS's Temporary Assistance for Needy Families (TANF) Program, one state participated in a pilot to undergo a more in-depth review of TANF expenditures as part of its single audit requirement. The objective of the pilot was to explore the viability of estimating improper payments in the single audit process. Using statistical sampling, the auditors reviewed 208 cases to test controls. According to HHS, the auditors reported an overall case error rate of 20 percent and a payment error rate of 3.9 percent from their review of the 208 cases. In addition to this pilot, state-led initiatives involving the TANF Program were also under way, as described below. During our review of survey responses, we also noted that 11 states, on their own initiative, were estimating improper payments related to 5 separate programs for fiscal year 2003, fiscal year 2004, or both. For example, 6 of the 11 states indicated in their survey responses that they were estimating improper payments for HHS's TANF Program. The methods the 11 states used to estimate amounts, error rates, or both included statistically representative samples of payments and findings from states' single audits. Other techniques respondents reported using included Food Stamp Program quality control reviews to ascertain the accuracy of TANF payments, which would be reasonable to do if the eligibility requirements of the two programs were similar.
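The TANF single-audit pilot above reports two different measures: a case error rate, the share of sampled cases containing at least one error, and a payment error rate, which weights errors by dollars. The following minimal sketch, written in Python with hypothetical case amounts rather than the pilot's actual data, illustrates why the two rates can differ so widely.

    # Hypothetical audited cases: (total payment to the case, improper portion in dollars).
    # These figures are illustrative only; they are not the pilot's actual case data.
    cases = [
        (500.0, 0.0), (650.0, 0.0), (480.0, 25.0), (700.0, 0.0),
        (520.0, 0.0), (610.0, 40.0), (450.0, 0.0), (800.0, 0.0),
        (550.0, 0.0), (630.0, 12.0),
    ]

    # Case error rate: share of cases with any improper payment, regardless of size.
    case_error_rate = sum(1 for _, err in cases if err > 0) / len(cases)

    # Payment (dollar-weighted) error rate: improper dollars over total dollars paid.
    payment_error_rate = sum(err for _, err in cases) / sum(paid for paid, _ in cases)

    print(f"case error rate: {case_error_rate:.1%}; payment error rate: {payment_error_rate:.1%}")

With these hypothetical figures the case error rate is 30 percent while the payment error rate is only about 1.3 percent, which mirrors how the pilot could report a 20 percent case error rate alongside a much smaller 3.9 percent payment error rate.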
These types of actions contribute to a strong internal control structure that helps mitigate the risk and occurrence of improper payments. Generally, improper payments result from a lack of or an inadequate system of internal control, but some result from program design issues. Our Standards for Internal Control in the Federal Government provides a road map for entities to establish control for all aspects of their operations and a basis against which entities' control structures can be evaluated. Also, our executive guide on strategies to manage improper payments focuses on internal control standards as they relate to reducing improper payments. The five components of internal control—control environment, risk assessment, control activities, information and communication, and monitoring—are defined in the executive guide in relation to improper payments as follows: Control environment—creating a culture of accountability by establishing a positive and supportive attitude toward improvement and the achievement of established program outcomes. Risk assessment—analyzing program operations to determine if risks exist and the nature and extent of the risks identified. Control activities—taking actions to address identified risk areas and help ensure that management's decisions and plans are carried out and program objectives are met. Information and communication—using and sharing relevant, reliable, and timely financial and nonfinancial information in managing activities related to improper payments. Monitoring—tracking improvement initiatives over time, and identifying additional actions needed to further improve program efficiency and effectiveness. For this engagement, we focused on two of these internal control components—risk assessments and control activities, which are discussed in more detail in the following sections. All states except 1 acknowledged using computer-related techniques to prevent or detect improper payments, while 21 states reported having performed some type of statewide assessment to determine what programs are at risk of improper payments. Strong systems of internal control provide reasonable assurance that programs are operating as intended and are achieving expected outcomes. A key step in the process of gaining this assurance is conducting a risk assessment, an activity that entails a comprehensive review and analysis of program operations to determine where risks exist and what those risks are, and then measuring the potential or actual impact of those risks on program operations. In performing a risk assessment, management should consider all significant interactions between the entity and other parties, as well as all internal factors at both the organizationwide and program levels. IPIA requires agencies to review all of their programs to identify those that may be susceptible to significant improper payments. Since the programs in our review were state administered, we asked the states if they performed statewide reviews to assess if their programs may be at risk of improper payments. Twenty-one states responded that they had performed some type of statewide assessment of their programs. Some of the states' risk assessment processes included internal control assessments, which were generally self-assessments performed by the states' program agencies and entities. Two states noted that these self-assessments can be used as a tool by state auditors to evaluate weaknesses or to plan work to be performed.
Regular evaluation of internal control systems is statutorily required by at least 2 states. Other risk assessment methods states reported using included single audits and other audits or reviews performed by state auditors or by state agencies. Survey respondents also cited using control activities, such as computer-related techniques, to aid in the detection and prevention of improper payments. Computer-related techniques play a significant role not only in identifying improper payments, but also in providing data on why these payments were made and, in turn, highlighting areas that need strengthened prevention controls. The adoption of technology allows states to have effective detection techniques to quickly identify and recover improper payments. Data sharing, data mining, smart technology, data warehousing, and other techniques are powerful internal control tools that provide more useful and timely access to information. The use of these techniques can achieve potentially significant savings by identifying client-related reporting errors and misinformation during the eligibility determination process—before payments are made—or by detecting improper payments that have been made. Fifty of the 51 states, representing 21 different programs, reported in their surveys that they used computer-related techniques to prevent or detect improper payments. Table 1 shows the number of programs that reported using each technique. As table 1 shows, for the state programs that reported using a computer-related technique, 106 state program administrators reported using some sort of fraud detection system. One example is the Transportation Software Management Solution, a fraud detection system used by several states for the Highway Planning and Construction Program. This software contains a Bid Analysis Management System that allows highway agencies to analyze bids for collusion. Also, a limited number of states in our survey reported using smart technology. For example, the Medicaid Fraud, Abuse and Detection System is designed to structure, store, retrieve, and analyze management information. It has the ability to detect fraud patterns, and it works with the Medicaid Management Information System, which contains a data warehouse that can be queried for information to be used in a variety of analyses. Other techniques include one state's use of a Web-based system that allows National School Lunch Program participants to enter monthly claims by site. System checks are in place to ensure that sites do not overclaim meals based on days served and eligible students. Recovery auditing is another method that states can use to recoup detected improper payments. Recovery auditing focuses on the identification of erroneous invoices, discounts offered but not received, improper late payment penalties, incorrect shipping costs, and multiple payments for single invoices. Recovery auditing can be conducted in-house or by recovery audit firms. Section 831 of the National Defense Authorization Act for Fiscal Year 2002 contains a provision that requires all executive branch agencies entering into contracts with a total value exceeding $500 million in a fiscal year to have cost-effective programs for identifying errors in paying contractors and for recovering amounts erroneously paid. The legislation further states that a required element of such a program is the use of recovery audits and recovery activities.
The law authorizes federal agencies to retain recovered funds to cover in-house administrative costs as well as to pay contractors, such as collection agencies. OMB guidance suggests that federal agencies awarding grants may extend their recovery audit programs to cover significant contract activity by grant recipients (e.g., states). States may engage in their own recovery audit programs. As shown in table 2, based on our review of survey responses, 15 states reported conducting recovery audits in fiscal year 2003, fiscal year 2004, or both. In fiscal year 2003, states reported recovering over $180 million, compared to $155 million for fiscal year 2004. In survey responses, states reported either using outside contractors to perform recovery audits or establishing in-house fraud detection units to recover improperly paid amounts. One state noted that it passed legislation requiring the use of recovery auditors in its state agencies. In June 2005, Texas enacted legislation that directs the state's Comptroller of Public Accounts to contract to conduct recovery audits of payments made by state agencies to vendors and to recommend improved state agency accounting operations. The law requires state entities with more than $100 million in biennial expenditures to undertake annual recovery audits. The state expects to recover up to $4.5 million annually starting in state fiscal year 2007. Viewed broadly, agencies have applied limited incentives and penalties for encouraging improved state administration to reduce improper payments. Incentives and penalties can be helpful to create management reform and to ensure adherence to performance standards. The IPIA implementing guidance requires that each federal agency report on steps it has taken to ensure that agency managers are held accountable for reducing and recovering improper payments. When a culture of accountability over improper payments is instilled in an organization, everyone in the organization, including the managers and day-to-day program operators, has an incentive to reduce fraud and errors. Transparency, through public communication of performance results, also acts as an incentive for agencies to be vigilant in their efforts to address the wasteful spending that results from lapses in controls that lead to improper payments. In the survey, we asked the state program administrators to identify any incentives they have received from the federal government to encourage them to reduce improper payments. We also asked them to identify any penalties they have received from the federal government for not doing so. Thirty-two states reported incentives such as enhanced funding and reduced reporting requirements for 5 of the 25 major programs. Most incentives were related to the Food Stamp Program, largely because of a statutory requirement that USDA assess penalties and provide financial incentives to the states. As we previously reported on the Food Stamp Program, the administration of the quality control process and its system of performance bonuses and sanctions is a large motivator of program behavior and has assisted in increasing payment accuracy. Examples of other incentives identified by the state programs included reduced reporting requirements for benefit recipients and additional funding received for a fraud and abuse detection system. Penalties such as decreased funding, increased reporting, and client sanctions were reported by 17 states for four different programs.
As with incentives, most of the penalties identified related to the Food Stamp Program. States can get approval from USDA to reinvest portions of their penalties toward corrective actions to reduce the error rate as opposed to USDA recovering the penalty from the state; thus the distinction between incentives and penalties is somewhat blurred. Our survey results showed that some states believed that being able to reinvest a portion of their food stamp penalty toward corrective action plans to improve payment accuracy was actually an incentive, while other states considered it a penalty. For another program, one state noted in its survey response that it was penalized by the federal government for not applying applicable reductions to TANF beneficiaries for noncompliance with child support enforcement regulations. In lieu of paying a penalty of over $1 million, the state submitted a corrective action plan to address the problems. Certain states perceive limitations in their ability to adequately address improper payments. For example, 37 states reported in their survey responses that federal legislative and program design barriers hinder their ability to detect, prevent, and reduce improper payments for one or more programs. Legislative barriers relate to an agency’s ability to take actions to reduce improper payments. Program design barriers relate to the complexity and variety of programs. From our review of survey responses, several state program officials, representing multiple programs, reported that they encountered legislative barriers related to due process. Specifically, states are not permitted to stop or adjust payments until the due process hearing or appeals processes are complete, even though they know the payment is improper. For example, one state reported that it has a state superior court ruling that requires paying UI benefits conditionally under certain circumstances, and that the recovery of the paid benefits can only take place once the courts have determined the payments were incorrect. Another state program response said that lack of authority to mandate the submission of Social Security numbers for those applying for benefits was a barrier that limited the ability to identify and prevent improper payments. Additionally, 23 state programs identified statutory restrictions over the use of certain data as a barrier to improved accuracy. For example, three state programs noted that because of security policies, they were restricted from accessing and using information from the Internal Revenue Service. Program design barriers have also contributed to states’ inability to reduce improper payments. Generally, states receive broad statutory and regulatory program guidelines from the responsible federal agency. States then issue state-specific guidelines to manage day-to-day operations, which may vary among the states. A few survey respondents indicated that inconsistent requirements between programs hindered their ability to reduce improper payments. For example, four state programs noted that efforts to manage improper payments are hindered because of the different eligibility requirements among the federal programs that they administer. The survey responses of the state programs also indicated that they encountered resource barriers, such as lack of funding for additional personnel or information technology. For example, one state program responded that the lack of funding needed to identify eligible beneficiaries through data matching was a barrier. 
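One of the recovery auditing checks described earlier is identifying multiple payments for a single invoice. The following minimal sketch, written in Python over hypothetical payment records (the field layout and figures are illustrative assumptions, not any state's actual disbursement data), shows the kind of duplicate-payment test a recovery audit might apply before pursuing collection.

    from collections import defaultdict

    # Hypothetical payment records: (vendor id, invoice number, amount in dollars).
    payments = [
        ("V100", "INV-001", 12500.00),
        ("V100", "INV-001", 12500.00),  # same vendor, invoice, and amount: possible duplicate
        ("V100", "INV-002", 4300.00),
        ("V205", "INV-001", 9100.00),   # same invoice number, different vendor: not flagged
    ]

    # Group payments by (vendor, invoice, amount); more than one record in a group
    # suggests a duplicate that a recovery audit would investigate and try to recoup.
    groups = defaultdict(int)
    for record in payments:
        groups[record] += 1

    duplicates = {record: count for record, count in groups.items() if count > 1}
    dollars_at_risk = sum(amount * (count - 1) for (_, _, amount), count in duplicates.items())
    print(f"duplicate payment groups: {len(duplicates)}; dollars at risk: ${dollars_at_risk:,.2f}")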
Minimizing improper payments is often most efficiently and effectively achieved through the exchange of relevant, reliable, and timely information between individuals and units within an organization and with external entities that have oversight and monitoring responsibilities. For state-administered programs, assistance from the federal agencies and OMB may be needed in order for the states and state programs to successfully assist the federal agencies in implementing IPIA requirements. The types of communication and information that may be necessary at both the state and federal levels include (1) a determination of what information is needed by managers to meet and support initiatives aimed at reducing improper payments; (2) adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on improper payment initiatives, such as periodic meetings with oversight bodies; and (3) working relationships with other organizations to share information on improper payments. Of the 227 state program surveys received, 100 identified one or more areas where guidance or resources from the federal government would be helpful. OMB can play an important role in encouraging and coordinating efforts between the state programs and federal agencies. OMB, as part of its responsibilities, develops and implements budget, program, management, and regulatory policies. As such, OMB can set the tone at the top by creating a general framework and setting expectations for federal agencies in meeting the requirements of IPIA. Additional resources and guidance would be needed for increased state involvement. As noted above, 100 state program officials identified various tools they would need to estimate improper payments and to help the federal agencies meet various IPIA requirements, including guidance on estimating improper payments, additional funding for staffing and various projects, sharing of best practices and available guidance, and guidance on performing risk assessments. State programs also indicated that they would want an opportunity to comment, prior to implementation, on any proposed regulations that would require state actions to estimate and report improper payment information. In our survey, we asked the state program officials what types of guidance and resources from the federal agencies or OMB would be beneficial to better estimate improper payments. State program officials identified one or more types of guidance or resources that would be helpful to assist the federal agencies in meeting the requirements of IPIA. We classified these responses into the following areas: Guidance on estimating improper payments. Forty-four of the state programs asked for general procedures, program-specific procedures, or both for identifying and detecting improper payments, calculating error rates, and establishing sampling methodologies. One state program suggested that guidance related to training for detecting improper payments and on how to design controls to facilitate improper payment detection be made available. Additional funding. Forty-three of the state programs indicated a need for additional funding to train and support the additional staff levels they believe would be necessary to estimate improper payments. Additional funding also was requested for automation projects. One state requested enhanced funding to update its eligibility system to include fraud detection.
Another state requested additional funding for developing an automated Quality Management System to capture data from all levels of reviews and programs. Sharing of best practices and available guidance. Fifteen of the state programs also expressed interest in the creation of groups to discuss trends and best practices in improper payment-related areas, while other states wanted general information on IPIA and the states’ roles. Assessing risk/risk assessment instruments. Thirteen of the state programs requested procedures for assessing risk of improper payments, including items to take into consideration when assessing their programs for risk susceptibility. Recognition of state input. Seven of the state programs want an opportunity to comment on any proposed regulations prior to implementation of any requirements to estimate or report improper payment information. For example, one state responded in its survey that the state, in coordination with its cognizant federal agency, should determine its own plans to detect improper payments. Additionally, another state program inquired as to the purpose of involving the states, particularly those that have had little occurrence of audit findings, and another wanted clarification on what sanctions would be assessed for those that identified improper payments. Other guidance and resources. Forty-eight of the state programs requested other types of guidance and resources relating to enhancing the use of information technology, overcoming legislative barriers, and establishing incentives and penalties for subrecipients, among others. For example, one state program wanted the creation of a national database to track the activity of medical providers that operate in multiple states. OMB has continued to conduct its improper payments work through CFOC and PCIE’s Erroneous and Improper Payments Workgroup. The workgroup periodically convenes to discuss and develop best practices and other methods to reduce or eliminate, where possible, improper payments made by federal government agencies. It has issued reports and other products to CFOC/PCIE, reflecting workgroup deliberations and determinations. OMB officials have told us that they have started to draft a plan on developing and maintaining partnerships with states to facilitate state’s estimating and reporting information to the federal agencies. For federal agencies’ fiscal year 2005 PAR reporting, OMB included a new requirement in Circular No. A-136, Financial Reporting Requirements, that federal agencies were to report on their actions and results at the grantee level. However, based on our review of selected federal agencies’ fiscal year 2005 PARs, reporting of fund stewardship at the grantee level was limited. The CFOC and PCIE Erroneous and Improper Payments Workgroup created the Grants Subgroup in March 2004 to explore the feasibility of using various tools to measure and report improper payments, including evaluating currently available policies and guidance and modifying OMB single audit guidance to fulfill IPIA reporting requirements. Specifically, the Grants Subgroup’s work focused on developing cost-effective approaches for tracking improper payments at each stage of the payment cycle, including (1) evaluating existing policies and guidance that could be used to measure and report improper payments and (2) examining the possibilities of measuring improper payments using the audits conducted under the Single Audit Act of 1996, as amended; OMB’s Circular No. 
A-133 Single Audit Compliance Supplement; and the Federal Single Audit Clearinghouse. In March 2005, the subgroup issued a report reflecting the results of its work. Specifically, the subgroup identified issues with (1) the current structure and design of grant programs’ distribution of funding, which hinders determining a national payment error rate; (2) little incentive for states to assist federal agencies with IPIA reporting; (3) lack of funding to perform IPIA compliance activities; and (4) awareness and commitment from all levels of management within an agency to address the causes of improper payments. Further, in an effort to foster working relationships among federal agencies and the states, OMB has begun work to clarify state and federal roles in estimating and reporting improper payments information and planning the development of state partnerships for certain state-administered programs. Additionally, beginning with fiscal year 2005 PARs, OMB included three reporting requirements for those agencies with grant-making programs: (1) agency’s accomplishments in the area of funds stewardship past the primary recipient, (2) status of projects, and (3) results of any reviews. Our preliminary review of these PARs showed that in general agencies either did not report on their grant-making activities, did not clearly identify grant programs, or did not address fund stewardship beyond the primary recipient. However, we noted that some agencies provided partial information on the three reporting requirements. For example, eight agencies reported on the status of their projects, including one that discussed linking grants management and financial data to produce better information to ensure that projects funded by grants achieve program objectives and grant recipients are technically competent to carry out the work. In November 2005, OMB issued draft revisions to its IPIA implementing guidance. This implementing guidance, together with recovery auditing guidance, is to be consolidated into future Parts I and II of Appendix C to OMB Circular No. A-123, Management’s Responsibility for Internal Controls (Dec. 21, 2004). Among the proposed changes, OMB provides that for state-administered programs, federal agencies may provide state-level estimates either for all states or a sample of states to generate a national improper payment rate for that program. Also, OMB proposes to allow modifications to agency-specific compliance supplements to enhance implementation of IPIA for federal grant-making agencies, such as the ones discussed in this report. While OMB has taken steps to begin addressing the complexities related to reporting improper payment information for federally funded, state-administered programs, additional enhancements could be made that address how federal agencies define state-administered programs and the methodology to be employed for generating a national estimate. Specifically, we found that the proposed changes do not clearly define the term state-administered programs. Without a clear definition, OMB is at risk of receiving inconsistent improper payment reports because agencies could define programs differently. In addition, we noted that the draft guidance did not provide basic criteria, such as the nature and extent of data and documentation that agencies should consider when developing a plan or methodology to calculate a national improper payment error rate for these state-administered programs. 
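OMB's proposed approach of rolling state-level estimates into a national improper payment rate can be illustrated with a simple outlay-weighted calculation. The sketch below, in Python, uses hypothetical state outlays and error rates; an actual estimate built from a sample of states would also need sampling weights and precision measures, which the sketch omits.

    # Hypothetical state-level results for one state-administered program:
    # state -> (program outlays in dollars, estimated improper payment rate).
    state_estimates = {
        "State A": (2_400_000_000, 0.041),
        "State B": (950_000_000, 0.072),
        "State C": (5_100_000_000, 0.028),
    }

    # Outlay-weighted national rate: total estimated improper dollars over total outlays.
    total_outlays = sum(outlays for outlays, _ in state_estimates.values())
    improper_dollars = sum(outlays * rate for outlays, rate in state_estimates.values())
    national_rate = improper_dollars / total_outlays

    print(f"estimated national improper payment rate: {national_rate:.2%} "
          f"(${improper_dollars:,.0f} of ${total_outlays:,.0f} in outlays)")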
Federal agencies continue to make progress toward meeting the requirements of IPIA, in response to the PMA and other key initiatives to eliminate improper payments. However, measuring improper payments and designing and implementing actions to reduce or eliminate them are not simple tasks, particularly for grant programs that rely on quality administration efforts at the state level. With budgetary pressures rising across the federal government, agencies are under constant and increasing pressure to do more with less. Preventing improper payments and identifying and recouping those that occur become an even higher priority in this environment. States have a fundamental responsibility to ensure the proper administration of federal awards by using sound management practices and maintaining internal controls to ensure distribution of federal funding to subrecipients or beneficiaries in accordance with federal and state laws and regulations. Given their involvement in determining eligibility and distributing benefits, states are in a position to assist federal agencies in reporting on IPIA requirements. In fact, the success of several existing programs and pilots in estimating improper payment rates indicates that such efforts could logically be expanded. Communication, coordination, and cooperation among federal agencies and the states will be critical factors in estimating national improper payment rates and meeting IPIA reporting requirements for state-administered programs. We are making four recommendations to help further the progress toward meeting the goals of IPIA and to help determine states' role in assisting federal agencies to report a national improper payment estimate on federal programs. Specifically, we recommend that the Director, Office of Management and Budget,
revise IPIA policy guidance to clearly define state-administered programs so that federal agencies can consistently identify all such programs;
expand IPIA guidance to provide criteria that federal agencies should consider when developing a plan or methodology for producing a national improper payment estimate for state-administered programs, such as criteria that address the nature and extent of data and documentation needed from the states to calculate a national improper payment estimate;
require federal agencies to communicate, and make available to the states, guidance on conducting risk assessments and estimating improper payments for federally funded, state-administered programs; and
share ideas, concerns, and best practices with federal agencies and states regarding improper payment reporting requirements for federally funded, state-administered programs.
We received written comments on a draft of this report from OMB and reprinted them in appendix VII. OMB agreed with our recommendations and highlighted several initiatives under way to ensure that accurate improper payment rates can be generated without creating undue cost and burden on federal agencies or state partners that manage federally funded programs. OMB also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the Director, Office of Management and Budget; Secretaries of Agriculture, Health and Human Services, Labor, and Transportation; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov.
Please contact me at (202) 512-9095 or williamsm1@gao.gov if you have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VIII. The objectives of this report were to determine (1) what actions are being taken by states to assist federal agencies in estimating improper payments; (2) what techniques, related to detecting, preventing, or reducing improper payments, have states employed to ensure proper administration of federal awards; and (3) what assistance can be provided by the Office of Management and Budget (OMB) that state program administrators would find helpful in supporting the respective federal agencies with the implementation of the Improper Payments Information Act of 2002 (IPIA). To address each of these objectives, we conducted a statewide survey in all 50 states and the District of Columbia regarding actions to estimate improper payments for state-administered federal programs for fiscal years 2003 and 2004, conducted a program-specific survey of the major programs in each of the states, performed site visits to selected states, conducted interviews with federal and state officials, and reviewed federal agencies' fiscal year 2005 performance and accountability reports (PAR) and prior GAO and office of inspector general (OIG) reports. More detailed information on each of these aspects of our research is presented in the following sections. We conducted our work from April 2005 through December 2005 in accordance with generally accepted government auditing standards. The surveys were developed based on IPIA, the National Defense Authorization Act for Fiscal Year 2002, and our executive guide on managing improper payments, and included questions about state-issued policies or guidance on internal controls or on statewide risk assessments for improper payments; state recovery auditing efforts; state program efforts to prevent, detect, and reduce improper payments; state program participation in improper payment pilots; and additional assistance needed by state programs to support efforts in measuring and reporting improper payments. The surveys were pretested with state officials in two states. Revisions to the survey were made based on comments received during the pretests. To determine the state programs that would receive the program-specific survey, we designed a spreadsheet for each state containing its major programs, which we defined as those programs that, in the aggregate, accounted for at least 60 percent of the federal portion of total state-administered expenditures. To do this, we used the Federal Audit Clearinghouse single audit database to identify a universe of federally funded, state-administered programs for each state. We sorted the programs from largest to smallest expenditure amount and identified the major programs in decreasing order until we obtained, in aggregate, at least 60 percent of the total federal portion of state-administered expenditures in each state. We provided this spreadsheet to states so they could confirm it with their state records. Table 3 lists the 25 major programs and the number of states in which each major program was included. The number of states identified for each major program ranged from 1 to 51. As shown in table 4, the number of major programs identified for each state ranged from 1 to 12.
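The selection of major programs described above is, in effect, a cumulative cutoff rule: sort a state's federally funded, state-administered programs by expenditures and take them in decreasing order until they account for at least 60 percent of the federal total. A minimal Python sketch of that rule follows; the program names and expenditure figures are hypothetical, not Federal Audit Clearinghouse data.

    # Hypothetical federal expenditures by program for one state, in dollars.
    programs = {
        "Medicaid": 5_000_000_000,
        "Temporary Assistance for Needy Families": 1_200_000_000,
        "Highway Planning and Construction": 900_000_000,
        "Food Stamp Program": 850_000_000,
        "Title I Grants to Local Educational Agencies": 600_000_000,
        "Unemployment Insurance": 550_000_000,
    }

    def select_major_programs(expenditures, threshold=0.60):
        """Take the largest programs until they cover `threshold` of total expenditures."""
        total = sum(expenditures.values())
        selected, covered = [], 0.0
        for name, amount in sorted(expenditures.items(), key=lambda item: item[1], reverse=True):
            selected.append(name)
            covered += amount
            if covered / total >= threshold:
                break
        return selected

    print(select_major_programs(programs))  # the two largest programs cover over 60 percent here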
We e-mailed the surveys in June 2005 and followed up with subsequent mailings and telephone communications. The collection of survey data ended in October 2005 with a response rate of 98 percent for the statewide surveys (50 of the 51 states) and a 95 percent response rate for the program-specific surveys (227 of the 240 programs). We conducted follow-up phone calls to clarify responses where there appeared to be discrepancies; however, we did not independently verify the responses or information obtained through the surveys. Although no sampling errors were associated with our survey results, the practical difficulties of conducting any survey may introduce certain types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted or differences in the sources of information that participants use to respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to reduce such nonsampling errors. Specifically, social science survey specialists designed draft questionnaires, we pretested two versions of the questionnaire, and we performed reviews to identify inconsistencies and other indications of error prior to analysis of data. The data were keyed and verified after data entry. We conducted our survey work from June 2005 through December 2005. We visited two states and interviewed state agency officials and other relevant parties about initiatives in place to estimate improper payments for the Highway Planning and Construction, Medicaid, and Unemployment Insurance (UI) programs. The two states were selected based on our knowledge of actions under way for programs in Tennessee and Texas. We went to Tennessee to obtain information about the Department of Transportation's (DOT) implementation of a pilot project to estimate improper payments related to the Highway Planning and Construction Program. The pilot was the first that DOT's Federal Highway Administration had conducted to estimate improper payments in a state and covered two construction projects. We went to Texas to obtain information about the Department of Health and Human Services' (HHS) Medicaid program and the Department of Labor's (Labor) UI Program. One reason for selecting Texas was that HHS's Centers for Medicare and Medicaid Services (CMS) had identified Texas as having a leadership role in estimating improper payments for its Medicaid program. In addition, Texas was one of three states participating in a new pilot project organized by Labor to begin data-matching work using the National Directory of New Hires. More information about these states' efforts in these three programs is provided in appendixes IV, V, and VI. Detailed information regarding the Department of Agriculture's Food Stamp Program and its efforts in estimating and reporting improper payments is presented in appendix III. Improper payment estimates and references from agencies' PARs are used for background purposes. We did not assess the reliability of these data.
The appendix listing of each state's major federally funded, state-administered programs is not reproduced here; the programs identified most frequently include Temporary Assistance for Needy Families and Title I Grants to Local Educational Agencies, along with Medicaid, the Food Stamp Program, Unemployment Insurance, Highway Planning and Construction, the Child Care and Development Fund and Child Care and Development Block Grant, the Special Supplemental Nutrition Program for Women, Infants, and Children, Improving Teacher Quality State Grants, the Lower Income Housing Assistance Program Section 8 Moderate Rehabilitation, Capitalization Grants for Clean Water and Drinking Water State Revolving Funds, and Surveys, Studies, Investigations, and Special Purpose Grants. OMB's implementing guidance requires that agencies report overpayments and underpayments in their programs if the figures are available. USDA reports these amounts in its PAR for the Food Stamp Program. Table 5 provides the overpayment and underpayment amount for each state. In fiscal year 2003, overpayments ranged from $454,636 to $103,236,074 while underpayments ranged from $126,288 to $40,679,714. In fiscal year 2004, overpayments ranged from $756,935 to $94,118,074 and underpayments ranged from $151,016 to $46,714,340. Since fiscal year 1999, the combined Food Stamp error rate has continued to decline. Figure 2 displays the error rates for the 6-year period from fiscal years 1999 to 2004. Actions taken by both the states and USDA's Food and Nutrition Service (FNS) contributed to the declining error rates. For example, the state of Arizona has completed a statewide implementation of a fingerprint imaging system. The state is using the system as a means of positive identification of welfare applicants and clients to ensure that participants do not use false identities to receive benefits to which they are not entitled; the system is also used in the eligibility determination process.
The state reported that cost avoidance savings resulted from welfare fraud reduction achieved through the identification and prevention of duplicate enrollments in the Food Stamp and TANF programs. Recent initiatives reported in USDA's fiscal year 2005 PAR include FNS's nationwide implementation of an electronic benefit transfer (EBT) system for the delivery of food stamp benefits. EBT recipients use a plastic card, much like a debit card, to pay for their food at authorized retail stores. Funds are transferred from a Food Stamp benefits account to a retailer's account. With EBT cards, food stamp customers pay for groceries without any paper coupons changing hands. By eliminating paper coupons, EBT creates an electronic record for each transaction that precludes certain types of fraudulent claims and makes other attempted frauds easier to detect. Other FNS efforts include Partner Web, a Web-based system to facilitate communication and information exchange between USDA and its nutrition assistance program partners. Another initiative, the National Payment Accuracy Workgroup, consists of representatives from USDA headquarters and regional offices who meet to discuss best practice methods and strategies. The practices the states are promoting include preparing reports detailing causes and sources of errors for the local offices and publishing and distributing monthly error rates for all local offices; transmitting the results of statewide error review panels on the source and causes of errors to local offices, along with suggested corrective actions; sponsoring statewide QC meetings and state best practices conferences for local offices to discuss error rate actions taken and common problems; and sponsoring local office participation in FNS regional conferences. Table 6 summarizes these and other factors contributing to the declining error rate. Since 1988, Labor has reported a national improper payment estimate for its UI Program. As part of the BAM program's quality control, each state is responsible for selecting representative samples and investigating the accuracy of the benefit determinations, benefit payments, and recoveries. The results of these reviews are integrated with the BAM system to identify erroneously paid claims. UI overpayments at a national level have fluctuated over the past 16 years. The lowest reported national error rate occurred in 1991 at 7.5 percent, while the highest national error rate occurred in 1988 at 10.1 percent, as shown in figure 3. We also noted that since 2001, UI's national error rate has steadily increased. Labor attributes the rise in error rates to an increase in payments to claimants who improperly continue to claim benefits despite having returned to work. Although the combined dollar amount of overpayments and underpayments decreased between calendar years 2003 and 2004, the national error rate increased from 9.9 percent in calendar year 2003 to 10.6 percent in calendar year 2004. At the state level, overpayments in calendar year 2003 ranged from $2,829,017 to $450,073,624, while underpayments ranged from $100,263 to $37,825,338. In calendar year 2004, overpayments ranged from $2,250,919 to $317,991,985 and underpayments ranged from $20,184 to $40,330,046. Table 7 lists the UI improper payment overpayments and underpayments by state.
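The BAM approach described above rests on investigating a representative sample of paid claims and projecting the results to all payments. The following minimal sketch in Python shows that projection with hypothetical sample results; the actual BAM methodology applies state-specific sample designs and computes confidence intervals, which the sketch omits.

    # Hypothetical BAM-style sample results for one state, in dollars.
    sampled_benefits_paid = 4_200_000.0   # benefits paid on the investigated sample claims
    sampled_improper = 390_000.0          # improper payments found among those claims

    # Annual benefits paid by the state (hypothetical).
    annual_benefits_paid = 1_850_000_000.0

    # Ratio estimate: apply the sample's improper payment rate to all payments.
    sample_error_rate = sampled_improper / sampled_benefits_paid
    projected_improper = sample_error_rate * annual_benefits_paid

    print(f"sample error rate: {sample_error_rate:.1%}; "
          f"projected annual improper payments: ${projected_improper:,.0f}")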
In addition to its leadership role in producing improper payment estimates on a national level, Labor has initiated the NDNH pilot for the UI Program to further assist in identifying, detecting, and preventing improper payments. In fiscal year 2005, three states (Texas, Utah, and Virginia) participated in the pilot. Labor initiated the NDNH pilot to determine how a cross-match between NDNH and state UI claimant data would help identify and reduce improper payments. For further review of Labor’s pilot project, we visited the state of Texas. Texas’s participation in the NDNH pilot was through its Texas Workforce Commission (TWC). During this pilot, TWC conducted three matches of the state UI claimant data against the NDNH’s new hire data, UI claimant data, and quarterly wage data to identify potential overpayments. Generally, to perform these matches, TWC electronically transmitted state UI claimant data to HHS’s OCSE. OCSE then compared the state UI claimant data to data in the NDNH. Potential matches of claimants who may have improperly received unemployment benefits were then transmitted to TWC. TWC investigated all matches to determine the validity and amount of overpayment. According to TWC, using the national cross-match along with the statewide cross-match helped detect 50 percent more cases of potential fraud in one quarter than it would have detected otherwise. Besides the NDNH pilot, Texas also communicated to us that it had several other actions in place to manage UI improper payments. In July 2004, the Texas governor issued an executive order for each state agency to report on efforts to assess risk in the agency; identify best practices for eliminating fraud in contracting, contract management, and procurement; and describe common components for fraud prevention and elimination programs. Each agency was also to develop a fraud prevention program. Additionally, the executive order required TWC to prioritize prevention, detection, and elimination of fraud and abuse in the UI Program by identifying any state policies, weaknesses in computer cross-matching systems, and other appropriate factors that are ineffective in preventing fraud and abuse; developing strategies to address benefit fraud and claims overpayments; identifying and implementing national best practices for detecting and prosecuting fraudulent schemes, identifying cost-effective strategies designed to eliminate fraud, and increasing recovery of overpayments. Further, TWC has been educating employers on their responsibilities to provide TWC with information to make benefit determinations. For example, TWC sent letters to those employers that have a history of not providing complete or timely information during the initial claims investigation. These letters reiterated employers’ responsibilities and TWC’s expectations for receiving timely information during an investigation. Based on its NDNH pilot results, Labor reported in its fiscal year 2005 PAR that a substantial amount of additional overpayments could be detected using the database. In addition, Labor reported that it is already moving ahead with full implementation of the NDNH cross-match with 5 states (Connecticut, Texas, Utah, Virginia, and Washington). Labor expects 29 states to use NDNH by the end of fiscal year 2006. 
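The Texas cross-match works by comparing claimants who are still drawing UI benefits against new-hire records and then investigating the hits. The following minimal Python sketch illustrates the matching step; the record layouts, Social Security numbers, and dates are illustrative assumptions, not TWC's or OCSE's actual data formats, and in practice every match must still be investigated before an overpayment is established.

    # Hypothetical UI claimants with active claims: SSN -> date of most recent claimed week.
    active_claimants = {
        "123-45-6789": "2005-06-18",
        "987-65-4321": "2005-06-18",
        "555-12-3456": "2005-06-11",
    }

    # Hypothetical new-hire records reported by employers: SSN -> (employer, start date).
    new_hires = {
        "987-65-4321": ("Acme Manufacturing", "2005-05-02"),
        "222-33-4444": ("Riverside Clinic", "2005-06-01"),
    }

    # Claimants who appear in the new-hire file with a start date earlier than their most
    # recently claimed week are potential overpayments to refer for investigation.
    # (ISO-formatted date strings compare correctly as text.)
    potential_overpayments = [
        (ssn, employer, start_date)
        for ssn, (employer, start_date) in new_hires.items()
        if ssn in active_claimants and start_date < active_claimants[ssn]
    ]
    print(potential_overpayments)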
In addition to funding initiatives related to the new hire cross-matches, Labor has announced that it will give states an additional incentive to prevent and detect overpayments by incorporating core measures, based on the level of overpayments the states have detected, into states' performance budget plans. Labor's fiscal year 2006 budget request contained a legislative proposal that is designed to give states the means to obtain funding for integrity activities, including additional staff, to enhance recovery and prevent overpayments. Also, to reduce overpayments and facilitate reemployment, Labor awarded Reemployment and Eligibility Assessments grants to 21 states during fiscal year 2005. The grants have been used to conduct in-person claimant interviews to assess UI beneficiaries' need for reemployment services and their continued eligibility for benefits and to ensure that beneficiaries understand that they must stop claiming benefits upon their return to work. Further, Labor continues to promote data sharing with other agencies, such as the Social Security Administration, to identify, detect, and prevent improper payments. In its fiscal years 2004 and 2005 PARs, DOT reported a zero-dollar amount for its improper payment estimate for the Highway Planning and Construction Program. To enhance its reporting of improper payments, DOT conducted a pilot in the state of Tennessee. DOT completed this project in the summer of 2005. Testing disclosed three underpayments, one of which was determined by DOT to be statistically insignificant. An extrapolation of the other two errors to the population of payments for that construction project resulted in an improper payment estimate of $111,671. The sample was not designed to produce an estimate for the Tennessee statewide Highway Planning and Construction Program. DOT noted in its fiscal year 2005 PAR that the Tennessee pilot resulted in a methodology and testing procedures that will be used nationwide, but that the testing procedures may need to be modified based on each state's grant management policies. DOT plans to pilot the project in more volunteer states in fiscal year 2006 and extend the process nationwide in fiscal year 2007. In addition to participating in the pilot, states work to reduce improper payments by implementing computer software to detect fraud and abuse. One such tool is the Transportation Software Management Solution, which was used by several state programs in their Highway Planning and Construction programs and contains a Bid Analysis Management System that allows highway agencies to analyze bids for collusion. At the federal level, DOT's planned improper payment initiatives recognize the inherently higher risk of improper payments resulting from the concentrated and accelerated spending related to Hurricanes Katrina and Rita. Fiscal year 2006 Highway Planning and Construction Program testing will be focused on these hurricane regions. In its fiscal year 2006 PAR, DOT will provide interim information on the amounts and causes of improper payments and control procedures that can be used to prevent or detect improper payments in national emergency situations. Because of the variations in the states' Medicaid programs, CMS provided states the option of testing either the fee-for-service (FFS) or the managed care component, including testing eligibility for the two components.
The rates for the 12 states that participated in the Payment Accuracy Measurement (PAM) pilot for Year 2 (fiscal year 2003) ranged from 0.3 percent to 18.6 percent for the FFS component and 0 percent to 2.5 percent for the managed care component. The rates for the 24 states that participated in the Year 3 PAM pilot (fiscal year 2004) ranged from 0.80 percent to 54.3 percent for the FFS component and 0 percent to 7.45 percent for the managed care component. Rates for the Year 1 Payment Error Rate Measurement (PERM) pilot (fiscal year 2005) had not been published at the conclusion of our fieldwork. Although all states used a standard methodology to produce the rates, CMS noted that these rates should not be compared among states. Specifically, states applied different administrative standards that resulted in a lack of a common approach to the reviews among states. For medical reviews, states have different policies against which the reviews are conducted. For eligibility reviews, states had two review options under the PAM Year 3 pilot for verifying program eligibility. Other differences include the level of provider cooperation in submitting information and whether states conducted reviews in-house or contracted with vendors to perform the reviews. CMS identified Texas as having a leadership role in estimating improper payments for its Medicaid program. Texas was estimating improper payments prior to participating in the CMS pilots. Under state statute, effective 1997, Texas was required to biennially estimate improper payments for its Medicaid program. In September 2003, the state of Texas passed another statute that, among other things, funded 200 additional positions to investigate Medicaid fraud. Texas has also initiated a Medicaid Integrity Pilot (MIP) project to assist in preventing improper payments. The MIP project incorporates the use of biometric technology, such as fingerprint imaging and smart cards, as eligibility verification tools. For example, Texas issues smart cards to Medicaid clients participating in the pilot and smart card and biometric readers to medical providers. When a client obtains services, he or she inserts the card into the smart card reader and positions his or her finger on the biometric reader, which compares the print to the fingerprint image contained on the card. The use of this type of technology promotes positive identification, incorporates automated eligibility determination, and assists in an electronic billing process. Furthermore, Texas has performed a feasibility study to consolidate multiple program benefits onto a single card called an Integrated Benefits Card (IBC). This study has identified four primary benefit programs for consolidation—Medicaid; TANF; Food Stamps; and Women, Infants, and Children. Texas believes that the IBC may serve the needs of the Medicaid program by preventing fraud, making payments to medical providers more quickly, and offering a means for providers to quickly and accurately verify the eligibility of a client. Beyond these initiatives, CMS has taken additional steps programwide to estimate improper payments at the national level. See table 9 for a detailed description of actions taken. In October 2005, CMS published an interim final rule, with plans to publish a final rule that would include responses to comments received. According to the interim final rule, states would be stratified based on the states' annual FFS Medicaid expenditures from the previous year, and a random sample of up to 18 states would be reviewed. States would only be selected once every 3 years.
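The rotating design sketched in the interim final rule, stratifying states by FFS spending and reviewing roughly a third of them each year so that every state is measured once per 3-year cycle, can be illustrated as follows. The Python sketch below assigns hypothetical states to review years under assumed spending figures; it is one plausible way to implement such a rotation, not CMS's actual assignment procedure.

    import random

    random.seed(2006)

    # Hypothetical prior-year FFS Medicaid expenditures for the 50 states and D.C., in dollars.
    ffs_spending = {f"State {i:02d}": random.randint(1, 40) * 1_000_000_000 for i in range(1, 52)}

    # Stratify states by spending, then spread each stratum's states across the 3 review
    # years so that every state is reviewed exactly once per 3-year cycle.
    ranked = sorted(ffs_spending, key=ffs_spending.get, reverse=True)
    strata = [ranked[i::3] for i in range(3)]        # high-, mid-, and low-spending strata

    review_years = {1: [], 2: [], 3: []}
    for stratum_index, stratum in enumerate(strata):
        for position, state in enumerate(random.sample(stratum, len(stratum))):
            review_years[(position + stratum_index) % 3 + 1].append(state)

    print({year: len(states) for year, states in review_years.items()})  # about 17 states per year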
The interim final rule also outlines the strategy for conducting medical and data-processing reviews on FFS claims only. CMS will address estimating improper payments for Medicaid's managed care and eligibility components at a later time. In November 2005, CMS sent a memo to the states selected for review during fiscal year 2006. Subsequent to the publication of the October 2005 interim final rule, CMS stated that it anticipates selecting 17 states each year so that each state and the District of Columbia would be selected only once every 3 years. This approach would exclude any U.S. territories or possessions that receive Medicaid funds. In a discussion with us, CMS's consultant firm stated that the sampling approach to be employed was statistically valid because every state was assigned, by stratum, to one of the 3 years at the outset of the process and thus each state had an equal chance of being selected for years 1 through 3. Because CMS's sampling methodology, including sampling plans, had not been fully documented by the conclusion of our fieldwork, we were unable to independently assess the statistical validity of CMS's approach to obtain a national improper payment estimate for its Medicaid program. In its fiscal year 2005 PAR, HHS also identified efforts to detect and reduce improper payments through activities other than the pilot project. For example, HHS's Health Care Fraud and Abuse Control Office has two projects under way that will assist in reporting improper payments. The office plans to hire 100 staff to conduct prospective reviews of state Medicaid operations and to carry out the Medicare/Medicaid data match program, which is intended to identify areas where efficiencies could be made to enhance payment accuracy. Additionally, HHS expects to improve its data match capabilities to detect improper payments for Medicaid, as well as other programs, through the use of its Public Assistance Reporting Information System (PARIS). PARIS is a voluntary project that enables the 33 participating states' public assistance data to be matched against several databases to help maintain program integrity and to detect and deter improper payments. CMS expects to be fully compliant with the IPIA requirements for its Medicaid program by fiscal year 2008. In addition to the contact named above, Carla Lewis, Assistant Director; Verginie Amirkhanian; Francine DelVecchio; Louis Fernheimer; Danielle Free; Wilfred Holloway; Stuart Kaufman; Donell Ries; and Bill Valsa made important contributions to this report. Financial Management: Challenges Continue in Meeting Requirements of the Improper Payments Information Act. GAO-06-581T. Washington, D.C.: April 5, 2006. Financial Management: Challenges Remain in Meeting Requirements of the Improper Payments Information Act. GAO-06-482T. Washington, D.C.: March 9, 2006. Financial Management: Challenges in Meeting Governmentwide Improper Payment Requirements. GAO-05-907T. Washington, D.C.: July 20, 2005. Financial Management: Challenges in Meeting Requirements of the Improper Payments Information Act. GAO-05-605T. Washington, D.C.: July 12, 2005. Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005. Financial Management: Challenges in Meeting Requirements of the Improper Payments Information Act. GAO-05-417. Washington, D.C.: March 31, 2005. Medicaid Program Integrity: State and Federal Efforts to Prevent and Detect Improper Payments. GAO-04-707.
Over the past several years, GAO has reported that federal agencies are not well positioned to meet requirements of the Improper Payments Information Act of 2002 (IPIA). For fiscal year 2005, estimated improper payments exceeded $38 billion but did not include some of the highest risk programs, such as Medicaid with outlays exceeding $181 billion for fiscal year 2005. Overall, state-administered programs and other nonfederal entities receive over $400 billion annually in federal funds. Thus, federal agencies and states share responsibility for the prudent use of these funds. GAO was asked to determine actions taken at the state level to help federal agencies estimate improper payments for state-administered federal programs and assistance needed from the federal level to support the respective federal agencies' implementation of IPIA. To date, states have been subject to limited requirements to assist federal agencies in estimating improper payments. For the 25 major state-administered federal programs surveyed, only 2 programs--the Food Stamp and Unemployment Insurance programs--have federal requirements for all states to estimate improper payments. A limited number of federal agencies are conducting pilots to estimate improper payments in other programs, but state participation is voluntary. Where no federal requirement or pilot is in place, 5 programs involving 11 states had estimated improper payments during fiscal years 2003 or 2004. States have a fundamental responsibility to ensure the proper administration of federal awards by using sound management practices and maintaining internal controls. To do this, states reported using a variety of techniques to prevent and detect improper payments. All states, except for one, responded that they use computer-related techniques, such as fraud and abuse detection programs or data matching, to prevent or detect improper payments. Other techniques selected states used included performing statewide assessments and recovery auditing methods. States also reported receiving federal incentives and penalties to assist with reducing improper payments, although most of these actions related to the Food Stamp Program, which gives incentives and penalties to states having error rates below and above the program's national error rate. Of the 240 state program officials surveyed, 100 identified tools that would be needed to estimate improper payments and help federal agencies meet various IPIA requirements, including guidance on estimating improper payments and performing risk assessments. OMB has begun planning for increased state involvement in measuring and reporting improper payments via the Erroneous and Improper Payments Workgroup and IPIA guidance. However, much work remains at the federal level to identify and estimate improper payments for state-administered federal programs, including determining the nature and extent of states' involvement to assist federal agencies with IPIA reporting requirements.
You are an expert at summarizing long articles. Proceed to summarize the following text: Our work has repeatedly shown that mission fragmentation and program overlap are widespread in the federal government. In 1998 and 1999, we found that this situation existed in 12 federal mission areas, ranging from agriculture to natural resources and environment. We also identified, in 1998 and 1999, 8 new areas of program overlap, including 50 programs for the homeless that were administered by eight federal agencies. These programs provided services for the homeless that appeared to be similar. For example, 23 programs operated by four agencies offered housing services, and 26 programs administered by 6 agencies offered food and nutrition services. Although our work indicates that the potential for inefficiency and waste exists, it also shows areas where the intentional participation by multiple agencies may be a reasonable response to a complex public problem. In either situation, implementation of federal crosscutting programs is often characterized by numerous individual agency efforts that are implemented with little apparent regard for the presence of efforts of related activities. In our past work, we have offered several possible approaches for better managing crosscutting programs—such as improved coordination, integration, and consolidation—to ensure that crosscutting goals are consistent; program efforts are mutually reinforcing; and, where appropriate, common or complementary performance measures are used as a basis for management. One of our oft-cited proposals is to consolidate the fragmented federal system to ensure the safety and quality of food. Perhaps most important, however, we have stated that the Results Act could provide the Office of Management and Budget (OMB), agencies, and Congress with a structured framework for addressing crosscutting program efforts. OMB, for example, could use the governmentwide performance plan, which is a key component of this framework, to integrate expected agency-level performance. It could also be used to more clearly relate and address the contributions of alternative federal strategies. Agencies, in turn, could use the annual performance planning cycle and subsequent annual performance reports to highlight crosscutting program efforts and to provide evidence of the coordination of those efforts. OMB guidance to agencies on the Results Act states that, at a minimum, an agency’s annual plan should identify those programs or activities that are being undertaken with other agencies to achieve a common purpose or objective, that is, interagency and crosscutting programs. This identification need cover only programs and activities that represent a significant agency effort. An agency should also review the fiscal year 2003 performance plans of other agencies participating with it in a crosscutting program or activity to ensure that related performance goals and indicators for a crosscutting program are consistent and harmonious. As appropriate, agencies should modify performance goals to bring about greater synergy and interagency support in achieving mutual goals. In April 2002, as part of its spring budget planning guidance to agencies for preparing the President’s fiscal year 2004 budget request, OMB stated that it is working to develop uniform evaluation metrics, or “common measures” for programs with similar goals. 
OMB asked agencies to work with OMB staff to develop evaluation metrics for several major crosscutting, governmentwide functions as part of their September budget submissions. According to OMB, such measures can help raise important questions and help inform decisions about how to direct funding and how to improve performance in specific programs. OMB’s common measures initiative initially focused on the following crosscutting program areas: low income housing assistance, job training and employment, health. We recently reported that one of the purposes of the Reports Consolidation Act of 2000 is to improve the quality of agency financial and performance data. We found that only 5 of the 24 Chief Financial Officers (CFO) Act agencies’ fiscal year 2000 performance reports included assessments of the completeness and reliability of their performance data in their transmittal letters. The other 19 agencies discussed, at least to some degree, the quality of their performance data elsewhere in their performance reports. To address these objectives, we first defined the scope of each crosscutting program area as follows: Border control focuses on major federal security policies and operations that manage and govern the entry of people, animals, plants, and goods into the United States through air, land, or seaports of entry. Flood mitigation and insurance focuses on major federal efforts to proactively reduce the loss in lives and property due to floods and minimize the postflood costs of repair and construction. Wildland fire management focuses on major federal efforts to reduce accumulated hazardous fuels on public lands. Wetlands focuses on major federal efforts to protect and manage this resource, such as restoration, enhancement, and permitting activities. To identify the agencies involved in each area we relied on previous GAO work and confirmed the agencies involved by reviewing the fiscal year 2001 Results Act performance report and fiscal year 2003 Results Act performance plans for each agency identified as contributing to the crosscutting program area. One of the agencies we identified as being involved in the areas of flood mitigation and wetlands was the U.S. Army Corps of Engineers (Corps). Although we identify the Corps, we do not comment on the agency because, as noted above, the Department of Defense did not submit a fiscal year 2001 performance report or fiscal year 2003 performance plan and was not included in our review. To address the remaining objectives, we reviewed the fiscal year 2001 performance reports and fiscal year 2003 performance plans and used criteria contained in the Reports Consolidation Act of 2000 and OMB guidance. The act requires that an agency’s performance report include a transmittal letter from the agency head containing, in addition to any other content, an assessment of the completeness and reliability of the performance and financial data used in the report. It also requires that the assessment describe any material inadequacies in the completeness and reliability of the data and the actions the agency can take and is taking to resolve such inadequacies. OMB guidance states that an agency’s annual plan should include a description of how the agency intends to verify and validate the measured values of actual performance. The means used should be sufficiently credible and specific to support the general accuracy and reliability of the performance information that is recorded, collected, and reported. 
We did not include any changes or modifications the agencies may have made to the reports or plans after they were issued, except in cases in which agency comments provided information from a published update to a report or plan. Furthermore, because of the scope and timing of this review, information on the progress agencies may have made on addressing their management challenges during fiscal year 2002 was not yet available. We did not independently verify or assess the information we obtained from agency performance reports and plans. Also, that an agency chose not to discuss its efforts to coordinate in these crosscutting areas in its performance reports or plans does not necessarily mean that the agency is not coordinating with the appropriate agencies. We conducted our review from September through November 2002, in accordance with generally accepted government auditing standards. As shown in table 1, multiple agencies are involved in each of the crosscutting program areas we reviewed. The discussion of the crosscutting areas below summarizes detailed information contained in the tables that appear in appendix I through IV. Hostile nations, terrorist groups, transnational criminals, and even individuals may target American people, institutions, and infrastructure with weapons of mass destruction and outbreaks of infectious disease. Given these threats, successful control of our borders relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. Activities that are hampered by organizational fragmentation, technological impediments, or ineffective collaboration blunt the nation’s collective efforts to secure America’s borders. Each of the five agencies we reviewed in the area of border control— Agriculture, Justice, State, Transportation, and Treasury—discussed in their performance reports and/or plans the agencies they coordinated with on border control issues, although the specific areas of coordination and level of detail provided varied. For example, Agriculture, which focuses on reducing pest and disease outbreaks and foodborne illnesses related to meat, poultry, and egg products in the United States, discusses coordination with a different set of agencies than the other four agencies, which share a focus on border control issues related to travel, trade, and immigration. Agriculture stated that it is a key member of the National Invasive Species Council, which works with other nations to deal with the many pathways by which exotic pests and diseases could enter the United States. Agriculture also stated that it coordinates with the Department of Health and Human Services and EPA on food safety issues. Although Agriculture states it is responsible for inspecting imported products at ports of entry, it does not specifically describe any coordination with the Customs Service within Treasury or the Border Patrol within Justice. In its combined performance report and plan, Transportation provided general statements that the Coast Guard regularly coordinates with a variety of agencies on immigration issues and potential international agreements to ensure security in ports and waterways. However, Transportation provided a more extensive discussion of the coordination and roles played by bureaus within the agency. 
For example, for its goal to ensure that sea-borne foreign and domestic trade routes and seaports remain available for the movement of passengers and cargo, Transportation states that the Transportation Security Administration, the Maritime Administration (MARAD), and the Coast Guard will coordinate with the international community and federal and state agencies to improve coordination of container identification, tracking, and inspection. As an example of the roles described, Transportation states that the Coast Guard and MARAD will test deployment plans through port security readiness exercises. In its performance report, State listed the partners it coordinates with for each performance goal, but did not always provide details about the coordination that was undertaken. Both Justice and Treasury discuss expanded cooperation through BCI, which includes Agriculture, Customs, the Coast Guard, the Immigration and Naturalization Service (INS), and other federal, state, local, and international agencies. According to Customs, BCI efforts toward increased cooperation among partner agencies included cross-training, improved sharing of intelligence, community and importer outreach, improved communication among agencies using radio technology, and cooperative operational and tactical planning. Of the five agencies we reviewed, only Justice reported meeting all of its fiscal year 2001 performance goals related to securing America’s borders. Transportation reported not meeting either of its two goals related to border control, but provided explanations and strategies for meeting the goals in the future that appeared reasonable. For example, Transportation said it did not meet its target for the percentage of undocumented migrants interdicted and/or deterred via maritime routes because socioeconomic and political conditions here and abroad caused variations in illegal migration patterns. To meet the target in the future, the Coast Guard plans to operate along maritime routes and establish agreements with source countries to reduce migrant flow. For its two performance goals related to border control, State reported progress in meeting its goal of reducing the risk of illegitimate entry of aliens hostile to the nation’s interest, but not meeting the immigrant visa targets. State explained that it failed to meet this goal due to extremely high demand for visa numbers from INS to adjust the status of large numbers of aliens already in the United States, but did not provide any specific strategies for meeting this goal in the future. Treasury reported meeting its targets for all but two of its seven measures related to its strategic goal of protecting the nation’s borders and major international terminals from traffickers and smugglers. Treasury did not provide reasonable explanations for either shortfall and did not discuss strategies for achieving those targets in the future. Agriculture reported meeting all but one of its performance targets for its three goals. The unmet performance target for significantly reducing the prevalence of salmonella on broiler chickens fell under Agriculture’s goal of creating a coordinated national and international food safety risk management system. Agriculture provides a reasonable explanation for the shortfall, but it is not clear from the discussion whether the cause is a domestic or an international issue.
According to their performance plans, the five agencies generally aimed to achieve the same goals as those reported on in fiscal year 2001, with targets adjusted to reflect higher performance levels. Transportation reported that it established a new performance goal and related measure in fiscal year 2002 that would also be included in the fiscal year 2003 plan. The new goal is to ensure that sea-borne foreign and domestic trade routes and seaports remain available for the movement of passengers and cargo. The new measure is the percentage of high-interest vessels screened, with a target of 100 percent for fiscal year 2003. Three of the five agencies—Agriculture, Justice, and Transportation— discussed strategies that appeared to be reasonably linked to achieving their fiscal year 2003 goals. For example, Transportation discusses strategies for each of its goals. For its new goal Transportation describes strategies, such as increasing intelligence efforts in ports; improving advanced information on passengers, crew, and cargo; and establishing or improving information and intelligence fusion centers in Washington and on both coasts. It also identified more specific efforts, such as increasing boarding and escort operations to protect vessels carrying large numbers of passengers and vessels with dangerous cargo, such as liquefied natural gas or other volatile products, from becoming targets. In contrast, Customs discussed a more limited “strategic context” for each of its goal areas and other information in sections pertaining to specific Customs activities, both of which varied in the level of detail. For example, for its goal of contributing to a safer America by reducing civil and criminal activities associated with the enforcement of Customs laws, Customs defined challenges and constraints to achieving the goal and mentions that it is playing a major role in the interdiction and detection of weapons of mass destruction entering or leaving the United States, including increased vessel, passenger, and cargo examinations. For the most part, State provided only general statements of how it plans to achieve its fiscal year 2003 goals. For example, regarding its visa issuance goal, State said it has committed itself to improving its visa procedures and coordination with other agencies and departments. Regarding the completeness, reliability, and credibility of their reported performance data, Agriculture, Justice, Transportation, and Treasury provided general statements about the quality of their performance data and provided some information about the quality of specific performance data. For example, Transportation provided extensive information on its measures and data sources that allow for an assessment of data quality. The information includes (1) a description of the measure, (2) scope, (3) source, (4) limitations, (5) statistical issues, and (6) verification and validation. Other explanatory information is provided in a comment section of Transportation’s combined performance plan and report. State did not provide consistent or adequate information for the border-control- related data sources to make judgments about data reliability, completeness, and credibility. For the most part, State provided only a few words on the data source, data storage, and frequency of the data. Floods have inflicted more economic losses upon the United States than any other natural disaster. 
Since its inception 34 years ago, the National Flood Insurance Program (NFIP) has combined flood hazard mitigation efforts and insurance to protect homeowners against losses from floods. The program, which is administered by FEMA, provides an incentive for communities to adopt floodplain management ordinances to mitigate the effects of flooding upon new or existing structures. It offers property owners in participating communities a mechanism—federal flood insurance—to cover flood losses without increasing the burden on the federal government to provide disaster relief payments. Virtually all communities in the country with flood-prone areas now participate in NFIP, and over 4 million U.S. households have flood insurance. The two agencies we reviewed—Agriculture and FEMA—generally address coordination efforts regarding the issue of flood mitigation. Agriculture states in its report and plan that it works with other agencies, such as FEMA and the Corps, to obtain data regarding its goal related to flood mitigation. However, Agriculture does not further specify coordination activities. FEMA’s fiscal year 2001 performance report does not state which agencies it collaborates with to achieve goals related to flood mitigation and insurance. FEMA’s plan provides an appendix that outlines the crosscutting activities and partner agencies associated with its flood mitigation and preparedness activities. For example, FEMA states it is the chair of the President’s Long-Term Recovery Task Force, which helps state and local governments to identify their needs related to the long-term impact of a major, complex disaster. Agencies FEMA coordinates on this effort with include the departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Housing and Urban Development, the Interior, Labor, and Transportation, among other organizations. Agriculture reported that it did not meet its only fiscal year 2001 goal related to flood mitigation—providing benefits to property and safety through flood damage reduction by completing 81 watershed protection structures. Agriculture explained that it did not meet the goal because (1) complex engineering can result in watershed protection structures taking several years to complete, (2) multiple funding sources, including federal, state, and local funds, may alter the schedule for completing the structures, and (3) external factors such as weather and delays in obtaining land rights and permits caused delays in construction. Agriculture states that many of the structures that were not completed in time for the fiscal year 2001 report will be complete in the next few months. FEMA reported meeting all but one of its fiscal year 2001 goals and indicators related to flood mitigation and insurance. FEMA’s five goals were (1) prevent loss of lives and property from all hazards, (2) collect and validate building and flood loss data, confirm that the reduction in estimated losses from NFIP activities exceeds $1 billion, and continue systematic assessment of the impact and effectiveness of NFIP, (3) increase the number of NFIP policies in force by 5 percent over the end of the fiscal year 2000 count, (4) improve the program’s underwriting ratio, and (5) implement NFIP business process improvements. FEMA reported that it did not meet the third goal, explaining that, although the end of year policy count for fiscal year 2001 increased, the retention rates for existing policies were not maintained. 
FEMA outlined three strategies that appeared reasonably linked to achieving the unmet goal in the future: (1) placing two new fiscal year 2002 television commercials that emphasize the importance of buying and keeping National Flood Insurance, (2) establishing retention goals for “Write Your Own” companies, private insurance companies that write flood insurance under a special arrangement with the federal government, and (3) targeting its marketing strategies toward those properties no longer on the books. Because it revised its strategic plan, FEMA reorganized the layout of its fiscal year 2003 performance plan. Nevertheless, FEMA’s fiscal year 2003 performance goals and measures are similar to those that appear in its fiscal year 2001 performance plan. FEMA merged its goal of implementation of NFIP business process improvements into its fiscal year 2003 goal of improving NFIP’s “bottom line,” an income-to-expense ratio, by 1 percent. In addition, FEMA merged two other goals: (1) prevent loss of lives and property from all hazards and (2) collect and validate building and flood loss data, confirm that the reduction in estimated losses from NFIP activities exceeds $1 billion, and continue the systematic assessment of the impact and effectiveness of NFIP. FEMA adopted one new goal in its fiscal year 2003 plan related to modernizing its floodplain mapping. Agriculture expects to continue making progress on its goal of providing benefits to property and safety through flood damage reduction, but has adopted a new approach to achieving the goal. Agriculture appears to have dropped its target for completing new watershed protection structures and instead plans to implement a new program of rehabilitating aging dams. Overall, the strategies Agriculture and FEMA plan to use appear to be reasonably linked to achieving their fiscal year 2003 goals. For example, to support its fiscal year 2003 performance goals, FEMA outlines several strategies, such as increasing the number of Emergency Action Plans in communities located below significant and potentially high-hazard dams. In its fiscal year 2001 Annual Performance and Accountability Report, FEMA states “the performance measurement criteria and information systems are thought to be generally effective and reliable.” FEMA does not individually identify data quality assessment methods for any of its performance indicators. However, it acknowledges a data limitation for one of its goals relating to business process improvement. FEMA explained that it relied on trend data to assess its performance in customer service for fiscal year 2001 because of a delay in obtaining OMB approval for distributing its customer surveys that year. FEMA states that it plans to conduct the surveys in fiscal year 2002 to obtain more accurate information. Agriculture addresses this issue at the beginning of its report by stating, “performance information supporting these performance goals is of sufficient quality and reliability except where otherwise noted in this document.” Agriculture also states that the data reported by state offices for fiscal year 2001 are accurate. According to estimates by FWS, more than half of the 221 million acres of wetlands that existed during colonial times in what is now the contiguous United States have been lost. These areas, once considered worthless, are now recognized for the variety of important functions that they perform, such as providing wildlife habitats, maintaining water quality, and aiding in flood control. 
Despite the passage of numerous laws and the issuance of two presidential orders for protecting wetlands, no specific or consistent goal for the nation’s wetlands-related activities existed until 1989. Recognizing the value of wetlands, in 1989, President George Bush established the national goal of no net loss of wetlands. However, the issue of wetlands protection and the various federal programs that have evolved piecemeal over the years to protect and manage this resource have been subjects of continued debate. We previously reported that for the six major agencies involved in and responsible for implementing wetlands-related activities—the Corps, Agriculture’s Farm Service Agency (FSA) and Natural Resources Conservation Service (NRCS), Interior’s FWS, Commerce’s NOAA, and EPA—the consistency and reliability of wetlands acreage data reported by these federal agencies were questionable. Moreover, we reported that the agencies’ reporting practices did not permit the actual accomplishments of the agencies—that is, the number of acres restored, enhanced, or otherwise improved—to be determined. These reporting practices included inconsistencies in the use of terms to describe and report wetlands-related activities and the resulting accomplishments, the inclusion of nonwetlands acreage in wetlands project totals, and the double counting of accomplishments. We recommended that these agencies develop and implement a strategy for ensuring that all actions contained in the Clean Water Action Plan related to wetlands data are adopted governmentwide. Such actions included, in addition to the ongoing effort to develop a single set of accurate, reliable figures on the status and trends of the nation’s wetlands, the development of consistent, understandable definitions and reporting standards that are used by all federal agencies in reporting their wetlands-related activities and the changes to wetlands that result from such activities. The agencies we reviewed generally discussed the need to coordinate with other agencies in their performance plans, but provided little detail on the level of coordination or specific coordination strategies. Agriculture’s annual performance plan includes a strategy to work with other federal agencies and partners to identify priority wetlands that could benefit from conservation practices in the surrounding landscape. Neither of the bureaus within Agriculture—FSA or NRCS—specifically discussed coordination on wetlands issues in their performance reports or plans. Interior’s annual performance report and plan indicate that it will work with Agriculture, EPA, the Corps, the Federal Energy Regulatory Commission (FERC), and the states on wetlands issues. EPA discusses cooperation with the Corps, NOAA’s National Marine Fisheries Service within Commerce, FEMA, FWS within Interior, and NRCS within Agriculture, but provides no specifics. Both Commerce and NOAA indicate that they work with other federal agencies to address crosscutting issues. Although NOAA mentions that it works closely with other agencies on a number of crosscutting issues to address critical challenges facing coastal areas, its plan does not specifically mention coordination with other agencies on wetlands issues. Each of the agencies we reviewed had goals related to wetlands that it reported having met or exceeded in fiscal year 2001. For example, FWS within Interior reported that it restored or enhanced 144,729 acres of wetlands habitat on non-FWS lands, exceeding its goal of 77,581 acres. 
However, FWS did not report on the number of acres of wetlands restored or enhanced on FWS lands and did not distinguish between the number of acres restored and the number enhanced. Furthermore, several of the agencies included nonwetlands acreage when reporting their accomplishments, and NOAA changed its performance measure from acres of coastal wetlands restored to acres benefited. Consequently, the contributions made by these agencies toward achieving the national goal of no net loss of the nation’s remaining wetlands cannot be determined from their reports. Each of the agencies we reviewed had plans to create, restore, enhance, and/or benefit additional wetlands acreage in fiscal year 2003, although the targets were in some cases lower than the targets for fiscal year 2001. Of the agencies we reviewed, only NRCS indicated in its plan that its progress would contribute to the national goal of no net loss of wetlands. The strategies the agencies planned to use appeared to be reasonably linked to achieving their fiscal year 2003 goals. For example, FSA planned to use the same strategy it has successfully used in past years to achieve its goals— working with producers to enroll land in the Conservation Reserve Program. Regarding the completeness, reliability, and credibility of the performance data reported, agency discussions varied in the specifics they provided. NOAA and FWS had overall discussions of the sources of their performance data and the verification procedures they followed in their performance reports. Within Agriculture, while FSA reported on the sources and processes used to develop the data reported for the number of wetlands acres restored, NRCS discussed its requirement that each state conservationist verify and validate the state’s performance data. NRCS also acknowledged that some discrepancies were noted when the performance data were analyzed, but indicated that there was no compelling reason to discount the performance data reported. Two agencies—FWS and EPA—acknowledged shortcomings in the data, including the possibility of double counting performance data. EPA also indicated that the measure might not reflect actual improvements in the health of the habitat. While FWS does not discuss any steps to resolve or minimize the shortcomings in its data, EPA described improvements it made to make data reported more consistent. FSA indicated some limitations to its data for the Conservation Reserve Program, which it attributed to lags between the date a contract is signed with a producer and when the data are entered, the continual updating of the contract data, and the periodic changes in contract data, but did not discuss any steps to resolve the limitation. We recently testified that the most extensive and serious problem related to the health of forested lands—particularly in the interior West—is the overaccumulation of vegetation, which is causing an increasing number of large, intense, uncontrollable, and destructive wildfires. In 1999, Agriculture’s Forest Service estimated that 39 million acres of national forested lands in the interior West were at high risk of catastrophic wildfire. This figure later grew to over 125 million acres as Interior agencies and states identified additional land that they considered to be high risk. To a large degree, these forest health problems contributed to the wildfires in the year 2000—which were some of the worst in the last 50 years. 
The policy response to these problems was the development of the National Fire Plan—a long-term, multibillion-dollar effort to address the wildland fire threats we are now facing. Our work on wildland fire has stressed the need for three things: (1) a cohesive strategy to address growing threats to national forest resources and nearby communities from catastrophic wildfires, (2) clearly defined and effective leadership to carry out that strategy in a coordinated manner, and (3) accountability to ensure that progress is being made toward accomplishing the goals of the National Fire Plan. Two years ago, the Forest Service and Interior began developing strategies to address these problems, and recently established a leadership entity—the Wildland Fire Leadership Council—that is intended to respond to the need for greater interagency coordination. Whether the strategy and the council will serve as the framework and mechanism to effectively deal with the threat of catastrophic wildland fire remains to be seen and will depend upon how well the National Fire Plan is implemented. To determine the effectiveness of this implementation effort, we continue to believe that a sound performance accountability framework is needed, one that provides for specific performance measures and data that can be used to assess implementation progress and problems. Both Interior and the Forest Service indicate in their performance plans their participation in developing the 2000 National Fire Plan and a 10-year Comprehensive Strategy under the plan. Furthermore, both agencies discuss current efforts under way to develop a joint Implementation Plan for the Comprehensive Strategy. Consistent with our recommendations, the implementation plan is reported to include cooperatively developed, long-term goals and performance measures for the wildland fire management program. In its performance report, the Forest Service detailed additional specific actions it collaborated on with Interior and other agencies related to wildland fire management, such as conducting an interagency review of the fire plan system. Regarding progress in achieving its fiscal year 2001 goals, Interior reported meeting only about half of its planned target of using fire and other treatments to restore natural ecological processes to 1.4 million acres. Although Interior’s report provided reasonable explanations for the unmet goals—difficulty in obtaining permits to carry out the treatments and shifting of resources from restoration to suppression of active fires—it did not discuss any specific strategies for overcoming these challenges in the future. The Forest Service reported meeting its goal of treating wildlands with high fire risks in national forests and grasslands. However, the Forest Service did not meet any of the individual indicators related to this goal. For example, the Forest Service treated only 1.4 million acres of its targeted 1.8 million hazardous fuel acres. The Forest Service provided explanations that appeared reasonable for some of its unmet targets. For example, unusual drought conditions combined with the added complexities and restrictions of treating hazardous fuels in the wildland urban interface contributed to the unmet hazardous fuels goal. The Forest Service did not provide any strategies for meeting the unmet targets in the future. In fiscal year 2003, Interior expects to treat 1.1 million acres to reduce hazards and restore ecosystem health compared to its goal of 1.4 million acres in 2001. 
In addition, Interior has added goals for wildland fire containment, providing assistance to rural fire departments, treating high-priority fuels projects, and bringing fire facilities up to approved standards. Interior’s strategies for achieving these goals are very broad and general and lack a clear link or rationale for how the strategies will contribute to improved performance. The Forest Service expects to treat 1.6 million acres to reduce hazardous fuels, slightly less than its 2001 target of 1.8 million acres, and assist over 7,000 communities and fire departments. The Forest Service did not include one of its targets for 2001—maximizing firefighting production capability. The Forest Service’s strategies for achieving its goals, although fairly general, appear to be reasonably linked to achieving each of the performance targets. The performance data reported by Interior and the Forest Service for wildfire management generally appear to be complete, reliable, and credible. The Forest Service reported that it will use the Budget Formulation and Execution System to report on performance. However, we have found that this system is more of a planning tool for ranking fuel reduction work at the local unit level and that another system, the National Fire Plan Operations and Reporting System, is being implemented by both the Forest Service and Interior to track outputs and measure accomplishments. Interior acknowledges that its bureaus may interpret the data they collect differently and that a common set of performance measures is still being developed between Interior and the Forest Service as they implement the National Fire Plan. We have recommended that the agencies develop a common set of outcome-based performance goals to better gauge whether agencies are achieving the objective of restoring ecosystem health. The Forest Service acknowledges possible data limitations and reported that it is currently taking steps, such as conducting field reviews, to ensure effective internal controls over the reporting of performance data. We have previously stated that the Results Act could provide OMB, agencies, and Congress with a structured framework for addressing crosscutting program efforts. In its guidance, OMB clearly encourages agencies to use their performance plans as a tool to communicate and coordinate with other agencies on programs being undertaken for common purposes to ensure that related performance goals and indicators are consistent and harmonious. We have also stated that the Results Act could be used as a vehicle to more clearly relate and address the contributions of alternative federal strategies. The President’s common measures initiative, by developing metrics that can be used to compare the performance of different agencies contributing to common objectives, appears to be a step in this direction. Some of the agencies we reviewed appear to be using their performance reports and plans as a vehicle to assist in collaborating and coordinating crosscutting program areas. Those that provided more detailed information on the nature of their coordination provided greater confidence that they are working in concert with other agencies to achieve common objectives. Other agencies do not appear to be using their plans and reports to the extent they could to describe their coordination efforts to Congress, citizens, and other agencies.
Furthermore, the quality of the performance information reported—how agencies explain unmet goals and discuss strategies for achieving performance goals in the future, and overall descriptions of the completeness, reliability, and credibility of the data reported—varied considerably. Although we found a number of agencies that provided detailed information about how they verify and validate individual measures, only 5 of the 10 agencies we reviewed for all the crosscutting areas commented on the overall quality and reliability of the data in their performance reports consistent with the requirements of the Reports Consolidation Act. Without such statements, performance information lacks the credibility needed to provide transparency in government operations so that Congress, program managers, and other decision makers can use the information. We sent drafts of this report to the respective agencies for comments. We received comments from EPA, FEMA, Commerce, and State. The agencies generally agreed with the accuracy of the information in the report. The comments we received were mostly technical, and we have incorporated them where appropriate. Regarding flood mitigation and insurance, FEMA commented that performance reports and plans are static documents that are over a year old and therefore may not reflect the progress FEMA has made since then. FEMA also stated that, although not reflected in its performance reports and plans, it coordinates its flood mitigation and insurance activities extensively and maintains and employs a number of interagency agreements related to the implementation of its programs. We acknowledge these limitations to our analysis in the scope and methodology section of this report. Regarding border control, State commented that, as summary documents, performance reports and plans provide a limited opportunity to fully describe their coordination and data validity and verification efforts. State indicated that it plans to include more appropriate measures of performance and performance data that are complete, reliable, and credible in its upcoming performance reports and plans. Regarding its unmet goal for the number of visas processed, State explained that this is not an accurate measure of program performance because it depends on the demand for visas, which is beyond the agency’s control. State plans to revise this measure to one that will more appropriately reflect program effectiveness. Regarding wetlands, EPA commented on a number of initiatives it has undertaken along with other federal agencies to address the accuracy and availability of data on the extent and health of wetlands. For example, EPA states that its Region V office (Chicago) is working with other federal and state agencies to develop an integrated, comprehensive, geographic information system-based wetlands mapping system for the Minnesota River Basin. Once completed, this new wetland inventory would provide a reliable estimate of total wetland acreage for the Minnesota River Basin, provide a test to update the older National Wetland Inventory data, and serve as a pilot project for identifying wetlands throughout the country using an innovative technology. We are sending copies of this report to the President, the Director of the Office of Management and Budget, the congressional leadership, other Members of Congress, and the heads of major departments and agencies. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you have any questions about this report, please contact me or Elizabeth Curda at (202) 512-6806 or daltonp@gao.gov. Major contributors to this report are listed in appendix V. In addition to the individual named above, the following individuals made significant contributions to this report: Steven J. Berke, Paul Bollea, Lisa M. Brown, Sharon L. Caudle, Amy M. Choi, Peter J. Del Toro, and Sherry L. McDonald.
GAO's work has repeatedly shown that mission fragmentation and program overlap are widespread in the federal government. Implementation of federal crosscutting programs is often characterized by numerous individual agency efforts that are implemented with little apparent regard for the presence and efforts of related activities. GAO has in the past offered possible approaches for managing crosscutting programs, and has stated that the Government Performance and Results Act could provide a framework for addressing crosscutting efforts. GAO was asked to examine the actions and plans agencies reported in addressing the crosscutting issues of border control, flood mitigation and insurance, wetlands, and wildland fire management. GAO reviewed the fiscal year 2001 performance reports and fiscal year 2003 performance plans for the major agencies involved in these issues. GAO did not independently verify or assess the information it obtained from agency performance reports and plans. On the basis of the reports and plans, GAO found that most agencies involved in the crosscutting issues discussed coordination with other agencies in their performance reports and plans, although the extent of coordination and level of detail provided varied considerably. The progress agencies reported in meeting their fiscal year 2001 performance goals also varied considerably. For example, wetlands was the only area in which all of the agencies GAO reviewed met or exceeded fiscal year 2001 goals. Some of the agencies that did not meet their goals provided reasonable explanations and/or strategies that appeared reasonably linked to meeting the goals in the future. The agencies GAO reviewed generally planned to pursue goals in fiscal year 2003 similar to those in 2001, although some agencies added new goals while others merged or dropped existing goals.
You are an expert at summarizing long articles. Proceed to summarize the following text: On November 19, 2002, pursuant to ATSA, TSA began a 2-year pilot program at 5 airports using private screening companies to screen passengers and checked baggage. In 2004, at the completion of the pilot program, and in accordance with ATSA, TSA established the SPP, whereby any airport authority, whether involved in the pilot or not, could request a transition from federal screeners to private, contracted screeners. All of the 5 pilot airports that applied were approved to continue as part of the SPP, and since its establishment, 21 additional airport applications have been accepted by the SPP. In March 2012, TSA revised the SPP application to reflect requirements of the FAA Modernization Act, enacted in February 2012. Among other provisions, the act provides that, not later than 120 days after the date of receipt of an SPP application submitted by an airport operator, the TSA Administrator must approve or deny the application. The TSA Administrator shall approve an application if approval would not (1) compromise security, (2) detrimentally affect the cost-efficiency of the screening of passengers or property at the airport, or (3) detrimentally affect the effectiveness of the screening of passengers or property at the airport. Within 60 days of a denial, TSA must provide the airport operator, as well as the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Homeland Security of the House of Representatives, a written report that sets forth the findings that served as the basis of the denial, the results of any cost or security analysis conducted in considering the application, and recommendations on how the airport operator can address the reasons for denial. All commercial airports are eligible to apply to the SPP. To apply, an airport operator must complete the SPP application and submit it to the SPP Program Management Office (PMO), as well as to the FSD for its airport, by mail, fax, or e-mail. Figure 1 illustrates the SPP application process. Although TSA provides all airports with the opportunity to apply for participation in the SPP, authority to approve or deny the application remains at the discretion of the TSA Administrator. According to TSA officials, in addition to the cost-efficiency and effectiveness considerations mandated by the FAA Modernization Act, many other factors are weighed in considering an airport’s application for SPP participation. For example, the potential impacts of any upcoming projects at the airport are considered. Once an airport is approved for SPP participation and a private screening contractor has been selected by TSA, the contract screening workforce assumes responsibility for screening passengers and their property and is required to adhere to the same security regulations, standard operating procedures, and other TSA security requirements followed by federal screeners at non-SPP airports. Since our December 2012 report, TSA has developed guidance to assist airport operators in completing their SPP applications, as we recommended. In December 2012, we reported that TSA had developed some resources to assist SPP applicants, but it had not provided guidance on its application and approval process to assist airports.
As originally implemented in 2004, the SPP application process required only that an interested airport operator submit an application stating its intention to opt out of federal screening as well as its reasons for wanting to do so. In 2011, TSA revised its SPP application to reflect the “clear and substantial advantage” standard announced by the Administrator in January 2011. Specifically, TSA requested that the applicant explain how private screening at the airport would provide a clear and substantial advantage to TSA’s security operations. At that time, TSA did not provide written guidance to airports to assist them in understanding what would constitute a “clear and substantial advantage to TSA security operations” or TSA’s basis for determining whether an airport had met that standard. As previously noted, in March 2012 TSA again revised the SPP application in accordance with provisions of the FAA Modernization Act, which became law in February 2012. Among other things, the revised application no longer included the “clear and substantial advantage” question, but instead included questions asking applicants to discuss how participating in the SPP would not compromise security at the airport and to identify potential areas where cost savings or efficiencies may be realized. In December 2012, we reported that while TSA provided general instructions for filling out the SPP application as well as responses to frequently asked questions (FAQ), the agency had not issued guidance to assist airports with completing the revised application nor explained to airports how it would evaluate applications given the changes brought about by the FAA Modernization Act. For example, neither the application instructions nor the FAQs addressed TSA’s SPP application evaluation process or its basis for determining whether an airport’s entry into the SPP would compromise security or affect cost-efficiency and effectiveness. Further, we found that airport operators who completed the applications generally stated that they faced difficulties in doing so and that additional guidance would have been helpful. For example, one operator stated that he needed cost information to help demonstrate that his airport’s participation in the SPP would not detrimentally affect the cost-efficiency of the screening of passengers or property at the airport and that he believed not presenting this information would be detrimental to his airport’s application. However, TSA officials at the time said that airports do not need to provide this information to TSA because, as part of the application evaluation process, TSA conducts a detailed cost analysis using historical cost data from SPP and non-SPP airports. The absence of cost and other information in an individual airport’s application, TSA officials noted, would not materially affect the TSA Administrator’s decision on an SPP application. Therefore, we reported in December 2012 that while TSA had approved all applications submitted since enactment of the FAA Modernization Act, it was difficult to determine how many more airports, if any, would have applied to the program had TSA provided application guidance and information to improve transparency of the SPP application process.
Specifically, we reported that in the absence of such application guidance and information, it may be difficult for airport officials to evaluate whether their airports are good candidates for the SPP or determine what criteria TSA uses to accept and approve airports’ SPP applications. Further, we concluded that clear guidance for applying to the SPP could improve the transparency of the application process and help ensure that the existing application process is implemented in a consistent and uniform manner. Thus, we recommended that TSA develop guidance that clearly (1) states the criteria and process that TSA is using to assess whether participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport, (2) states how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to the related application questions, and (3) provides specific examples of additional information airports should consider providing to TSA to help assess an airport’s suitability for the SPP. TSA concurred with our recommendation and has taken actions to address it. Specifically, TSA updated its SPP website in December 2012 by providing (1) general guidance to assist airports with completing the SPP application and (2) a description of the criteria and process the agency will use to assess airports’ applications to participate in the SPP. While the guidance states that TSA has no specific expectations of the information an airport could provide that may be pertinent to its application, it provides some examples of information TSA has found useful and that airports could consider providing to TSA to help assess their suitability for the program. Further, the guidance, in combination with the description of the SPP application evaluation process, outlines how TSA plans to analyze and use cost information regarding screening cost-efficiency and effectiveness. The guidance also states that providing cost information is optional and that not providing such information will not affect the application decision. We believe that these actions address the intent of our recommendation and should help improve transparency of the SPP application process as well as help airport officials determine whether their airports are good candidates for the SPP. In our December 2012 report, we analyzed screener performance data for four measures and found that there were differences in performance between SPP and non-SPP airports, and those differences could not be exclusively attributed to the use of either federal or private screeners. The four measures we selected to compare screener performance at SPP and non-SPP airports were Threat Image Projection (TIP) detection rates, recertification pass rates, Aviation Security Assessment Program (ASAP) test results, and Presence, Advisement, Communication, and Execution (PACE) evaluation results (see table 1). For each of these four measures, we compared the performance of each of the 16 airports then participating in the SPP with the average performance for each airport’s category (X, I, II, III, or IV), as well as the national performance averages for all airports for fiscal years 2009 through 2011.
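The mechanics of this type of comparison can be illustrated with a short sketch. The fragment below is a simplified, hypothetical illustration (not TSA’s or GAO’s actual analysis) of comparing each SPP airport’s rate on a single measure, such as the TIP detection rate, with its airport-category average and the national average; all airport names, categories, and rates shown are invented.

```python
# Hypothetical sketch: compare each SPP airport's rate on one measure against
# its airport-category average and the national average. Values are invented.

national_average = 0.82            # illustrative national rate, all airports

category_averages = {              # illustrative averages by airport category
    "I": 0.80,
    "II": 0.84,
}

spp_airports = [                   # illustrative SPP airports: (name, category, rate)
    ("Airport 1", "I", 0.83),
    ("Airport 2", "II", 0.81),
]

for name, category, rate in spp_airports:
    vs_category = rate - category_averages[category]
    vs_national = rate - national_average
    print(f"{name} (Category {category}): "
          f"{vs_category:+.2f} vs. category average, "
          f"{vs_national:+.2f} vs. national average")
```

In a comparison of this kind, a positive difference indicates performance above the corresponding average; as discussed below, such differences cannot be attributed solely to the type of screener at an airport.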
As we reported in December 2012, on the basis of our analyses, we found that, generally, certain SPP airports performed slightly above the airport category and national averages for some measures, while others performed slightly below. For example, SPP airports performed above their respective airport category averages for recertification pass rates in the majority of instances, while the majority of SPP airports that took PACE evaluations in 2011 performed below their airport category averages. For TIP detection rates, SPP airports performed above their respective airport category averages in about half of the instances. However, we also reported in December 2012 that the differences we observed in private and federal screener performance cannot be entirely attributed to the type of screeners at an airport, because, according to TSA officials and other subject matter experts, many factors, some of which cannot be controlled for, affect screener performance. These factors include, but are not limited to, checkpoint layout, airline schedules, seasonal changes in travel volume, and type of traveler. We also reported in December 2012 that TSA collects data on several other performance measures but, for various reasons, the data cannot be used to compare private and federal screener performance for the purposes of our review. For example, passenger wait time data could not be used because we found that TSA's policy for collecting wait times changed during the time period of our analyses and that these data were not collected in a consistent manner across all airports. We also considered reviewing human capital measures such as attrition, absenteeism, and injury rates, but did not analyze these data because TSA's Office of Human Capital does not collect these data for SPP airports. We reported that while the contractors collect and report this information to the SPP PMO, TSA does not validate the accuracy of the self-reported data nor does it require contractors to use the same human capital measures as TSA, and accordingly, differences may exist in how the metrics are defined and how the data are collected. Therefore, we found that TSA could not guarantee that a comparison of SPP and non-SPP airports on these human capital metrics would be an equal comparison. Since our December 2012 report, TSA has developed a mechanism to regularly monitor private versus federal screener performance, as we recommended. In December 2012, we reported that while TSA monitored screener performance at all airports, the agency did not monitor private screener performance separately from federal screener performance or conduct regular reviews comparing the performance of SPP and non-SPP airports. Beginning in April 2012, TSA introduced a new set of performance measures to assess screener performance at all airports (both SPP and non-SPP) in its Office of Security Operations Executive Scorecard (the Scorecard). Officials told us at the time of our December 2012 review that they provided the Scorecard to FSDs every 2 weeks to assist the FSDs with tracking performance against stated goals and with determining how performance of the airports under their jurisdiction compared with national averages. According to TSA, the 10 measures used in the Scorecard were selected based on input from FSDs and regional directors on the performance measures that most adequately reflected screener and airport performance. 
Performance measures in the Scorecard included the TIP detection rate, and the number of negative and positive customer contacts made to the TSA Contact Center through e-mails or phone calls per 100,000 passengers screened, among others. We also reported in December 2012 that TSA had conducted or commissioned prior reports comparing the cost and performance of SPP and non-SPP airports. For example, in 2004 and 2007, TSA commissioned reports prepared by private consultants, while in 2008 the agency issued its own report comparing the performance of SPP and non-SPP airports. Generally, these reports found that SPP airports performed at a level equal to or better than non-SPP airports. However, TSA officials stated at the time that they did not plan to conduct similar analyses in the future, and instead, they were using across-the-board mechanisms of both private and federal screeners, such as the Scorecard, to assess screener performance across all commercial airports. In addition to using the Scorecard, we found that TSA conducted monthly contractor performance management reviews (PMR) at each SPP airport to assess the contractor's performance against the standards set in each SPP contract. The PMRs included 10 performance measures, including some of the same measures included in the Scorecard, such as TIP detection rates and recertification pass rates, for which TSA establishes acceptable quality levels of performance. Failure to meet the acceptable quality levels of performance could result in corrective actions or termination of the contract. However, as we reported in December 2012, the Scorecard and PMR did not provide a complete picture of screener performance at SPP airports because, while both mechanisms provided a snapshot of private screener performance at each SPP airport, this information was not summarized for the SPP as a whole or across years, which made it difficult to identify changes in performance. Further, neither the Scorecard nor the PMR provided information on performance in prior years or controlled for variables that TSA officials explained to us were important when comparing private and federal screener performance, such as the type of X-ray machine used for TIP detection rates. We concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory requirement that TSA enter into a contract with a private screening company only if the Administrator determines and certifies to Congress that the level of screening services and protection provided at an airport under a contract will be equal to or greater than the level that would be provided at the airport by federal government personnel. Therefore, we recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance, which would better position the agency to know whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. TSA concurred with our recommendation, and has taken actions to address it. Specifically, in January 2013, TSA issued its first SPP Annual Report. The report highlights the accomplishments of the SPP during fiscal year 2012 and provides an overview and discussion of private versus federal screener cost and performance. 
The report also describes the criteria TSA used to select certain performance measures and reasons why other measures were not selected for its comparison of private and federal screener performance. The report compares the performance of SPP airports with the average performance of airports in their respective category, as well as the average performance for all airports, for three performance measures: TIP detection rates, recertification pass rates, and PACE evaluation results. Further, in September 2013, the TSA Assistant Administrator for Security Operations signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the SPP PMO must annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. We believe that these actions address the intent of our recommendation and should better position TSA to determine whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. Further, these actions could also assist TSA in identifying performance changes that could lead to improvements in the program and inform decision making regarding potential expansion of the SPP. Chairman Mica, Ranking Member Connolly, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or GroverJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Glenn Davis (Assistant Director), Stanley Kostyla, Brendan Kretzschmar, Thomas Lombardi, Erin O’Brien, and Jessica Orr. Key contributors for the previous work that this testimony is based on are listed in the product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
TSA maintains a federal workforce to screen passengers and baggage at the majority of the nation's commercial airports, but it also oversees a workforce of private screeners at airports who participate in the SPP. The SPP allows commercial airports to apply to have screening performed by private screeners, who are to provide a level of screening services and protection that equals or exceeds that of federal screeners. In recent years, TSA's SPP has evolved to incorporate changes in policy and federal law, prompting enhanced interest in measuring screener performance. This testimony addresses the extent to which TSA (1) has provided guidance to airport operators for the SPP application process and (2) assesses and monitors the performance of private and federal screeners. This statement is based on a report GAO issued in December 2012 and selected updates conducted in January 2014. To conduct the selected updates, GAO reviewed documentation, such as the SPP Annual Report issued in January 2013, and interviewed agency officials on the status of implementing GAO's recommendations. Since GAO reported on this issue in December 2012, the Transportation Security Administration (TSA) has developed application guidance for airport operators applying to the Screening Partnership Program (SPP). In December 2012, GAO reported that TSA had not provided guidance to airport operators on its application and approval process, which had been revised to reflect requirements in the Federal Aviation Administration Modernization and Reform Act of 2012. Further, airport operators GAO interviewed at the time generally stated that they faced difficulties completing the revised application, such as how to obtain cost information. Therefore, GAO recommended that TSA develop application guidance, and TSA concurred. To address GAO's recommendation, TSA updated its SPP website in December 2012 by providing general application guidance and a description of the criteria and process the agency uses to assess airports' SPP applications. The guidance provides examples of information that airports could consider providing to TSA to help assess their suitability for the program and also outlines how the agency will analyze cost information. The new guidance addresses the intent of GAO's recommendation and should help improve transparency of the SPP application process as well as help airport operators determine whether their airports are good candidates for the SPP. TSA has also developed a mechanism to regularly monitor private versus federal screener performance. In December 2012, GAO found differences in performance between SPP and non-SPP airports based on its analysis of screener performance data. However, while TSA had conducted or commissioned prior reports comparing the performance of SPP and non-SPP airports, TSA officials stated at the time that they did not plan to conduct similar analyses in the future, and instead stated that they were using across-the-board mechanisms to assess screener performance across all commercial airports. In December 2012, GAO found that these across-the-board mechanisms did not summarize information for the SPP as a whole or across years, which made it difficult to identify changes in private screener performance. 
GAO concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory provision authorizing TSA to enter into contracts with private screening companies and recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance. TSA concurred with the recommendation. To address GAO's recommendation, in January 2013, TSA issued its first SPP Annual Report, which provides an analysis of private versus federal screener performance. Further, in September 2013, a TSA Assistant Administrator signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the report annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. These actions address the intent of GAO's recommendation and could assist TSA in identifying performance changes that could lead to improvements in the program. GAO is making no new recommendations in this statement.
You are an expert at summarizing long articles. Proceed to summarize the following text: In September 2003, we reported that the Army and Air Force did not comply with DOD’s force health protection and surveillance requirements for many servicemembers deploying in support of OEF in Central Asia and OJG in Kosovo at the installations we visited. Specifically, our review disclosed problems with the Army and Air Force’s implementation of DOD’s force health protection and surveillance requirements in the following areas: Deployment health assessments. Significant percentages of Army and Air Force servicemembers were missing one or both of their pre- and post-deployment health assessments and, when health assessments were conducted, as many as 45 percent of them were not done within the required time frames. Immunizations and other pre-deployment requirements. Based on the documentation we reviewed, as many as 46 percent of servicemembers in our samples were missing one of the pre-deployment immunizations required, and as many as 40 percent were missing a current tuberculosis screening at the time of their deployment. Up to 29 percent of the servicemembers in our samples had blood samples in the repository older than the required limit of 1 year at the time of deployment. Completeness of medical records and centralized data collection. Servicemembers’ permanent medical records at the Army and Air Force installations we visited did not always include documentation of the completed health assessments that we found at AMSA and at the U.S. Special Operations Command. In one sample, 100 percent of the pre-deployment health assessments were not documented in the servicemember medical records that we reviewed. Furthermore, our review disclosed that the AMSA database was lacking documentation of many health assessments and immunizations that we found in the servicemembers’ medical records at the installations visited. We also wrote in our 2003 report that DOD did not have oversight of departmentwide efforts to comply with health surveillance requirements. There was no effective quality assurance program at the Office of the Assistant Secretary of Defense for Health Affairs or at the Offices of the Surgeons’ General of the Army or Air Force that helped ensure compliance with force health protection and surveillance policies. We believed that the lack of such a system was a major cause of the high rate of noncompliance we found at the installations we visited, and thus recommended that the department establish an effective quality assurance program to ensure that the military services comply with the force health protection and surveillance requirements for all servicemembers. The department concurred with our recommendation. “(a) SYSTEM REQUIRED—The Secretary of Defense shall establish a system to assess the medical condition of members of the armed forces (including members of the reserve components) who are deployed outside the United States or its territories or possessions as part of a contingency operation (including a humanitarian operation, peacekeeping operation, or similar operation) or combat operation. 
“(b) ELEMENTS OF SYSTEM—The system described in subsection (a) shall include the use of predeployment medical examinations and postdeployment medical examinations (including an assessment of mental health and the drawing of blood samples) to accurately record the medical condition of members before their deployment and any changes in their medical condition during the course of their deployment. The postdeployment examination shall be conducted when the member is redeployed or otherwise leaves an area in which the system is in operation (or as soon as possible thereafter). “(c) RECORDKEEPING—The results of all medical examinations conducted under the system, records of all health care services (including immunizations) received by members described in subsection (a) in anticipation of their deployment or during the course of their deployment, and records of events occurring in the deployment area that may affect the health of such members shall be retained and maintained in a centralized location to improve future access to the records. “(d) QUALITY ASSURANCE—The Secretary of Defense shall establish a quality assurance program to evaluate the success of the system in ensuring that members described in subsection (a) receive predeployment medical examinations and postdeployment medical examinations and that the recordkeeping requirements with respect to the system are met.” As set forth above, these provisions require the use of pre-deployment and post-deployment medical examinations to accurately record the medical condition of servicemembers before deployment and any changes during their deployment. In a June 30, 2003, correspondence with GAO, the Assistant Secretary of Defense for Health Affairs stated that “it would be logistically impossible to conduct a complete physical examination on all personnel immediately prior to deployment and still deploy them in a timely manner.” Therefore, DOD required both pre- and post-deployment health assessments for servicemembers who deploy for 30 or more continuous days to a land-based location outside the United States without a permanent U.S. military treatment facility. Both assessments use a questionnaire designed to help military healthcare providers in identifying health problems and providing needed medical care. The pre-deployment health assessment is generally administered at the home station before deployment, and the post-deployment health assessment is completed either in theater before redeployment to the servicemember’s home unit or shortly upon redeployment. As a component of medical examinations, the statute quoted above also requires that blood samples be drawn before and after a servicemember’s deployment. DOD Instruction 6490.3, August 7, 1997, requires that a pre-deployment blood sample be obtained within 12 months of the servicemember’s deployment. However, it requires the blood samples be drawn upon return from deployment only when directed by the Assistant Secretary of Defense for Health Affairs. According to DOD, the implementation of this requirement was based on its judgment that the Human Immunodeficiency Virus serum sampling taken independent of deployment actions is sufficient to meet both pre- and post-deployment health needs, except that more timely post-deployment sampling may be directed when based on a recognized health threat or exposure. Prior to April 2003, DOD did not require a post-deployment blood sample for servicemembers supporting the OEF and OJG deployments. 
In April 2003, DOD revised its health surveillance policy for blood samples and post-deployment health assessments. Effective May 22, 2003, the services were required to draw a blood sample from each redeploying servicemember no later than 30 days after arrival at a demobilization site or home station. According to DOD, this requirement for post-deployment blood samples was established in response to an assessment of health threats and national interests associated with current deployments. The department also revised its policy guidance for enhanced post-deployment health assessments to gather more information from deployed servicemembers about events that occurred during a deployment. More specifically, the revised policy requires that a trained health care provider conduct a face-to-face health assessment with each returning servicemember to ascertain (1) the individual’s responses to the health assessment questions on the post-deployment health assessment form; (2) the presence of any mental health or psychosocial issues commonly associated with deployments; (3) any special medications taken during the deployment; and (4) concerns about possible environmental or occupational exposures. The overall record of the military services in meeting force health protection and surveillance system requirements for OIF was mixed and varied by service, by installation visited, and by specific policy requirement; however, our data shows much better compliance with these requirements in the Army and Air Force installations we reviewed compared to the installations in our earlier review of OEF/OJG. Of the installations reviewed for this report, the Marine Corps generally had lower levels of compliance than the other services. None of the services fully complied with all of the force health protection and surveillance system requirements, which include completing pre- and post-deployment health assessments, receipt of immunizations, and meeting pre-deployment requirements related to tuberculosis screening and pre and post-deployment blood samples. Also, the services did not fully comply with requirements that servicemembers’ permanent medical records include required health-related information, and that DOD’s centralized database includes documentation of servicemember health-related information. Servicemembers in our review at the Army and Air Force installations were generally missing small percentages of pre-deployment health assessments, as shown in figure 1. In contrast, pre-deployment health assessments were missing for an estimated 63 percent of the servicemembers at one Marine Corps installation and for 27 percent at the other Marine Corps installation visited. Similarly, the Navy installation we visited was missing pre-deployment health assessments for about 24 percent of the servicemembers; however, we note that the pre-deployment health assessments reviewed for Navy servicemembers were completed prior to June 1, 2003, and may not reflect improvements arising from increased emphasis following our prior review of the Army and Air Force’s compliance for OEF/OJG. At three Army installations we visited, we also analyzed the extent to which pre-deployment health assessments were completed for those servicemembers who re-deployed back to their home unit after June 1, 2003. Servicemembers associated with these re-deployment samples deployed in support of OIF prior to June 1, 2003. 
For two of these Army installations—Fort Eustis and Fort Campbell—we estimate that less than 1 percent of the servicemembers were missing pre-deployment health assessments. However, approximately 39 percent of the servicemembers that redeployed back to Fort Lewis on or after June 1, 2003, were missing their pre-deployment health assessments. Post-deployment health assessments were missing for small percentages of servicemembers, except at one of the Marine Corps installations we visited, as shown in figure 2. Although the Army provides for waivers for longer time frames, DOD policy requires that servicemembers complete a pre-deployment health assessment form within 30 days of their deployment and a post-deployment health assessment form within 5 days upon redeployment back to their home station. For consistency and comparability between services, our analysis uses the DOD policy for reporting results. These time frames were established to allow time to identify and resolve any health concerns or problems that may affect the ability of the servicemember to deploy, and to promptly identify and address any health concerns or problems that may have arisen during the servicemember's deployment. For servicemembers that had completed pre-deployment health assessments, we found that many assessments were not completed on time in accordance with requirements. More specifically, we estimate that pre-deployment health assessments were not completed on time for: 47 percent of the active duty servicemembers at Fort Lewis; 41 percent of the active duty servicemembers and 96 percent of the Army National Guard unit at Fort Campbell; and 43 percent of the servicemembers at Camp Lejeune and 29 percent at Camp Pendleton. For the most part, small percentages—ranging from 0 to 5 percent—of the post-deployment health assessments were not completed on time at the installations visited. The exception was at Fort Lewis, where we found that about 21 percent of post-deployment health assessments for servicemembers were not completed on time. DOD policy also requires that pre-deployment and post-deployment health assessments be reviewed immediately by a health care provider to identify any medical care needed by the servicemember. Except for servicemembers at one of the two Marine Corps installations visited, only small percentages of the pre- and post-deployment health assessments, ranging from 0 to 6 percent, were not reviewed by a health care provider. At Camp Pendleton, we found that a health care provider did not review 33 percent of the pre-deployment health assessments and 21 percent of the post-deployment health assessments for its servicemembers. Noncompliance with the requirements for pre-deployment health assessments may result in servicemembers with existing health problems or concerns being deployed with unaddressed health problems. Also, failure to complete post-deployment health assessments may risk a delay in obtaining appropriate medical follow-up attention for a health problem or concern that may have arisen during or following the deployment. Based on our samples, the services did not fully meet immunization and other health requirements for OIF deployments, although all servicemembers in our sample had received at least one anthrax immunization before they returned from the deployment as required. 
Almost all of the servicemembers in our samples had a pre-deployment blood sample in the DOD Serum Repository but frequently the blood sample was older than the one-year requirement. The services’ record in regard to post-deployment blood sample draws was mixed. The U.S. Central Command required the following pre-deployment immunizations for all servicemembers who deployed to Southwest Asia in support of OIF: hepatitis A (two-shot series); measles, mumps, and rubella; polio; tetanus/diphtheria within the last 10 years; typhoid within the last 5 years; and influenza within the last 12 months. Based on the documentation we reviewed, the estimated percent of servicemembers receiving all of the required pre-deployment immunizations ranged from 52 percent to 98 percent at the installations we visited (see fig. 3). The percent of servicemembers missing only one of the pre-deployment immunizations required for the OIF deployment ranged from 2 percent to 43 percent at the installations we visited. Furthermore, the percent of servicemembers missing 2 or more of the required immunizations ranged from 0 percent to 11 percent. Figure 4 indicates that 3 to about 64 percent of the servicemembers at the installations visited were missing a current tuberculosis screening at the time of their deployment. A tuberculosis screening is deemed “current” if it occurred within 1 year prior to deployment. Specifically, the Army, Navy, and Marine Corps required servicemembers deploying to Southwest Asia in support of OIF to be screened for tuberculosis within 12 months of deployment. The Air Force requirement for tuberculosis screening depends on the servicemember’s occupational specialty; therefore we did not examine tuberculosis screening for servicemembers in our sample at Moody Air Force Base due to the difficulty of determining occupational specialty for each servicemember. Although not required as pre-deployment immunizations, U.S. Central Command policies require that servicemembers deployed to Southwest Asia in support of OIF receive a smallpox immunization and at least one anthrax immunization either before deployment or while in theater. For the servicemembers in our samples at the installations visited, we found that all of the servicemembers received at least one anthrax immunization in accordance with the requirement. Only small percentages of servicemembers at two of the three Army installations, the Air Force installation, and the Navy installation visited did not receive the required smallpox immunization. However, an estimated 18 percent of the servicemembers at Fort Lewis, 8 percent at Camp Lejeune, and 27 percent at Camp Pendleton did not receive the required smallpox immunization. U.S. Central Command policies also require that deploying servicemembers have a blood sample in the DOD Serum Repository not older than 12 months prior to deployment. Almost all of the servicemembers in our review had a pre-deployment blood sample in the DOD Serum Repository, but frequently the blood samples were older than the 1-year requirement. As shown in table 1 below, 14 percent of servicemembers at Camp Pendleton had blood samples in the repository older than 1 year. Effective May 22, 2003, the services were required to draw a post-deployment blood sample from each re-deploying servicemember no later than 30 days after arrival at a demobilization site or home station. Only small percentages of the servicemembers at the Army and Air Force installations visited did not have a post-deployment blood sample drawn. 
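The time-window requirements discussed above lend themselves to a simple record check. The following Python sketch applies the windows cited in this report (pre-deployment health assessment within 30 days of deployment, post-deployment health assessment within 5 days of return, tuberculosis screening and influenza within 12 months, typhoid within 5 years, tetanus/diphtheria within 10 years, a serum sample not older than 1 year, and a post-deployment blood draw within 30 days of return) to a single hypothetical record; the field names and dates are illustrative assumptions rather than DOD data structures, and year lengths are approximated at 365 days:

    from datetime import date, timedelta

    # Maximum age of each item at the deployment date (a year is approximated as 365 days).
    WINDOWS_BEFORE_DEPLOYMENT = {
        "pre_deployment_assessment": timedelta(days=30),
        "tuberculosis_screening":    timedelta(days=365),
        "influenza_immunization":    timedelta(days=365),
        "typhoid_immunization":      timedelta(days=5 * 365),
        "tetanus_diphtheria":        timedelta(days=10 * 365),
        "serum_sample":              timedelta(days=365),
    }
    # Maximum delay of each item after the return date.
    WINDOWS_AFTER_RETURN = {
        "post_deployment_assessment": timedelta(days=5),
        "post_deployment_blood_draw": timedelta(days=30),
    }

    def check_record(record, deploy_date, return_date):
        """Return the items that are missing or fall outside their required window."""
        findings = []
        for item, max_age in WINDOWS_BEFORE_DEPLOYMENT.items():
            done = record.get(item)
            if done is None:
                findings.append(f"{item}: missing")
            elif deploy_date - done > max_age:
                findings.append(f"{item}: outside window (dated {done})")
        for item, max_delay in WINDOWS_AFTER_RETURN.items():
            done = record.get(item)
            if done is None:
                findings.append(f"{item}: missing")
            elif done - return_date > max_delay:
                findings.append(f"{item}: completed later than required (dated {done})")
        return findings

    # Hypothetical servicemember record; omitted items are flagged as missing.
    record = {
        "pre_deployment_assessment":  date(2003, 2, 10),
        "tuberculosis_screening":     date(2001, 6, 15),   # older than 1 year
        "influenza_immunization":     date(2002, 11, 3),
        "serum_sample":               date(2002, 9, 20),
        "post_deployment_assessment": date(2003, 12, 2),
    }
    print(check_record(record, deploy_date=date(2003, 3, 1), return_date=date(2003, 12, 1)))

A check of this kind only flags items for follow-up; as noted above, the Army provides for waivers of some time frames, so a flagged item would still require review by medical personnel.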
The Navy and Marine Corps installations visited had percentages of servicemembers missing post-deployment blood samples ranging from 7 to 19 percent, and the post-deployment blood samples that were available were frequently drawn later than required, as shown in table 2. DOD policy requires that the original completed pre-deployment and post-deployment health assessment forms be placed in the servicemember’s permanent medical record and that a copy be forwarded to AMSA. Also, the military services require that all immunizations be documented in the servicemember’s medical record. Figure 5 shows that small percentages of the completed health assessments we found at AMSA for servicemembers in our samples were not documented in the servicemember’s permanent medical record, ranging from 0 to 14 percent for pre-deployment health assessments and from 0 percent to 20 percent for post-deployment health assessments. Almost all of the immunizations we found at AMSA for servicemembers in our samples were documented in the servicemember’s medical record. Service policies also require documentation in the servicemember’s permanent medical records of all visits to in-theater medical facilities. At six of the seven installations we visited, we sampled and examined whether selected in-theater visits to medical providers—such as battalion aid stations for the Army and Marine Corps and expeditionary medical support for the Air Force—were documented in the servicemember’s permanent medical record. Both the Air Force and Navy installations used automated systems for recording servicemember in-theater visits to medical facilities. While in-theater visits were documented in these automated systems, we found that 20 of the 40 Air Force in-theater visits we examined at Moody Air Force Base and 6 of the 60 Navy in-theater visits we examined at the Naval Construction Battalion Center were not also documented in the servicemembers’ permanent medical records. In contrast, the Army and Marine Corps installations used manual patient sign-in logs for servicemembers’ visits to in-theater medical providers and relied exclusively on paper documentation of the in-theater visits in the servicemember’s permanent medical record. The results of our review are summarized in table 3. Army and Marine Corps representatives associated with the battalion aid stations we examined commented that the aid stations were frequently moving around the theater, increasing the likelihood that paper documentation of the visits might get lost and that such visits might not always be documented because of the hostile environment. The lack of complete and accurate medical records documenting all medical care for the individual servicemember complicates the servicemember’s post-deployment medical care. For example, accurate medical records are essential for the delivery of high-quality medical care and important for epidemiological analysis following deployments. According to DOD health officials, the lack of complete and accurate medical records complicated the diagnosis and treatment of servicemembers who experienced post-deployment health problems that they attributed to their military service in the Persian Gulf in 1990-91. DOD’s Theater Medical Information Program (TMIP) has the capability to electronically record and store in-theater patient medical encounter data. However, the Iraq war has delayed implementation of the program. 
At the request of the services, the operational test and evaluation for TMIP has been delayed until the second quarter of fiscal year 2005. In addition to the above requirements, Public Law 105-85, 10 U.S.C. 1074f, requires the Secretary of Defense to retain and maintain health-related records in a centralized location for servicemembers who are deployed. This includes records for all medical examinations conducted to ascertain the medical condition of servicemembers before deployment and any changes during their deployment, all health care services (including immunizations) received in anticipation of deployment or during the deployment, and events occurring in the deployment area that may affect the health of servicemembers. A February 2002 Joint Staff memorandum requires the services to forward a copy of the completed pre-deployment and post-deployment health assessments to AMSA for centralized retention. Figure 6 shows the estimated percentage of pre- and post-deployment health assessments in servicemembers' medical records that were not available in a centralized database at AMSA. Our samples of servicemembers at the installations visited show wide variation by installation regarding pre-deployment health assessments missing from the centralized database, ranging from zero at Fort Lewis to all of the assessments at Camp Lejeune. Post-deployment health assessments were missing for small percentages of servicemembers at the installations visited, except at the Marine Corps installations. More specifically, about 26 percent of the post-deployment health assessments at Camp Lejeune and 24 percent at Camp Pendleton were missing from the centralized database. Of the immunizations we found in the servicemembers' medical records, the percentages missing from the centralized database ranged from 3 to 44 percent for the servicemembers in our samples. DOD officials believe that automation of deployment health assessment forms and recording of servicemember immunizations will improve the completeness of deployment data in the AMSA centralized database, and DOD has ongoing initiatives to accomplish these goals. DOD is currently implementing a comprehensive electronic medical records system worldwide, known as the Composite Health Care System II, which includes pre- and post-deployment health assessment forms and the capability to electronically record immunizations given to servicemembers. Also, the Assistant Secretary of Defense for Health Affairs has established a Deployment Health Task Force whose focus includes improving the electronic capture of deployment health assessments. According to DOD, about 40 percent of the Army's pre-deployment health assessments and 50 percent of the post-deployment health assessments sent to AMSA since June 1, 2003, were submitted electronically. DOD officials believe that the electronic automation of the deployment health-related information will lessen the burden on installations of forwarding paper copies and reduce the likelihood of information being lost in transit. Although the installations we visited were limited in number and, with the exception of Fort Campbell, different from those in our previous review, Army and Air Force compliance with force health protection and surveillance policies for active-duty servicemembers in OIF appears to be better than at the installations we reviewed for OJG and OEF. 
To provide context, we compared overall data from the Army and Air Force active duty servicemembers' medical records reviewed for OEF/OJG with those reviewed for OIF, aggregating data from all records examined in the two reviews, and determined that: Lower percentages of Army and Air Force servicemembers were missing pre- and post-deployment health assessments for OIF. Higher percentages of Army and Air Force servicemembers received required pre-deployment immunizations for OIF. Lower percentages of deployment health-related documentation were missing in the servicemembers' permanent medical records and at DOD's centralized database for OIF. Because our previous report on compliance with requirements for OEF and OJG focused only on the Army and Air Force, we were unable to make comparisons for the Navy and Marine Corps. Our data indicate that Army and Air Force compliance with requirements for completion of pre- and post-deployment health assessments for servicemembers for OIF appears to be much better than compliance for OEF and OJG for the installations examined in each review. In some cases, the services were in full compliance. As before, we aggregated data from all records examined in the two reviews and determined, among the Army and Air Force active duty servicemembers we reviewed for OIF compared with those reviewed for OEF/OJG, the following: The percentage of Army servicemembers missing pre-deployment health assessments averaged 14 percent for OIF, contrasted with 45 percent for OEF/OJG. The percentage of Air Force servicemembers missing pre-deployment health assessments was 8 percent for OIF, contrasted with an average of 50 percent for OEF/OJG. The percentage of Army servicemembers missing post-deployment health assessments was 0 percent for OIF, contrasted with an average of 29 percent for OEF/OJG. The percentage of Air Force servicemembers missing post-deployment health assessments was 4 percent for OIF, contrasted with an average of 62 percent for OEF/OJG. Based on our samples, the Army and the Air Force had better compliance with pre-deployment immunization requirements for OIF as compared to OEF and OJG. The aggregate data from each of our OIF samples indicate that an average of 68 percent of Army active duty servicemembers received all of the required immunizations before deploying for OIF, contrasted with an average of only 35 percent for OEF and OJG. Similarly, 98 percent of Air Force active duty servicemembers received all of the required immunizations before deploying for OIF, contrasted with an average of 71 percent for OEF and OJG. The percentages of Army and Air Force active duty servicemembers missing two or more immunizations also appear to be markedly lower for OIF, as illustrated in table 4. Our data indicate that the Army and Air Force's compliance with requirements for completeness of servicemember medical records and of DOD's centralized database at AMSA for OIF appears to be significantly better than compliance for OEF and OJG. Lower overall percentages of deployment health-related documentation were missing in servicemembers' permanent medical records and at AMSA. We aggregated the data from each of our samples and depicted the results in tables 5 and 6. The data appear to indicate that, for active duty servicemembers, the Army and the Air Force have made significant improvements in documenting servicemember medical records. 
These data also appear to indicate that, overall, both services have made encouraging improvements in retaining health-related records in DOD's centralized database at AMSA, although not quite to the extent exhibited in their efforts to document servicemember medical records. In response to congressional mandates and a GAO recommendation, DOD established a deployment health quality assurance program in January 2004 to ensure compliance with force health protection and surveillance requirements, and implementation of the program is ongoing. DOD officials believe that their quality assurance program has improved the services' compliance with requirements. However, we did not evaluate the effectiveness of DOD's deployment health quality assurance program because it had been in place for only a relatively short time. Section 765 of Public Law 105-85 (10 U.S.C. 1074f) requires the Secretary of Defense to establish a quality assurance program to evaluate the success of DOD's system for ensuring that members receive pre-deployment medical examinations and post-deployment medical examinations and that recordkeeping requirements are met. In May 2003, the House Committee on Armed Services directed the Secretary of Defense to take measures to improve oversight and compliance with force health protection and surveillance requirements. Specifically, in its report accompanying the Fiscal Year 2004 National Defense Authorization Act, the Committee directed the Secretary of Defense to establish a quality control program to assess implementation of the force health protection and surveillance program. In January 2004, the Assistant Secretary of Defense for Health Affairs issued policy and program guidance for the DOD Deployment Health Quality Assurance Program. DOD's quality assurance program requires: Periodic reporting on pre- and post-deployment health assessments. AMSA is required to provide (at a minimum) monthly reports to the Deployment Health Support Directorate (Directorate) on deployment health data. AMSA provides the Directorate and the services with weekly reports on post-deployment health assessments and publishes bimonthly updates on pre- and post-deployment health assessments. Periodic reporting on service-specific deployment health quality assurance programs. The services are required to provide (at a minimum) quarterly reports to the Directorate on the status and findings of their respective deployment health quality assurance programs. Each service has provided the required quarterly reports on its respective quality assurance programs. Periodic visits to military installations to assess deployment health programs. The program requires joint visits by representatives from the Directorate and from service medical departments to military installations for the purpose of validating the service's deployment health quality assurance reporting. As of September 2004, Directorate officials had accompanied service medical personnel to an Army, Air Force, and Marine Corps installation for medical records review. Directorate officials envision continuing quarterly installation visits in 2005, with possible expansion to include reserve and guard sites. The services are at different stages of developing their deployment quality assurance programs. Following the issuance of our September 2003 report and subsequent testimony before the House Committee on Veterans' Affairs in October 2003, the Surgeon General of the Army directed that the U.S. 
Army Center for Health Promotion and Preventive Medicine (the Center) lead reviews of servicemember medical records at selected Army installations to assess compliance with force health protection and surveillance requirements. As of September 2004, the Center had conducted reviews at 10 Army installations. Meanwhile, the Center developed the Army's deployment health quality assurance program, which closely parallels DOD's quality assurance program. According to a Center official, this quality assurance program is currently under review by the Surgeon General. In the Air Force, public health officers at each installation report monthly compliance rates with force health protection and surveillance requirements to the office of the Surgeon General of the Air Force. These data are monitored by officials in the office of the Air Force Surgeon General for trends and for identification of potential problems. Air Force Surgeon General officials told us that, as of May 2004, the Air Force Inspector General's periodic health services inspections—conducted every 18 to 36 months at each Air Force installation—include an examination of compliance with deployment health surveillance requirements. Also, the Air Force Audit Agency is planning to examine in 2004 whether AMSA received all of the required deployment health assessments and blood samples for servicemembers who deployed from several Air Force installations. According to an official in the office of the Surgeon General of the Navy, no decisions have been reached regarding whether periodic audits of servicemember medical records will be conducted to assess compliance with DOD requirements. DOD's April 2003 enhanced post-deployment health assessment program expanded the requirement for post-deployment health assessments and post-deployment blood samples to all sea-based personnel in theater supporting combat operations for Operations Iraqi Freedom and Enduring Freedom. Navy type commanders (e.g., surface ships, submarine, and aircraft squadrons) are responsible for implementing the program. The Marine Corps has developed its deployment health assessment quality assurance program that is now under review by the Commandant of the Marine Corps. It reemphasizes the requirements for deployment health assessments and blood samples and requires each unit to track and report the status of meeting these requirements for its servicemembers. At the installations we visited, we observed that the Army and Air Force had centralized quality assurance processes in place that extensively involved installation medical personnel examining whether DOD's force health protection and surveillance requirements were met for deploying/redeploying servicemembers. In contrast, we observed that the Marine Corps installations did not have well-defined quality assurance processes for ensuring that the requirements were met for servicemembers. The Navy installation visited did not have a formal quality assurance program; compliance depended largely on the initiative of the assigned medical officer. We believe that the lack of effective quality assurance processes at the Marine Corps installations contributed to lower rates of compliance with force health protection and surveillance requirements. In our September 2003 report, we recommended that DOD establish an effective quality assurance program and we continue to believe that implementation of such a program could help the Marine Corps improve its compliance with force health protection and surveillance requirements. 
In commenting on a draft of this report, the Assistant Secretary of Defense for Health Affairs concurred with the findings of the report. He suggested that the word "Appears" be removed from the title of the report to more accurately reflect improvements in compliance with force health protection and surveillance requirements for OIF. We do not agree with this suggestion because the installations we visited for OIF were limited in number and, with the exception of Fort Campbell, different from those in our previous review for OEF/OJG. As pointed out in the report, the data for OIF were limited in some instances to only one sample at one installation. We believe that it is important for the reader to recognize the limitations of this comparison. The Assistant Secretary also commented that the department is aware of variations in progress among the services and is committed to demonstrating full compliance through the continued application of aggressive quality assurance measures. He further commented that the department is focusing on and supporting recent policy efforts by the Marine Corps to improve its deployment health quality assurance program. He commented that plans have been initiated to conduct a joint quality assurance visit to Camp Pendleton, Calif., in early 2005, following the implementation of an improved quality assurance program and the return of significant numbers of Marines currently deployed in support of OIF. The department's written comments are incorporated in their entirety in appendix II. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; and the Commandant of the Marine Corps. We will also make copies available to others upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5559 or Clifton Spruill at (202) 512-4531. Key contributors to this report are listed in appendix III. To meet our objectives, we interviewed responsible officials and reviewed pertinent documents, reports, and information related to force health protection and deployment health surveillance requirements obtained from officials at the Office of the Assistant Secretary of Defense for Health Affairs; the Deployment Health Support Directorate; the National Guard Bureau; and the Offices of the Surgeons General for the Army, Air Force, and Navy Headquarters in the Washington, D.C., area. We also performed additional work at AMSA and the U.S. Central Command. To determine the extent to which the military services were meeting the Department of Defense's (DOD) force health protection and surveillance requirements for servicemembers deploying in support of Operation Iraqi Freedom (OIF), we identified DOD's and each service's overall deployment health surveillance policies. We also obtained the specific force health protection and surveillance requirements that the U.S. Central Command required for all servicemembers deploying to Southwest Asia in support of OIF. We tested the implementation of these requirements at selected Army, Air Force, Marine Corps, and Navy installations. 
To identify military installations within each service where we would test implementation of the policies, we reviewed deployment data showing the location of units, by service and by military installation, that deployed to Southwest Asia in support of OIF or redeployed from Southwest Asia in support of OIF from June 1, 2003, through November 30, 2003. After examining these data, we selected the following military installations for review of selected servicemembers' medical records, because the installations had among the largest numbers of servicemembers who deployed or redeployed back to their home unit from June 1, 2003, through November 30, 2003: Fort Lewis, Wash. Fort Campbell, Ky. Fort Eustis, Va. Camp Lejeune, N.C. Camp Pendleton, Calif. Moody Air Force Base, Ga. Naval Construction Battalion Center, Gulfport, Miss. In comparing compliance rates for OIF with those for Operation Enduring Freedom (OEF) and Operation Joint Guardian (OJG), we reviewed the medical records of active duty Army and Air Force servicemembers at selected installations. For OIF, we reviewed active duty Army servicemembers' medical records at Fort Campbell and Fort Lewis and active duty Air Force servicemembers at Moody Air Force Base. For OEF and OJG, we reviewed active duty Army servicemembers' medical records at Fort Drum and Fort Campbell and active duty Air Force servicemembers at Travis Air Force Base and Hurlburt Field. Due to the length of Army deployments in support of OIF, we sampled two groups at the military installations consisting of (1) servicemembers who deployed within the selected time frame and (2) servicemembers who redeployed back to their home unit within the selected time frame. For the selected military installations, we requested officials in the Deployment Health Support Directorate, in the services' Surgeon General offices, or at the installations to provide a listing of those active-duty servicemembers who deployed to Southwest Asia in support of OIF for 30 or more continuous days to areas without permanent U.S. military treatment facilities or redeployed back to the military installation from June 1, 2003, through November 30, 2003. For Army reserve and National Guard servicemembers, we requested listings of those servicemembers who deployed during the period June 1, 2003, through January 31, 2004, and those servicemembers who redeployed from Southwest Asia in support of OIF from June 1, 2003, through December 31, 2003. For Marine Corps servicemembers at Camp Lejeune and Camp Pendleton, we modified our selection criteria to draw one sample because a number of servicemembers met the definition for both deployment and redeployment within our given time frames. Specifically, servicemembers at these installations had both deployed to Southwest Asia in support of OIF and redeployed back to their home unit from June 1, 2003, through November 30, 2003, staying for 30 or more continuous days. For our medical records review, we selected samples of servicemembers at the selected installations. For five of our servicemember samples, the universe was small enough that we reviewed the entire universe of medical records for the respective location. For the other locations, we drew probability samples from the larger universe. In all cases, records that were not available for review were researched in more detail by medical officials to determine the reason the medical record was not available so that the record could be deemed either in-scope or out-of-scope. 
For installations in which a sample was drawn, all out-of-scope cases were then replaced with another randomly selected record until the required sample size was met. For installations in which the universe was reviewed, the total number in the universe was adjusted accordingly. There were four reasons for which a medical record was unavailable and subsequently deemed out-of-scope for purposes of this review: 1. Charged to patient. When a patient goes to be seen in clinic (on-post or off-post), the medical record is physically given to the patient. The procedure is that the medical record will be returned following the clinic visit. 2. Expired term of service. Servicemember separates from the military and the medical record is sent to St. Louis, Missouri, and is therefore not available for review. 3. Permanent change of station. Servicemember is still in the military, but has transferred to another base. Medical record transfers with the servicemember. 4. Temporary duty off site. Servicemember has left the military installation, but is expected to return. The temporary duty is long enough to warrant the medical record accompanying the servicemember. There were a few instances in which medical records could not be accounted for by the medical records department. These records were deemed to be in-scope, counted as non-responses, and not replaced in the sample. The number of servicemembers in our samples and the applicable universe of servicemembers for the OIF deployments at the installations visited are shown in table 7. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn from the sampled installations. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. The 95 percent confidence intervals for percentage estimates are presented along with the estimates in figures and tables in this report. At each sampled location, we examined servicemember medical records for evidence of the following force health protection and deployment health-related documentation required by DOD's force health protection and deployment health surveillance policies: Pre- and post-deployment health assessments, as applicable; Tuberculosis screening test (within 1 year of deployment); Pre-deployment immunizations: hepatitis A; influenza (within 1 year of deployment); measles, mumps, and rubella; polio; tetanus-diphtheria (within 10 years of deployment); and typhoid (within 5 years of deployment); and Immunizations required prior to deployment or in theater: anthrax (at least one immunization); and smallpox. To provide assurances that our review of the selected medical records was accurate, we requested the installations' medical personnel to reexamine those medical records that were missing required health assessments or immunizations and adjusted our results where documentation was subsequently identified. We also requested that installation medical personnel check all possible sources for missing pre- and post-deployment health assessments and immunizations. 
These sources included automated immunization systems, including the Army’s Medical Protection System (MEDPROS), the Navy’s Shipboard Non-tactical Automated Data Processing Automated Medical System (SAMS), and the Air Force’s Comprehensive Immunization Tracking Application (CITA). In those instances where we did not find a deployment health assessment, we concluded that the assessments were not completed. Our analyses of the immunization records were based on our examination of servicemembers’ permanent medical records and immunizations recorded in the Army’s MEDPROS, the Navy’s SAMS, and the Air Force’s CITA. In analyzing our review results at each location, we considered documentation from all identified sources (e.g., the servicemember’s medical record, AMSA, and immunization tracking systems) in presenting data on compliance with deployment health surveillance policies.

To identify whether required blood samples were drawn for servicemembers prior to and after deployments, we requested that the AMSA staff query the DOD Serum Repository to identify whether the servicemembers in our samples had a blood sample in the repository not older than 1 year prior to their deployment, and to provide the dates that post-deployment blood samples were drawn.

To determine whether the services were documenting in-theater medical interventions in servicemembers’ medical records, we requested, at six of the seven installations visited for medical records review, the patient sign-in logs for in-theater medical care providers—such as the Army’s and Marine Corps’ battalion aid stations—when they were deployed to Southwest Asia in support of OIF. At the Army and Marine Corps locations, we randomly selected sick call visits from non-automated patient sign-in logs, but we randomly selected visits from the automated Global Expeditionary Medical Support (GEMS) system at Moody Air Force Base and from the automated SAMS at the Naval Construction Battalion Center. We did not attempt to judge the importance of the patient visit in making our selections. For the selected patient visits, we then reviewed the servicemember’s medical record for any documentation—such as the Standard Form 600—of the servicemember’s visit to the in-theater medical care providers.

To determine whether the services’ deployment health-related records were retained and maintained in a centralized location, we requested that officials at the AMSA query the AMSA database for the servicemembers included in our samples at the selected installations. For servicemembers in our samples, AMSA officials provided us with copies of deployment health assessments and immunization data found in the AMSA database. We analyzed the completeness of the AMSA database by comparing the deployment health assessments and the pre-deployment immunization data we found during our medical records review with those in the AMSA database. To identify the completeness of servicemember medical records, we then compared the data identified from the AMSA queries with the data we found during our medical records review.

To determine whether DOD has established an effective quality assurance program for ensuring that the military services comply with force health protection and surveillance policies, we interviewed officials within the Deployment Health Support Directorate, the offices of the services’ Surgeons General, and at the installations we visited for medical records review about their internal management control processes.
We also reviewed quality assurance policies and other documentation for ensuring compliance with force health protection and surveillance requirements.

We took several steps to ensure the reliability of the data we used in our review. DOD electronic lists of servicemembers who either deployed or redeployed within certain time frames were used to generate random samples for which primary data were then collected. We proceeded on the premise that the databases contained no systematic errors in the inclusion or exclusion of cases and that the random selection of records controlled for which records were reviewed. The final universe on which each sample size was based was adjusted to account for out-of-scope cases. In addition, we took mitigating measures to (1) avoid relying exclusively on the automated databases and (2) identify and resolve inconsistencies, as described below:

Personnel Deployment Databases. Because of concerns about the reliability of deployment data maintained by the Defense Manpower Data Center, we requested, in consultation with officials at the Deployment Health Support Directorate, personnel deployment data from the military installations selected for medical records review. DOD officials believed that the military installations were the most reliable sources for accurate personnel deployment data because servicemembers are deployed from, or redeployed to, these sites. However, we remained alert for indications of errors as we reviewed servicemember medical records and investigated situations that appeared questionable.

Automated Immunization Databases. Service policies require that immunizations be documented in the servicemember’s medical record. For the most part, immunizations are documented on Department of Defense Form 2766. The services also use automated immunization systems—the Army uses MEDPROS, the Air Force uses CITA, and the Navy/Marine Corps use SAMS. We did not rely exclusively on either of these sources (Department of Defense Form 2766 or the automated immunization systems). For servicemembers in our samples, we reviewed both the servicemembers’ medical records and queries of the services’ automated immunization systems for each servicemember. If we found documentation of the required immunizations in either source, we considered the immunization documented because it was evident that the immunization was given.

AMSA Centralized Database. DOD policy requires that pre- and post-deployment health assessments be documented in the servicemember’s medical record and also that a copy be sent to AMSA for inclusion in the centralized database. We did not rely exclusively on the AMSA centralized database for determining compliance with force health protection and surveillance policies. For servicemembers in our samples, we reviewed both the servicemember’s medical record and queries of the AMSA centralized database for health assessments and immunizations for the servicemember. If we found documentation of the required pre- or post-deployment health assessments or immunizations in either source, we considered the servicemember as having met the requirement for health assessments and immunizations.

Our review was performed from November 2003 through August 2004 in accordance with generally accepted government auditing standards.
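As a rough illustration of the estimation approach described in this appendix—counting a requirement as met if it was documented in either the paper medical record or the service’s automated system, and reporting a 95 percent confidence interval around the resulting compliance rate—the following is a minimal sketch. The record values, universe size, and field layout are hypothetical; this is not GAO’s actual estimation program, which accounted for the specific sample design at each installation.

# Illustrative sketch only: estimate a compliance rate and its 95 percent
# confidence interval from a simple random sample drawn from a known universe,
# counting a requirement as documented if evidence appears in either the paper
# medical record or the automated immunization system. All data are hypothetical.
import math

def documented(record_flag: bool, automated_flag: bool) -> bool:
    """A requirement counts as documented if evidence appears in either source."""
    return record_flag or automated_flag

def compliance_ci(sample, universe_size, z=1.96):
    """Return (estimate, lower, upper) for the compliance proportion,
    applying a finite population correction for sampling without replacement."""
    n = len(sample)
    met = sum(1 for rec, auto in sample if documented(rec, auto))
    p = met / n
    fpc = math.sqrt((universe_size - n) / (universe_size - 1))
    half_width = z * math.sqrt(p * (1 - p) / n) * fpc
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical sample of (found in medical record, found in automated system) pairs.
sample = [(True, False), (True, True), (False, True), (False, False)] * 50
estimate, low, high = compliance_ci(sample, universe_size=600)
print(f"Estimated compliance: {estimate:.1%} (95% CI {low:.1%} to {high:.1%})")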
In addition to the individual named above, Steve Fox, Rebecca Beale, Margaret Holihan, Lynn Johnson, Susan Mason, William Mathers, Clara Mejstrik, Christopher Rice, Terry Richardson, Kristine Braaten, Grant Mallie, Jean McSween, Julia Matta, John Van Schaik, and R.K. Wild made key contributions to this report.
A lack of servicemember health and deployment data hampered investigations into the nature and causes of illnesses reported by many servicemembers following the 1990-91 Persian Gulf War. Public Law 105-85, enacted in November 1997, required the Department of Defense (DOD) to establish a system to assess the medical condition of servicemembers before and after deployments. Following its September 2003 report examining Army and Air Force compliance with DOD's force health protection and surveillance policies for Operation Enduring Freedom (OEF) and Operation Joint Guardian (OJG), GAO was asked in November 2003 to also determine (1) the extent to which the services met DOD's policies for Operation Iraqi Freedom (OIF) and, where applicable, compare results with OEF/OJG; and (2) what steps DOD has taken to establish a quality assurance program to ensure that the military services comply with force health protection and surveillance policies. Overall compliance with DOD's force health protection and surveillance policies for servicemembers that deployed in support of OIF varied by service, installation, and policy requirement. Such policies require that servicemembers be assessed before and after deploying overseas and receive certain immunizations, and that health-related documentation be maintained in a centralized location. GAO reviewed 1,862 active duty and selected reserve component servicemembers' medical records from a universe of 4,316 at selected military service installations participating in OIF. Overall, Army and Air Force compliance for sampled servicemembers for OIF appears much better compared to OEF and OJG. For example, (1) lower percentages of Army and Air Force servicemembers were missing pre- and post-deployment health assessments for OIF; (2) higher percentages of Army and Air Force servicemembers received required pre-deployment immunizations for OIF; and (3) lower percentages of deployment health-related documentation were missing in servicemembers' permanent medical records and at DOD's centralized database for OIF. The Marine Corps installations examined generally had lower levels of compliance than the other services; however, GAO did not review medical records from the Marines or Navy for OEF and OJG. Noncompliance with the requirements for health assessments may result in deployment of servicemembers with existing health problems or concerns that are unaddressed. It may also delay appropriate medical follow-up for a health problem or concern that may have arisen during or after deployment. In January 2004, DOD established an overall deployment quality assurance program for ensuring that the services comply with force health protection and surveillance policies, and implementation of the program is ongoing. DOD's quality assurance program requires (1) reporting from DOD's centralized database on each service's submission of required pre-deployment and post-deployment health assessments for deployed servicemembers, (2) reporting from each service regarding the results of the individual service's deployment quality assurance program, and (3) joint DOD and service representative reviews at selected military installations to validate the service's deployment health quality assurance reporting. DOD officials believe that their quality assurance program has improved the services' compliance with requirements. However, the services are at different stages of implementing their own quality assurance programs as mandated by DOD. 
At the installations visited, GAO analysts observed that the Army and Air Force had centralized quality assurance processes in place that extensively involved medical personnel examining whether DOD's force health protection and surveillance requirements were met for deploying/re-deploying servicemembers. In contrast, GAO analysts observed that the Marine Corps installations did not have well-defined quality assurance processes for ensuring that requirements were met for servicemembers.
Until 1993, most forces based in the United States were not assigned to a single geographic command. Due to their location, these forces had limited opportunities to train jointly with the overseas-based forces they would join in time of crisis or war. The lack of a joint headquarters to oversee the forces of the four military services based in the continental United States (CONUS) was long considered a problem that the Joint Chiefs of Staff tried twice to fix. The concept of a joint headquarters for U.S.-based forces resurfaced again at the end of the Cold War and led to the establishment of the U.S. Atlantic Command (USACOM) in 1993 as the unified command for most forces based in CONUS.

With the fall of the Berlin Wall and the collapse of the Eastern European communist regimes in 1989, the Cold War was over and a new world order began. Senior Department of Defense (DOD) leadership began considering the implications of such changes on the Department. They recognized that the end of the Cold War would result in reduced defense budgets and forces, especially overseas-based forces, and more nontraditional, regional operations such as peacekeeping and other operations short of a major theater war. In developing a CONUS power projection strategy, they looked at options for changing the worldwide command structure, which included establishing an Americas Command. The initial concept for an Americas Command—a command that would have geographic responsibility for all of North and South America—was not widely accepted by DOD leadership. However, the Chairman, Joint Chiefs of Staff, General Colin Powell, and other senior military leaders during the early 1990s increased attention to the need to place all CONUS-based forces under one joint command to respond to worldwide contingencies. Factors influencing this concept were the anticipation that the overall DOD force drawdown would increase reliance on CONUS-based forces and that joint military operations would become predominant. Chairman Powell believed such a command was needed because CONUS-based forces remained service-oriented. These forces needed to train to operate jointly as a way of life and not just during an occasional exercise.

The concept of one command providing joint training to CONUS-based forces and deploying integrated joint forces worldwide to meet contingency operations was recommended by Chairman Powell in a 1993 report on roles and missions to the Secretary of Defense. The mission of this command would be to train and deploy CONUS-based forces as a joint team, and the Chairman concluded that the U.S. Atlantic Command was best suited to assume this mission. The Chairman’s 1993 report on roles and missions led to an expansion of the roles of the U.S. Atlantic Command. Most notably, the Secretary of Defense, upon review of the Chairman’s report, endorsed the concept of one command overseeing the joint training, integrating, and deploying of CONUS-based forces. With this lead, but without formal guidance from the Joint Staff, USACOM leadership began developing plans to expand the Command. As guidance and the plan for implementing the Command’s expanded roles developed, DOD’s military leadership raised many issues.
Principal among these issues was whether (1) all CONUS-based forces would come under the Command, including those on the west coast; (2) the Commander in Chief (Commander) of USACOM would remain the Commander of NATO’s Supreme Allied Command, Atlantic; and (3) the Command would retain a geographic area of responsibility along with its functional responsibilities as joint force integrator. While these issues were settled early by the Secretary of Defense, some issues were never fully resolved, including who would be responsible for developing joint force packages for deployment overseas in support of operations and numerous concerns about who would have command authority over forces. This lack of consensus on the expansion and implementation of USACOM was expressed in key military commands’ review comments and objections to USACOM’s implementation plan and formal changes to the Unified Command Plan. Table 1.1 provides a chronology of key events that led to giving the U.S. Atlantic Command the new responsibilities for training, integrating, and providing CONUS-based forces for worldwide operations.

The USACOM implementation plan and revised Unified Command Plan, both issued in October 1993, provided the initial approval and guidance for expanding the responsibilities of the U.S. Atlantic Command. The Unified Command Plan gave USACOM “additional responsibilities for the joint training, preparation, and packaging of assigned CONUS-based forces for worldwide employment” and assigned it four service component commands. The implementation plan provided the institutional framework and direction for establishing USACOM as the “Joint Force Integrator” of the bulk of CONUS-based forces. As the joint force integrator, USACOM was to maximize America’s military capability through joint training, force integration, and deployment of ready CONUS-based forces to support geographic commanders, its own, and domestic requirements. This mission statement, detailed in the implementation plan, evolved into USACOM’s functional roles as joint force trainer, provider, and integrator.

The USACOM implementation plan was developed by a multiservice working group for the Chairman, Joint Chiefs of Staff, and approved by the Secretary of Defense and the Chairman. The plan provided USACOM the basic concept of its mission, responsibilities, and forces. It further detailed the basic operational concept to be implemented in six areas. Three of these areas of particular relevance to USACOM’s new functional roles were (1) the adaptive joint force packaging concept; (2) joint force training and interoperability concepts; and (3) USACOM joint doctrine and joint tactics, techniques, and procedures. The Command was given 12 to 24 months to complete the transition.

The Unified Command Plan is reviewed and updated not less than every 2 years. In 1997, USACOM’s functional roles were revised in the plan for the first time to include the following:

Conduct joint training of assigned forces and assigned Joint Task Force staffs, and support other unified commands as required.

As joint force integrator, develop joint, combined, and interagency capabilities to improve interoperability and enhance joint capabilities through technology, systems, and doctrine.

Provide trained and ready joint forces in response to the capability requirements of supported geographic commands.

Overview of USACOM

DOD has nine unified commands, each of which comprises forces from two or more of the military departments and is assigned broad continuing missions.
These commands report to the Secretary of Defense, with the Chairman of the Joint Chiefs of Staff functioning as their spokesman. Four of the commands are geographic commands that are primarily responsible for planning and conducting military operations in assigned regions of the world, and four are functional commands that support military operations. The ninth command, USACOM, is unique in that it has both geographic and functional missions. Figure 1.1 shows the organizational structure of the unified commands.

In addition to its headquarters staff, USACOM has several subordinate commands, such as U.S. Forces Azores, and its four service component commands—the Air Force’s Air Combat Command, the Army’s Forces Command, the Navy’s Atlantic Fleet Command, and the Marine Corps’ Marine Corps Forces Atlantic. Appendix I shows USACOM’s organizational structure. USACOM’s service component commands comprise approximately 1.4 million armed forces personnel, or about 80 percent of the active and reserve forces based in the CONUS, and more than 65 percent of U.S. active and reserve forces worldwide. Figure 1.2 shows the areas of the world and percentage of forces assigned to the geographic commands.

While USACOM’s personnel levels gradually increased in its initial years of expansion—from about 1,600 in fiscal year 1994 to over 1,750 in fiscal year 1997—its civilian and military personnel level dropped to about 1,600 in fiscal year 1998, primarily because part of USACOM’s geographic responsibilities was transferred to the U.S. Southern Command. During this period, USACOM’s operations and maintenance budget, which is provided for through the Department of the Navy, grew from about $50 million to about $90 million. Most of the increase was related to establishing the Joint Training, Analysis and Simulation Center, which provides computer-assisted training to joint force commanders, staff, and service components. The Command’s size increased significantly in October 1998, when five activities controlled by the Chairman, Joint Chiefs of Staff, and their approximately 1,100 personnel were transferred to USACOM. The Secretary of Defense also assigned USACOM authority and responsibility for DOD’s joint concept development and experimentation in 1998. An initial budget of $30 million for fiscal year 1999 for these activities was approved by DOD. USACOM estimates it will have 151 personnel assigned to these activities by October 2000.

In response to congressional interest in DOD’s efforts to improve joint operations, we reviewed the assimilation of USACOM into DOD as the major trainer, provider, and integrator of forces for worldwide deployment. More specifically, we determined (1) USACOM’s actions to establish itself as the joint force trainer, provider, and integrator of most continental U.S.-based forces; (2) views on the value of the Command’s contributions to joint military capabilities; and (3) recent expansion of the Command’s responsibilities and its possible effect on the Command. We focused on USACOM’s functional roles; we did not examine the rationale for USACOM’s geographic and NATO responsibilities or the effect of these responsibilities on the execution of USACOM’s functional roles. To accomplish our objectives, we met with officials and representatives of USACOM and numerous other DOD components and reviewed studies, reports, and other documents concerning the Command’s history and its activities as a joint trainer, provider, and integrator.
We performed our fieldwork from May 1997 to August 1998. A more detailed discussion of the scope and methodology of our review, including organizations visited, officials interviewed, and documents reviewed, is in appendix II. Our review was performed in accordance with generally accepted government auditing standards. In pursuing its joint force trainer role, USACOM has generally followed its 1993 implementation plan, making notable progress in developing a joint task force commander training program and establishing a state-of-the-art simulation training center. The joint force provider and integrator roles were redirected with the decision, in late 1995, to deviate from the concept of adaptive joint force packages, a major element of the implementation plan. For its role as joint force provider, USACOM has adopted a process-oriented approach that is less proactive in meeting force requirements for worldwide deployments and is more acceptable to supported geographic commanders. To carry out its integrator role, USACOM has adopted an approach that advances joint capabilities and force interoperability through a combination of technology, systems, and doctrine initiatives. USACOM planned to improve joint force training and interoperability through six initiatives laid out in its implementation plan. The initiatives were to (1) improve the exercise scheduling process, (2) develop mobile training teams, (3) train joint task force commanders and staffs, (4) schedule the use of service ranges and training facilities for joint training and interoperability, (5) assist its service components in unit-level training intended to ensure the interoperability of forces and equipment, and (6) develop a joint and combined (with allied forces) training program for U.S. forces in support of nontraditional missions, such as peacekeeping and humanitarian assistance. USACOM has taken actions on the first two initiatives and has responded to the third, fifth, and sixth initiatives through its requirements-based joint training program. While the fourth initiative was included in the Command’s implementation plan, USACOM subsequently recognized that it did not have the authority to schedule training events at the service-owned ranges and facilities. The Chairman of the Joint Chiefs of Staff initially gave USACOM executive agent authority (authority to act on his behalf) for joint training, including the scheduling of all geographic commander training exercises, USACOM’s first initiative. In September 1996, the Chairman removed this authority in part because of resistance from the other geographic commands. By summer 1997, the Chairman, through the Joint Training Policy, again authorized USACOM to resolve scheduling conflicts for worldwide training. While USACOM maintains information on all training that the services’ forces are requested to participate in, the information is not adequately automated to enable the Command to efficiently fulfill the scheduling function. The Command has defined the requirement for such information support and is attempting to determine how that requirement will be met. USACOM does provide mobile training teams to other commands for training exercises. Generally, these teams cover the academic phase of the exercises. The Command, for example, sent a training team to Kuwait to help the Central Command prepare its joint task force for a recent operation. 
It also has included training support, which may include mobile training teams, for the other geographic commanders in its long-range joint training schedule.

To satisfy its third, fifth, and sixth initiatives, USACOM has developed a joint training program that reflects the supported geographic commanders’ stated requirements. These are expressed as joint tasks essential to accomplishing assigned or anticipated missions (joint mission-essential tasks). The Command’s training program is derived from the six training categories identified in the Chairman of the Joint Chiefs of Staff’s joint training manual, which are described in appendix III. USACOM primarily provides component interoperability and joint training and participates in and supports multinational interoperability, joint and multinational, and interagency and intergovernmental training. The Command’s primary focus has been on joint task force training under guidance provided by the Secretary of Defense.

Joint training, conducted primarily at USACOM’s Joint Training, Analysis and Simulation Center, encompasses a series of exercises—Unified Endeavor—that provide training for joint force commanders and their staffs. The training focuses on operational and strategic tasks and has evolved into a multiphased exercise. USACOM uses state-of-the-art modeling and simulation technology and different exercise modules that allow the exercise to be adapted to meet the specific needs of the training participants. For example, one module provides the academic phase of the training and another module provides all phases of an exercise. Until recently, the exercises generally included three phases, but USACOM added analysis as a fourth phase. Phase I includes a series of seminars covering a broad spectrum of operational topics. Participants develop a common understanding of joint issues. Phase II presents a realistic scenario in which the joint task force launches crisis action planning and formulates an operations order. Phase III implements the operations order through a computer-simulated exercise that focuses on joint task force procedures, decision-making, and the application of doctrine. Phase IV, conducted after the exercise, covers lessons learned, joint after-action reviews, and the commander’s exercise report.

USACOM and others consider the Command’s Joint Training, Analysis and Simulation Center to be a world premier center of next-generation computer modeling and simulation and a centerpiece for joint task force training. The Center is equipped with secured communications and video capabilities that enable commands around the world to participate in its exercises. These capabilities allow USACOM to conduct training without incurring the significant expenses normally associated with large field training exercises and help reduce force personnel and operating tempos. For example, before the Center was created, a joint task force exercise would require approximately 45,000 personnel at sea or in the field. With the Center, only about 1,000 headquarters personnel are involved. As of December 1998, USACOM had conducted seven Unified Endeavor exercises and planned to provide varying levels of support to at least 17 exercises—Unified Endeavor and otherwise—per year during fiscal years 1999-2001. Figure 2.1 shows one of the Center’s rooms used for the Unified Endeavor exercises.

We attended the Unified Endeavor 98-1 exercise to observe firsthand the training provided in this joint environment.
While smooth joint operations evolved over the course of the exercise, service representatives initially tended to view problems and pressure situations from a service rather than a joint perspective. The initial phase allowed the key officers and their support staff, including foreign participants, to grasp the details of the scenario. These details included the basic rules of engagement and discussions of what had to be accomplished to plan the operation. In the exercise’s second phase, staff from the participating U.S. and foreign military services came together to present their proposals for deploying and employing their forces. As the exercise evolved, service representatives came to appreciate the value and importance of coordinating every aspect of their operations with the other services and the joint task force commander. The third phase of the exercise presented a highly stressful environment. The joint task force commander and his staff were presented with numerous unknowns and an overwhelming amount of information. Coordination and understanding among service elements became paramount to successfully resolving these situations.

For interoperability training, units from more than one of USACOM’s service components are brought together in field exercises to practice their skills in a joint environment. USACOM sponsors three recurring interoperability exercises in which the Command coordinates the training opportunities for its component commands, provides specific joint mission-essential tasks for incorporation into the training, and approves the exercise’s design. The goal of the training is to ensure that U.S. military personnel and units are not confronted with a joint warfighting task for the first time after arrival in a geographic command’s area of responsibility. For example, USACOM sponsors a recurring combat aircraft flying exercise—Quick Force—that is designed to train Air Force and participating Navy and Marine Corps units in joint air operations tailored to Southwest Asia. This exercise is devised to train commanders and aircrews to plan, coordinate, and execute complex day and night, long-range joint missions from widely dispersed operating locations.

USACOM relies on its service component commands to plan and execute interoperability training as part of existing service field exercises. According to USACOM’s chief for joint interoperability training, the service component commanders are responsible for evaluating the joint training proficiency demonstrated. The force commander of the exercise is responsible for the accomplishment of joint training objectives and for identifying any operational deficiencies in doctrine, training, material, education, and organization. USACOM provides monitors to evaluate exercise objectives.

Until recently, USACOM gave limited attention to interoperability training because its primary focus was on its Unified Endeavor training program. As that training has matured, USACOM has begun to devote more attention to fully developing and planning the Command’s interoperability training. The Command recently developed, with concurrence from the other geographic commanders, a list of joint interoperability tasks tied to the services’ mission-essential task lists. With the development and acceptance of these joint interoperability tasks, Command officials believe that their joint interoperability exercises will have a better requirements base from which to plan and execute.
Also, USACOM is looking for ways to better tie these exercises to computer-assisted modeling. USACOM provides joint and multinational training support through its coordination of U.S. participation in “partnership for peace” exercises. The partnership for peace exercise program is a major North Atlantic Treaty Organization (NATO) initiative directed at increasing confidence and cooperative efforts among partner nations to reinforce regional stability. The Command was recently designated the lead activity in the partnership for peace simulation center network. USACOM also supports training that involves intergovernmental agencies. Its involvement is primarily through support to NATO, as Supreme Allied Commander, Atlantic, and to non-DOD agencies. For example, USACOM has begun including representatives of other federal agencies, such as the State Department and Drug Enforcement Administration, in its Unified Endeavor exercises. USACOM has made substantive changes to its approach to providing forces. Adaptive joint force packaging was to have been the foundation for implementing its force provider role. When this concept encountered strong opposition, USACOM adopted a process-oriented approach that is much less controversial with supported geographic commands and the military services. With over 65 percent of all U.S. forces assigned to it, USACOM is the major source of forces for other geographic commands and for military support and assistance to U.S. civil agencies. However, its involvement in force deployment decisions varies from operation to operation. The Command also helps its service components manage the operating tempos of heavily used assets. USACOM’s implementation plan introduced the operational concept of adaptive joint force packages as an approach for carrying out USACOM’s functional roles, particularly the provider and integrator roles. Under this approach, USACOM would develop force packages for operations less than a major regional war and complement, but not affect, the deliberate planning process used by geographic commanders to plan for major regional wars. USACOM’s development of these force packages, using its CONUS-based forces, was conceived as a way to fill the void created by reductions in forward-positioned forces and in-theater force capabilities in the early 1990s. It was designed to make the most efficient use of the full array of forces and capabilities of the military services, exploring and refining force package options to meet the geographic commanders’ needs. The approach, however, encountered much criticism and resistance, particularly from other geographic commands and the military services, which did not want or value a significant role for USACOM in determining which forces to use in meeting mission requirements. Because of this resistance and the unwillingness of the Chairman of the Joint Chiefs of Staff to support USACOM in its broad implementation of the force packaging concept, USACOM largely abandoned it in 1995 and adopted a process-oriented approach. Adaptive joint force packages and their demise are discussed in appendix IV. The major difference between the adaptive joint force packaging concept and the process-oriented approach that replaced it is that the new approach allows the supported geographic commander to “package” the forces to suit his mission needs. In essence, USACOM prepares the assets, which are put together as the supported commander sees fit rather than having ready-to-go packages developed by USACOM. 
The new approach retains aspects of the force packaging concept. Most notably, geographic commanders are to present their force requirements in terms of the capability needed, not in the traditional terms of requests for specific units or forces. Forces are to be selected by the supported commanders, in collaboration with USACOM, from across the services to avoid over-tasking any particular force. The process is shown in figure 2.2 and discussed in more detail in appendix V. USACOM, commanding nearly 68 percent of the combat forces assigned to geographic commands, is the major provider of forces for worldwide operations. The size of its assigned forces far exceeds the requirements for operations within the Command’s area of responsibility, which is much less demanding than that of other geographic commands. As a result, USACOM can provide forces to all the geographic commands, and its forces participate in the majority of military operations. The Command also provides military support and assistance to civil authorities for domestic requirements, such as hurricane relief and security at major U.S. events. During 1998, USACOM supported over 25 major operations and many other smaller operations worldwide. These ranged from peacekeeping and humanitarian assistance to evacuation of U.S. and allied nationals from threatened locations. On average, USACOM reported that it had over 30 ships, 400 aircraft, and 40,000 personnel deployed throughout 1998. The Pacific, European, and Special Operations Commands also have assigned forces, but they are unable to provide the same level of force support to other commands as USACOM. The Pacific Command has large Navy and Marine Corps forces but has limited Army and Air Force capabilities. European Command officials said their Command rarely provides forces to other commands because its forces are most often responding to requirements in their own area of responsibility. The Special Operations Command provides specialized forces to other commands for unique operations. The Central and Southern Commands have very few forces of their own and are dependent on force providers such as USACOM to routinely furnish them with forces. USACOM provides forces throughout the world for the entire range of military operations, from war to operations other than war that may or may not involve combat. Since the Gulf War in 1991, the U.S. military has largely been involved in operations that focus on promoting peace and deterring war, such as the U.S. military support to the NATO peacekeeping mission in Bosnia and the enforcement of U.N. sanctions against Iraq. The extent of USACOM’s involvement in force decisions varies from operation to operation. In decisions regarding deployment of major combatant forces, the Command plays a very limited role. The military services and USACOM’s service components collaborate on such decisions. Although USACOM’s interaction with geographic commands and service components may influence force decisions, USACOM’s Commander stated that when specific forces are requested by a geographic commander, his Command cannot say “no” if those forces are available. USACOM is not directly involved in the other geographic commands’ deliberate planning—the process for preparing joint operation plans—except when there is a shortfall in the forces needed to implement the plan or the supported commander requests USACOM’s involvement. 
Every geographic command is to develop deliberate plans during peacetime for possible contingencies within its area of responsibility as directed by the national command authority and the Chairman of the Joint Chiefs of Staff. As a supporting commander, USACOM and its service component commands examine the operation plans of other commands to help identify shortfalls in providing forces as needed to support the plans. USACOM’s component commands work more closely with the geographic commands and their service components to develop the deployment data to sequence the movement of forces, logistics, and transportation to implement the plan.

During crises, for which an approved operation plan may not exist, the responsible geographic command either adjusts an existing plan or develops a new one to respond to specific circumstances or taskings. The time available for planning may be hours or days. The supported commander may request inputs on force readiness and force alternatives from USACOM and its component commands. A European Command official said USACOM is seldom involved in his Command’s planning process for crisis operations because of the compressed planning time before the operation commences.

USACOM has its greatest latitude in suggesting force options for military operations other than war that do not involve combat operations, such as nation assistance and overseas presence operations, and for ongoing contingency operations. In these situations, time is often not as critical and USACOM can work with the supported command and component commands to develop possible across-the-service force options.

A primary consideration in identifying and selecting forces for deployment is the operating and personnel tempos of the forces, which affect force readiness. As a force provider, USACOM headquarters supports its service component commands in resolving tempo issues and monitors the readiness of assigned forces and the impact of deployments on major contingency and war plans. While tempo issues are primarily a service responsibility, USACOM works with its service component commands and the geographic commands to help balance force tempos to maintain the readiness of its forces and desired quality-of-life standards. This involves analyzing tempo data across its service components and developing force alternatives for meeting geographic commands’ needs within tempo guidelines.

According to USACOM officials, the Command devotes much attention to managing certain assets with unique mission capabilities that are limited in number and continually in high demand among the geographic commands to support most crises, contingencies, and long-term joint task force operations in their regions. These low-density/high-demand assets, such as Airborne Warning and Control System aircraft, E/A-6B electronic warfare aircraft, and Patriot missile batteries, are managed under the Chairman of the Joint Chiefs of Staff’s Global Military Force Policy. This policy, which guides decisions on the peacetime use of assets that are few in number but high in demand, establishes prioritization guidelines for their use and operating tempo thresholds that can be exceeded only with Secretary of Defense approval. The policy, devised in 1996, is intended to maintain required levels of unit training and optimal use of the assets across all geographic commander missions, while discouraging the overuse of selected assets.
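To illustrate how tempo thresholds of the kind just described might be monitored, the following is a minimal sketch; the asset names, day counts, thresholds, and approval rule are hypothetical and do not reflect the actual terms of the Global Military Force Policy or any DOD system.

# Hypothetical sketch of tempo-threshold monitoring in the spirit of the policy
# described above; thresholds and the approval rule are illustrative only.
from dataclasses import dataclass

@dataclass
class AssetTempo:
    asset: str
    days_deployed_this_year: int
    annual_threshold_days: int  # notional tempo ceiling for this asset type

def exceeds_threshold(tempo: AssetTempo, proposed_tasking_days: int) -> bool:
    """Flag a proposed tasking that would push the asset past its tempo threshold
    and, in this sketch, require higher-level approval."""
    return tempo.days_deployed_this_year + proposed_tasking_days > tempo.annual_threshold_days

fleet = [
    AssetTempo("AWACS squadron A", days_deployed_this_year=150, annual_threshold_days=180),
    AssetTempo("Patriot battery B", days_deployed_this_year=90, annual_threshold_days=180),
]
for t in fleet:
    status = "would exceed threshold; approval needed" if exceeds_threshold(t, 45) else "within threshold"
    print(f"{t.asset}: {status}")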
USACOM is responsible for 16 of the 32 low-density/high-demand assets—weapon systems and personnel units—that are included in the Global Military Force Policy. The Pacific and European Commands have some of these 16 assets, but the bulk of them are assigned to USACOM. These assets are largely Air Force aircraft. In this support role, USACOM has initiated several actions to help implement the policy, including bringing the services and geographic commands together to resolve conflicts over the distribution of assets, devising a monitoring report for the Joint Staff, and recommending to the services assets that should be included in future policy revisions. Appendix VI provides a list of the low-density/high-demand assets currently assigned to USACOM. The Global Military Force Policy does not capture all of the highly tasked assets. For example, the policy does not include less prominent assets such as dog teams, military security police, water purification systems, intelligence personnel, and medical units. There were similar concerns about the high operating tempos of these assets, and USACOM has monitored them closely. Most of these assets, or alternatives to them, were available across the services. Therefore, USACOM has some flexibility in identifying alternative force options to help balance unit tempos. Another Joint Staff policy affecting USACOM as a force provider is the Global Naval Force Presence Policy. This policy establishes long-range planning guidance for the location and number of U.S. naval forces—aircraft carriers and surface combatant and amphibious ships—provided to geographic commands on a fair-share basis. Under this scheduling policy, the Navy controls the operating and personnel tempos for these heavily demanded naval assets, while it ensures that geographic commands’ requirements are met. USACOM has little involvement in scheduling these assets. While this policy provides little flexibility for creating deployment options in most situations, it can be adjusted by the Secretary of Defense to meet unexpected contingencies. According to an action officer in USACOM’s operations directorate, one of USACOM’s difficulties in monitoring tempos has been the lack of joint tempo guidelines that could be applied across service units and assets. Each service has different definitions of what constitutes a deployment, dissimilar policies or guidance for the length of time units or personnel should be deployed, and different systems for tracking deployments. For example, the Army defined a deployment as a movement during which a unit spends an overnight away from its home station. Deployments to combat training centers were not counted. In contrast, the Marine Corps defines a deployment as any movement from the home station for 10 days or more, including a deployment for training at its combat training center. As a result, it is difficult to compare tempos among the services. An official in USACOM’s operations directorate said the services would have to develop joint tempo guidelines because they have the responsibility for managing the tempos of their people and assets. The official did not anticipate a movement anytime soon to create such guidelines because of the differences in the types of assets and in the management and deployment of the assets. DOD, in responding to a 1998 GAO report on joint training, acknowledged that the services’ ability to measure overall deployment rates is still evolving. The integrator role has changed significantly since 1993 and is still evolving. 
It was originally tied to adaptive joint force packaging. But with that concept’s demise, the Command’s role became to implement a process to improve interoperability and enhance joint force capabilities through the blending of technology, systems, and doctrine. The Command’s force integration objectives are to (1) identify and refine doctrinal issues affecting joint force operations; (2) identify, develop, evaluate, and incorporate new and emerging technologies to support joint operations; and (3) refine and integrate existing systems to support joint operations. The Command’s emphasis since 1996 has been to sponsor advanced concept technology demonstration projects that have a multiservice emphasis and search for solutions to joint interoperability problems among advanced battle systems. It has given limited attention to joint doctrinal issues. Establishing its integration role has not been easy for USACOM. USACOM’s Commander (1994-97) characterized the Command’s integration efforts as a “real struggle” and said the Joint Staff was not supportive. The current USACOM Commander expressed similar comments, citing the integration role as the most challenging yet promising element of his Command’s mission. He told us the Command stumbled at times and overcame numerous false starts until its new integration role emerged. He said that as USACOM’s functional roles mature, the Command may create more friction with the services and other commands, many of which view USACOM as a competitor. Its efforts were significantly enhanced with the October 1998 transfer to the Command of five joint centers and activities previously controlled by the Chairman of the Joint Chiefs of Staff (see ch. 4). USACOM’s primary means to fulfill its integration role has been to sponsor advanced concept technology demonstration projects. These projects are designed to permit early and inexpensive evaluations of mature advanced technologies to meet the needs of the warfighter. The Command considered such projects to be the best way to achieve integration by building new systems that are interoperable from the beginning. The warfighter determines the military utility of the project before a commitment is made to proceed with acquisition. These projects also allow for the development and refinement of operational concepts for using new capabilities. As an advanced concept technology demonstration project sponsor, USACOM provides an operations manager to lead an assessment to determine the project’s joint military utility and to fully understand its joint operational capability. The Command also provides the personnel for the projects and writes the joint doctrine and concepts of operation to effectively employ these technologies. USACOM only accepts projects that promote interoperability and move the military toward new levels of effectiveness in joint warfighting. Various demonstration managers, such as the Deputy Under Secretary of Defense for Acquisition and Technology, fund the projects. At the completion of our review, USACOM was sponsoring 12 of DOD’s 41 active advanced concept technology demonstrations. It completed work in 1996 on the Predator project, a medium-altitude unmanned aerial vehicle that the Air Force is to acquire. Table 2.1 identifies each USACOM project and its funding through fiscal year 2003. 
We issued a report in October 1998 on opportunities for DOD to improve its advanced concept technology demonstration program, including the process for selecting candidate projects and guidance on entering technologies into the normal acquisition process, and the risky practice of procuring prototypes beyond those needed for the basic demonstration and before completing product and concept demonstration. In addition to its advanced concept technology demonstration projects, USACOM has sought opportunities to advance the interoperability of systems already deployed or about to be deployed that make a difference on the battlefield. Particularly critical capabilities USACOM has identified for interoperability enhancements include theater missile defense; command, control, and communications; intelligence, surveillance, and reconnaissance; and combat identification (friend or foe). The military services have a long history of interoperability problems during joint operations, primarily because DOD has not given sufficient consideration to the need for weapon systems to operate with other systems, including exchanging information effectively during a joint operation. We reported on such weaknesses in the acquisition of command, control, communications, computers, and intelligence systems in March 1998. A critical question is who pays the costs associated with joint requirements that USACOM identifies in service acquisition programs? The services develop weapon system requirements, and the dollars pass from the Secretary of Defense to the services to satisfy the requirements. If USACOM believes modifications are needed to a weapon system to enable it to operate in a joint environment, the Command can elevate this interoperability issue to the Chairman of the Joint Chiefs of Staff and to the Joint Requirements Oversight Council for action. For example, the USACOM Commander recently told the Chairman and the Council that the Air Force’s unwillingness to modify the Predator and the concept of operations to allow other services to directly receive information from the unmanned aerial vehicle would limit a joint commander’s flexibility in using such vehicles, hurt interoperability, and inhibit the development of joint tactics. According to USACOM’s Operations Manager for this area, the Air Force needs to provide additional funding to make the Predator truly joint but it wants to maintain operational control of the system. As of November 1998, this interoperability concern had not been resolved. USACOM can also enhance force integration through its responsibility as the trainer and readiness overseer of assigned reserve component forces. This responsibility allows USACOM to influence the training and readiness of these reserves and their budgets to achieve full integration of the reserve and active forces when the assigned reserves are mobilized. This is important because of the increased reliance on reserve component forces to carry out contingency missions. The USACOM Commander (1993-97) described the Command’s oversight as a critical step in bringing the reserve forces into the total joint force structure. USACOM and others believe that the Command has helped advance the joint military capabilities of U.S. forces. While USACOM has conducted several self-assessments of its functional roles, we found that these assessments provided little insight into the overall value of the Command’s efforts to enhance joint capabilities. 
The Command has established goals and objectives as a joint trainer, provider, and integrator and is giving increased attention to monitoring and accomplishing tasks designed to achieve these objectives and ultimately enhance joint operational capabilities. Our discussions with various elements of DOD found little consensus regarding the value of USACOM’s contributions in its functional roles but general agreement that the Command is making important contributions that should enhance U.S. military capabilities. USACOM has conducted three self-assessments of its functional roles. These appraisals did not specifically evaluate the Command’s contribution to improving joint operational capabilities but discussed progress of actions taken in its functional roles. The first two appraisals covered USACOM’s success in executing its plan for implementing the functional roles, while the most recent appraisal rated the Command’s progress in each of its major focus areas. In quarterly reports to the Secretary of Defense and in testimony before the Congress, USACOM has presented a positive picture of its progress and indicated that the military has reached an unprecedented level of jointness. In a June 1994 interim report to the Chairman of the Joint Chiefs of Staff, USACOM’s Commander noted that the Command’s first 6 months of transition into its new functional roles had been eventful and that the Command was progressing well in developing new methodologies to meet the geographic commands’ needs. He recognized that it would take time and the help of the service components to refine all the responsibilities relating to the new mission. He reported that USACOM’s vision and strategic plan had been validated and that the Command was on course and anticipated making even greater progress in the next 6 months. USACOM performed a second assessment in spring 1996, in response to a request from the Chairman of the Joint Chiefs of Staff for a review of the success of USACOM’s implementation plan at the 2-year point. The Command used Joint Vision 2010, the military’s long-range strategic vision, as the template for measuring its success, but the document does not provide specific measures for gauging improvements in operational capabilities. USACOM reported that, overall, it had successfully implemented its key assigned responsibilities and missions. It described its new functional responsibilities as “interrelated,” having a synergistic effect on the evolution of joint operations. It reported that it had placed major emphasis on its joint force trainer role and noted development of a three-tier training model. The Command described its joint force provider role as a five-step process, with adaptive joint force packaging no longer a critical component. Seeing the continuing evolution of its force provider role as a key factor in supporting Joint Vision 2010, USACOM assessed the implementation plan task as accomplished. The Command considered its joint force integrator role the least developed but the most necessary in achieving coherent joint operations and fulfilling Joint Vision 2010. Although the assessment covered only the advanced concept technology demonstrations segment of its integrator role, USACOM reported that it had also successfully implemented this task. As requested by USACOM’s Commander, USACOM staff assessed progress and problems in the Command’s major focus areas in early 1998. This self-assessment covered the Command’s directorate-level leadership responsible for each major focus area. 
An official involved in this assessment said statistical, quantifiable measures were not documented to support the progress ratings; however, critical and candid comments were made during the process. The assessments cited “progress” or “satisfactory progress” in 38 of 42 rated areas, such as command focus on joint training, advanced concept technology demonstration project management, and monitoring of low-density/high-demand asset tempos. Progress was judged “unsatisfactory” in four areas: (1) exercise requirements determination and worldwide scheduling process; (2) training and readiness oversight for assigned forces; (3) reserve component integration and training, and readiness oversight; and (4) institutionalizing the force provider process. This assessment was discussed within the Command and during reviews of major focus areas and was updated to reflect changes in command responsibilities. USACOM, like other unified commands, uses several mechanisms to report progress and issues to DOD leadership and the Congress. These include periodic commanders-in-chief conferences, messages and reports to or discussions with the Chairman of the Joint Chiefs of Staff, and testimony before the Congress. Minutes were not kept of the commanders-in-chief conferences, but we obtained Commander, USACOM, quarterly reports, which are to focus on the Command’s key issues. Reports submitted to the Secretary of Defense between May 1995 and April 1998 painted a positive picture of USACOM’s progress, citing activities in areas such as joint training exercises, theater missile defense, and advanced technology projects. The reports also covered operational issues but included little discussion of the Command’s problems in implementing its functional roles. For example, none of the reports discussed the wide opposition to adaptive joint force packaging or USACOM’s decision to change its approach, even though the Secretary of Defense approved the implementation plan for its functional roles, which included development of adaptive joint force packages. In congressional testimony in March 1997, the Commander of USACOM (1995-97) discussed the Command’s annual accomplishments, plans for the future, and areas of concern. The Commander noted that U.S. military operations had evolved from specialized joint operations to a level approaching synergistic joint operations. In 1998 testimony, the current USACOM Commander reported continued progress, describing the military as having reached “an unprecedented level of jointness.” USACOM’s ultimate goal is to advance joint warfighting to a level it has defined as “coherent” joint operations with all battle systems, communications systems, and information databases fully interoperable and linked by common joint doctrine. Figure 3.1 depicts the evolution from specialized and synergistic joint operations to coherent joint operations. At the conclusion of our review, USACOM was completing the development of a new strategic planning system to enhance its management of its major focus areas and facilitate strategic planning within the USACOM staff. Goals, objectives, and subobjectives were defined in each of its major focus areas, and an automated internal process was being established to help the Command track actions being taken in each area. The goals and objectives were designed to support the Command’s overall mission to maximize U.S. military capability through joint training, force integration, and deployment of ready forces in support of worldwide operations. 
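The goal, objective, and subobjective tracking just described lends itself to a simple weighted roll-up, in which weighted subobjective progress produces an objective score and weighted objective scores produce a goal score. The following is a minimal sketch of such a roll-up; the names, weights, and progress values are hypothetical and are not drawn from USACOM’s internal planning system.

# Minimal sketch of a weighted progress roll-up: subobjective progress rolls up
# to an objective score, and weighted objective scores roll up to a goal score.
# Names, weights, and progress values are hypothetical.

def weighted_score(items):
    """items: list of (weight, progress) pairs with progress in [0.0, 1.0]."""
    total_weight = sum(w for w, _ in items)
    return sum(w * p for w, p in items) / total_weight if total_weight else 0.0

# Each objective carries a weight and a list of weighted subobjectives.
goal = {
    "name": "Provide trained and ready joint forces",
    "objectives": [
        {"weight": 0.6, "subobjectives": [(0.5, 0.9), (0.5, 0.7)]},  # (weight, progress)
        {"weight": 0.4, "subobjectives": [(1.0, 0.5)]},
    ],
}

objective_scores = [(o["weight"], weighted_score(o["subobjectives"])) for o in goal["objectives"]]
print(f'{goal["name"]}: {weighted_score(objective_scores):.0%} of weighted objectives achieved')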
Table 3.1 provides examples of goals, objectives, and subobjectives in the joint force trainer, provider, and integrator major focus areas. The goals and the objectives and subobjectives necessary to achieve the goals are established by officials in each major focus area. The objectives and subobjectives are to be understandable, relevant, attainable, and measurable. Progress in achieving the subobjectives becomes the measures for the objective’s success, and progress on objectives is the measure of success in achieving a goal. The relative importance of each objective and subobjective is reflected in weights or values assigned to each and is used to measure progress. Objective and subjective assessments of progress are to be routinely made and reported. Command officials expect that in some areas progress will not be easy to measure and will require subjective judgments. USACOM officials believed the Command’s new planning system, which became operational on October 20, 1998, meets many of the expectations of the Government Performance and Results Act, which requires agencies to set goals, measure performance, and report on their accomplishments. The Command believed that actions it plans to adopt in major focus areas would ultimately improve the military capabilities of U.S. forces, the mission of the Command. The officials, however, recognized that the planning system does not include assessments or measures that can be used to evaluate the Command’s impact on military capabilities. Under the Results Act, agencies’ performance plans are to include performance goals and measures to help assess whether the agency is successful in accomplishing its general goals and missions. The Congress anticipated that the Results Act principles would be institutionalized and practiced at all organizational levels of the federal government. Establishing such performance measures could be difficult, but they could help USACOM determine what it needs to do to improve its performance. DOD has begun to implement the Results Act at all organizational levels, and the Secretary of Defense tasked subordinate organizations in 1998 to align their programs with DOD program goals established under the act. Recognizing that the development of qualitative and quantitative performance measures to assess mission accomplishment has been slow, USACOM has provided training to its military officers on performance objectives. USACOM officials said that while the Command has begun to take steps to implement the principles of the Act, they believed the Command needs additional implementation guidance from the Office of the Secretary of Defense. In the absence of specific assessments of USACOM’s impact on joint operations, we asked representatives from the Joint Staff, USACOM and its service component commands, and supported geographic commands for their views on USACOM’s value and contributions in advancing DOD’s joint military capabilities. Opinions varied by command and functional role and ranged from USACOM having little or no impact to being a great contributor and having a vital role. Generally speaking, Joint Staff officials considered USACOM to be of great value and performing an essential function while views among the geographic commands were more reserved. USACOM and its service components believed the Command’s joint task force headquarters training was among the best joint training available. 
This training has allowed USACOM components’ three-star commanders and their senior staffs to be trained without fielding thousands of troops and to concentrate on joint tasks considered essential to accomplishing a mission anywhere in the world. The Commander of USACOM cited this training as the best example of USACOM’s success in affecting joint operations. He told us that USACOM has secured the funding it needs to do this training and has developed what he described as a “world-class” joint training program. Representatives of the geographic commands we visited believed USACOM’s joint task force commander training has provided good joint experience to CONUS-based forces. They believed this training has enabled participants to perform more effectively as members of a joint task force staff. While these commands spoke well of the training, they have been slow to avail themselves of it and could not attribute any improvement in joint task force operations to it. The commands have not taken advantage of this training for several reasons. First, other geographic commands considered providing headquarters’ staff joint task force commander training their responsibility and were reluctant to turn to USACOM for assistance. Second, USACOM’s joint task force commander training is conducted at the Command’s Joint Training Analysis and Simulation Center in Suffolk, Virginia. Thus, geographic commands would have to make a significant investment to deploy several hundred headquarters staff for up to 18 days to complete the three phases of USACOM’s training. Third, the commands are not confident that the training at the Center provides a true picture of the way they would conduct an operation. That is, the scenarios USACOM uses may have limited application in the other geographic commands’ regional areas of operational responsibility. The commands have, therefore, preferred to train their own forces, with assistance from the Joint Warfighting Center. Representatives from this Center have gone to the commands and assisted them with their training at no cost to the command. In October 1998, the Center was assigned to USACOM. USACOM officials believed this would enhance the training support provided by the Command to geographic commands (see ch. 4). Indications are that the geographic commands are beginning to more fully use USACOM as a training support organization. According to the Commander of USACOM, the current generation of commanders of the geographic commands has been more receptive to USACOM support than their predecessors. Also, as USACOM adjusts its training to make it more relevant to other geographic commanders, the commands are requesting USACOM’s support. In 1998, USACOM sent mobile training teams to the U.S. Central Command in support of an operation in Kuwait. The Command was also supporting the U.S. European Command in one of its major training exercises. U.S. Southern Command has requested support from USACOM for one of its major Caribbean joint exercises and asked the Command to schedule the training exercise for the next 3 years. Regarding interoperability training, USACOM’s component commands believed the Command should be more involved in planning and executing training exercises. Most of this training consisted of existing service exercises selected to be used as joint interoperability training. Some service component officials believed that without sufficient USACOM influence, the sponsoring services would be inclined to make these exercises too service-specific or self-serving.
For example, the Navy’s annual joint task force exercise has basically been a preparation for a carrier battle group to make its next deployment. The Air Force has participated, but Air Combat Command officials told us they did not believe they gained much joint training experience from the exercise. USACOM officials recognize that the Command has not given interoperability training the same level of emphasis as its joint task force training. They believed, however, that components’ use of the recently developed universal joint interoperability tasks list in planning this training would result in more joint orientation to the training. As the major joint force provider, USACOM was valued by the Joint Staff, other geographic commands, and its service component commands. The Joint Staff believed that USACOM, as a single joint command assigned the majority of the four services’ forces, has provided a more efficient way of obtaining forces to meet the mission needs of the other geographic commands. Prior to establishing USACOM, the Joint Staff dealt individually with each of the services to obtain the necessary forces. Now, the Joint Staff can go to USACOM, which can coordinate with its service component commands to identify available forces with the needed capabilities and recommend force options. The Chairman of the Joint Chiefs of Staff (1993-97) told us that forces have never been provided as efficiently as USACOM has done it and that forces were better trained and equipped when they arrived where needed. The geographic commands we visited that USACOM primarily supports viewed the Command as a dependable and reliable force provider. The U.S. Central Command stated that forces provided by USACOM have been well trained and have met the Command’s needs. The Command described USACOM forces as having performed exceptionally well in Operation Desert Thunder, in response to Iraq’s denial of access to its facilities to U.N. weapon inspectors in February 1998. The Command also stated that USACOM could provide forces more tailored to fighting in its area of responsibility than the U.S. European or Pacific Commands because USACOM forces have routinely deployed for exercises and missions in support of ongoing operations in their area. Similarly, U.S. European Command officials said that USACOM has been responsive to their Command’s force needs and was doing a good job as a force provider. The U.S. European Command also noted that USACOM has ensured equitable tasking among CONUS-based forces and has allowed the European Command to focus on the operation at hand. The U.S. Southern Command, with few forces of its own, believed that the withdrawal of U.S. forces from Panama throughout 1999 would make the Southern Command more dependent on USACOM for forces to support its exercise and operations requirements. In discussing its contributions as a major provider of forces, USACOM believed that it adds value by providing the Joint Staff with informed force selection inputs based on all capable forces available from across its service components. For example, the European Command requested that an Air Force engineering unit build a bridge in 1997. USACOM identified a Navy Seabees unit already deployed in Spain as an option. The European Command agreed to use this unit. USACOM believed that it has supported other geographic commands by providing well-trained forces and alerting them of any potential training needs when forces are deployed. 
USACOM and its service component commands viewed the Command as an “honest broker” that has drawn upon the capabilities of all the services, as necessary, to meet the mission requirements of the geographic commands. As pointed out by USACOM’s Commander, while USACOM has not been involved in all deployment decisions concerning its assigned forces—such as the Navy’s carrier battle groups or large Army units—and was not in a position to deny an available force to a supported command, the Command has served as a clearinghouse for high-demand forces. For example: USACOM had provided optometrists for its mobile training teams deployed to Africa to train Africans for peacekeeping activities. Optometrists were needed to diagnose eye problems of African troops, who experienced difficulties seeing with night optical equipment. The Forces Command was unable to provide the needed personnel beyond the first deployment, so USACOM tasked its Atlantic Fleet component to provide personnel for the redeployment. In May 1997, an aerostat (radar balloon) that provided coverage in the Florida Straits went down. USACOM tasked the Navy’s Atlantic Fleet to provide radar coverage every weekend with an E-2C aircraft squadron. When the balloon was not replaced as expected and the requirement continued, the Atlantic Fleet asked for relief from USACOM. USACOM adjudicated resources with the Air Combat Command so that the Air Force’s E-3 aircraft would provide coverage for half of the time. USACOM’s service component commands also saw the benefit in having a single unified command act as an arbitrator among themselves. USACOM can arbitrate differences between two of its component commands that can provide the same capability. It can provide rationale as to why one should or should not be tasked to fill a particular requirement and make a decision based on such things as prior tasking and operating and personnel tempos. Its components also saw USACOM as their representative on issues with DOD and other organizations. In representing its components, for example, USACOM handled politically sensitive arrangements over several months with a U.S. embassy, through the State Department, to provide military support to a foreign government for a counterdrug operation conducted between July 1997 and February 1998. USACOM’s involvement allowed its Air Force component, the Air Combat Command, to limit its involvement in the arrangements and concentrate on sourcing the assets and arranging logistics for the operation. The Commander of USACOM told us he considered joint force integration to be the Command’s most important functional role. He believed that over the next 2 years the Command’s integration efforts would gain more recognition for enhancing joint operational capabilities than its efforts in joint training. He said the Command was beginning to gain access to critical “levers of progress,” such as the Joint Requirements Oversight Council, which would enhance its influence. He cited the Command’s development—in collaboration with other geographic commands—of a theater ballistic missile defense capstone requirements document and its August 1998 approval by the Council as a demonstration of the Command’s growing influence and impact. This document is to guide doctrine development and the acquisition programs for this joint mission. While approval was a very significant step for jointness, it raised important questions, including who will pay for joint requirements in service acquisition programs.
The services have opposed USACOM’s role and methodology in developing joint requirements and did not believe they should be responsible for funding costs associated with the joint requirements. The USACOM Commander believed the Command has made considerable progress in developing the process by which joint force integration is accomplished. He cited the Command’s advanced concept technology demonstration projects that have a joint emphasis as one of its primary means of enhancing force integration. He said, for example, that the Command’s high-altitude endurance unmanned aerial vehicle project should soon provide aerial vehicles that give warfighters near-real-time, all-weather tactical radar and optical imagery. Views and knowledge about USACOM’s integration role varied among the geographic commands we visited. Few commands were knowledgeable of USACOM’s efforts at integration but perceived them to be closely aligned with the Command’s joint force trainer and provider functions. While these commands were aware that USACOM had responded to some specific opportunities (for example, theater ballistic missile defense) in its integrator role, they described the Command’s involvement in refining joint doctrine and improving systems interoperability as a responsibility shared among the commands. A representative of the Joint Staff’s Director for Operational Plans and Interoperability told us USACOM’s integrator role, as originally defined, faded along with adaptive joint force packages. He believed the Command’s staff had worked hard to redefine this role and give it a meaningful purpose and considered the Command as adding value and performing a vital mission in its redefined role. USACOM’s evolving functional roles as joint force trainer, provider, and integrator have not been fully embraced throughout DOD. Except for USACOM’s joint force trainer role, its functional roles and responsibilities have not been fully incorporated into DOD joint publications or fully accepted or understood by other commands and the military services. USACOM’s functional responsibilities are expanding with the recent assignment of five additional joint staff activities, a new joint experimentation role, and ownership of the joint deployment process. USACOM’s Commander believes these will have a positive impact on its existing functional roles. Over time, the Joint Staff and USACOM have incorporated the Command’s joint force trainer role into joint publications. These documents provide a common understanding among DOD organizations of USACOM’s role in the joint training of forces. USACOM’s training role is identified in the Chairman, Joint Chiefs of Staff, joint training policy and discussed in detail in the Chairman’s joint training manual and joint training master plan. The Chairman’s joint training master plan makes USACOM responsible for the joint training of assigned CONUS-based forces, preparing them to deploy worldwide and participate as members of a joint task force. It also tasks the Command to train joint task forces not trained by other geographic commands. As defined in the joint training manual, USACOM develops the list of common operational joint tasks, with assistance from the geographic commands, the Joint Warfighting Center, and the Joint Staff. These common tasks, which are used by USACOM to train CONUS-based forces, have been adopted by the Chairman as a common standard for all joint training. 
To further clarify its training role, USACOM issued a joint training plan that defines its role, responsibilities, and programs for the joint training of its assigned forces. This plan also discusses the Command’s support to the Chairman’s joint training program and other geographic commands’ joint training. USACOM has also developed a joint task force headquarters master training guide that has been disseminated to all geographic commands and is used to develop training guides. While USACOM’s force provider and integrator roles are described in broad terms in the Unified Command Plan, these roles have not been incorporated into joint guidance and publications. This lack of inclusion could hinder a common understanding about these roles and what is expected from USACOM. For example, key joint guidance for planning and executing military operations—the Joint Operational Planning and Execution System—does not specifically discuss USACOM’s role as a force provider even though the Command has the preponderance of U.S. forces. The lack of inclusion in joint guidance and publications also may contribute to other DOD units’ resistance or lack of support and hinder sufficient discussion of these roles in military academic education curriculums, which use only approved doctrine and publications for class instruction. Internally, USACOM’s provider role is generally defined in the Command’s operations order and has recently been included as a major focus area. However, USACOM has not issued a standard operating procedure for its provider role. A standard operating procedure contains instructions covering those features of operations that lend themselves to a definite or standardized procedure without the loss of effectiveness. Such instructions delineate for staffs and organizations how they are to carry out their responsibilities. Not having them has caused some difficulties and inefficiencies among the force provider staff, particularly newly assigned staff. USACOM officials stated that they plan to create a standard operating procedure but that the effort is an enormous task and has not been started. USACOM’s integrator role is defined in the Command’s operations order and included as a major focus area. The order notes that the training and providing processes do much to achieve the role’s stated objective of enhanced joint capabilities but that effectively incorporating new technologies occurs primarily through the integration process. Steps in the integration process include developing a concept for new systems, formulating organizational structure, defining equipment requirements, establishing training, and developing and educating leaders. The major focus area for the integration role defines the role’s three objectives and tasks within each to enhance joint force operations. The Secretary of Defense continued to expand USACOM’s roles and responsibilities in 1998, assigning the Command several activities, the new role of joint experimentation, and ownership of the joint deployment process. These changes significantly expand the Command’s size and responsibilities. Additional changes that will further expand the Command’s roles and responsibilities have been approved. Effective October 1998, five activities, formerly controlled by the Chairman of the Joint Chiefs of Staff, and about 1,100 of their authorized personnel were transferred to USACOM. 
Table 4.1 identifies the activities and provides information on their location, missions, and fiscal year 1999 budget request and authorized military and civilian positions. According to USACOM’s Commander, these activities will significantly enhance the Command’s joint training and integration efforts. Each of the transferred activities has unique capabilities that complement each other and current USACOM organizations and activities. For example, by combining the Joint Warfare Analysis Center’s analytical capabilities with USACOM’s cruise missile support activity, the Command could make great strides in improving the capability to attack targets with precision munitions. Also, having the Joint Warfighting Center work with USACOM’s Joint Training and Simulation Center is anticipated to improve the joint training program, enhance DOD modeling and simulation efforts, and help to develop joint doctrine and implement Joint Vision 2010. USACOM’s Commander also believed the Command’s control of these activities would enhance its capability to analyze and develop solutions for interoperability issues and add to its ability to be the catalyst for change it is intended to be. The transfer of the five activities was driven by the Secretary of Defense’s 1997 Defense Reform Initiative report, which examined approaches to streamline DOD headquarters organizations. Transferring the activities to the field is expected to enable the Joint Staff to better focus on its policy, direction, and oversight responsibilities. The Chairman also expects the transfer will improve joint warfighting and training by strengthening USACOM’s role and capabilities for joint functional training support, joint warfighting support, joint doctrine, and Joint Vision 2010 development. USACOM plans to provide a single source for joint training and warfighting support for the warfighter, with a strong role in lessons learned, modeling and simulation, doctrine, and joint force capability experimentation. USACOM has developed an implementation plan and coordinated it with the Joint Staff, the leadership of the activities, other commands, and the military services. The intent is to integrate these activities into the Command’s joint force trainer, provider, and integrator responsibilities. Little organizational change is anticipated in the near term, with the same level and quality of support by the activities provided to the geographic commands. The Joint Warfighting Center and USACOM’s joint training directorate will merge to achieve a totally integrated joint training team to support joint and multinational training and exercises. Under the plan, USACOM also expects to develop the foundation for “one stop shopping” support for geographic commanders both before and during operations. In May 1998, the Secretary of Defense expanded USACOM’s responsibilities by designating it executive agent for joint concept development and experimentation, effective October 1998. The charter directs USACOM to develop and implement an aggressive program of experimentation to foster innovation and the rapid fielding of new concepts and capabilities for joint operations and to evolve the military force through the “prepare now” strategy for the future. Joint experimentation is intended to facilitate the development of new joint doctrine, organizations, training and education, material, leadership, and people to ensure that the U.S. armed forces can meet future challenges across the full range of military operations. 
The implementation plan for this new role provides estimates of the resources required for the joint experimentation program; defines the experimentation process; and describes how the program relates to, supports, and leverages the activities of the other components of the Joint Vision 2010 implementation process. The plan builds upon and mutually supports existing and future experimentation programs of the military services, the other unified commands, and the various defense research and development agencies. The plan was submitted to the Chairman of the Joint Chiefs of Staff in July 1998, with a staffing estimate of 127 additional personnel by September 1999, increasing to 171 by September 2000. In November 1998, USACOM had about 27 of these people assigned and projected it would have 151 assigned by October 2000. USACOM worked closely with the Office of the Secretary of Defense and the Joint Staff to establish the initial funding required to create the joint experimentation organization. USACOM requested about $41 million in fiscal year 1999, increasing to $80 million by 2002. Of the $41 million, $30 million was approved: $14.1 million was being redirected from two existing joint warfighting programs, and $15.9 million was being drawn from sources to be identified by the Office of the Under Secretary of Defense (Comptroller). The Secretary of Defense says DOD is committed to an aggressive program of experimentation to foster innovation and rapid fielding of new joint concepts and capabilities. Support by the Secretary and the Chairman of the Joint Chiefs of Staff is considered essential, particularly in areas where USACOM is unable to gain the support of the military services who questioned the size and cost of USACOM’s proposed experimentation program. Providing USACOM the resources to successfully implement the joint experimentation program will be an indicator of DOD’s commitment to this endeavor. The Congress has expressed its strong support for joint warfighting experimentation. In the National Defense Authorization Act for Fiscal Year 1999 (P.L. 105-261), it was stated that it was the sense of the Congress that the Commander of USACOM should be provided appropriate and sufficient resources for joint warfighting experimentation and the appropriate authority to execute assigned responsibilities. We plan to issue a report on the status of joint experimentation in March 1999. In October 1998, the Secretary of Defense, acting on a recommendation of the Chairman of the Joint Chiefs of Staff, made USACOM owner of the joint deployment process. As process owner, USACOM is responsible for maintaining the effectiveness of the process while leading actions to substantially improve the overall efficiency of deployment-related activities. The Joint Staff is to provide USACOM policy guidance, and the U.S. Transportation Command is to provide transportation expertise. USACOM was developing a charter to be coordinated with other DOD components, and provide the basis for a DOD directive. The deployment process would include activities from the time forces and material are selected to be deployed to the time they arrive where needed and then are returned to their home station or place of origin. According to the Secretary of Defense, USACOM’s responsibilities as joint trainer, force provider, and joint force integrator of the bulk of the nation’s combat forces form a solid foundation for USACOM to meet joint deployment process challenges. 
The Secretary envisioned USACOM as a focal point to manage collaborative efforts to integrate mission-ready deploying forces into the supported geographic command’s joint operation area. USACOM officials considered this new responsibility to be a significant expansion of the Command’s joint force provider role. They believed that in their efforts to make the deployment process more efficient there would be opportunities to improve the efficiency of its provider role. As executive agent of the Secretary of Defense for the joint deployment process, USACOM’s authority to direct DOD components and activities to make changes to the deployment process has yet to be defined. A Joint Staff official recognized this as a possible point of contention, particularly among the services, as the draft charter was being prepared for distribution for comment in February 1999. In October 1998, the Deputy Secretary of Defense approved the realignment or restructuring of several additional joint activities affecting USACOM. These include giving USACOM representation in the joint test and evaluation program; transferring the services’ combat identification activities to USACOM; and assigning a new joint personnel recovery agency to USACOM. USACOM and the Chairman of the Joint Chiefs of Staff believed these actions strengthened USACOM’s joint force trainer and integrator roles as well as its emerging responsibilities for joint doctrine, warfighting concepts, and joint experimentation. USACOM representation on the joint test and evaluation program, which was to be effective by January 1999, provides joint representation on the senior advisory council, planning committee, and technical board for test and evaluation. Command and control of service combat identification programs and activities provide joint evaluation of friend or foe identification capabilities. The newly formed joint personnel recovery agency provides DOD personnel recovery support by combining the joint services survival, evasion, resistance, and escape agency with the combat search and rescue agency. USACOM is to assume these responsibilities in October 1999. Retaining the effectiveness of America’s military when budgets are generally flat and readiness and modernization are costly requires a fuller integration of the capabilities of the military services. As the premier trainer, provider, and integrator of CONUS-based forces, USACOM has a particularly vital role if the U.S. military is to achieve new levels of effectiveness in joint warfighting. USACOM was established to be a catalyst for the transformation of DOD from a military service-oriented to a joint-oriented organization. But change is difficult and threatening and it does not come easy, particularly in an organization with the history and tradition of DOD. This is reflected in the opposition to USACOM from the military services, which provide and equip the Command with its forces and maintain close ties to USACOM’s service component commands, and from geographic commands it supports. As a result of this resistance, USACOM changed its roles as an integrator and provider of forces and sought new opportunities to effect change. Indications are that the current geographic commanders may be more supportive of USACOM than past commanders have been, as evidenced by their recent receptivity to USACOM’s support in development and refinement of their joint training programs. Such support is likely to become increasingly important to the success of USACOM. 
During its initial years, the Command made its greatest accomplishments in areas where there was little resistance to its role. The Commander of USACOM said that the Command would increasingly enter areas where others have a vested interest and that he would therefore expect the Command to encounter resistance from the military services and others in the future as it pursues actions to enhance joint military capabilities. While USACOM has taken actions to enhance joint training, to meet the force requirements of supported commands, and to improve the interoperability of systems and equipment, the value of its contributions to improved joint military capabilities is not clearly discernible. If the Command develops performance goals and measures consistent with the Results Act, it could assess and report on its performance in accomplishing its mission of maximizing military capabilities. The Command may need guidance from the Secretary of Defense in the development of these goals and measures. In addition to its evolving roles as joint force trainer, provider, and integrator, USACOM is now taking on important new, related responsibilities, including the management of five key joint activities. With the exception of training, these roles and responsibilities, both old and new, are largely undefined in DOD directives, instructions, and other policy documents, including joint doctrine and guidance. The Unified Command Plan, a classified document that serves as the charter for USACOM and the other unified commands, briefly identifies USACOM’s functional roles but does not define them in any detail. This absence of a clear delineation of the Command’s roles, authorities, and responsibilities could contribute to a lack of universal understanding and acceptance of USACOM and impede the Command’s efforts to enhance the joint operational capabilities of the armed forces. While USACOM was established in 1993 by the Secretary of Defense with the open and strong leadership, endorsement, and support of the Chairman of the Joint Chiefs of Staff, General Colin Powell, the Command has not always received the same strong visible support. Without such support, USACOM’s efforts to bring about change could be throttled by other, more established and influential DOD elements with priorities that can compete with those of USACOM. Indications are that the current DOD leadership is prepared to support USACOM when it can demonstrate a compelling need for change. The adoption of the USACOM-developed theater ballistic missile defense capstone requirements document indicates that this rapidly evolving command may be gaining influence and support as the Secretary of Defense’s and Chairman of the Joint Chiefs of Staff’s major advocate for jointness within the Department of Defense. It is important that USACOM be able to evaluate its performance and impact in maximizing joint military capabilities. Such assessments, while very difficult to make, could help the Command better determine what it needs to do to enhance its performance. We, therefore, recommend that the Secretary of Defense direct the Commander in Chief of USACOM to adopt performance goals and measures that will enable the Command to assess its performance in accomplishing its mission of maximizing joint military capabilities.
Additionally, as USACOM attempts to advance the evolution of joint military capabilities and its role continues to expand, it is important that the Command’s roles and responsibilities be clearly defined, understood, and supported throughout DOD. Only USACOM’s roles and responsibilities in joint training have been so defined in DOD policy and guidance documents. Therefore, we recommend that the Secretary of Defense fully incorporate USACOM’s functional roles, authorities, and responsibilities in appropriate DOD directives and publications, including joint doctrine and guidance. In written comments (see app. VII) on a draft of this report, DOD concurred with the recommendations. In its comments DOD provided additional information on USACOM’s efforts to establish performance goals and objectives and DOD’s efforts to incorporate USACOM’s functional roles, authorities, and responsibilities in appropriate DOD directives and publications. DOD noted that as part of USACOM’s efforts to establish performance goals and objectives, the Command has provided training on performance measures to its military officers. Regarding our recommendation to incorporate USACOM’s functional roles, authorities, and responsibilities in appropriate DOD directives and publications, DOD said the 1999 Unified Command Plan, which is currently under its cyclic review process, will further define USACOM’s functional roles as they have evolved over the past 2 years. It also noted that key training documents have been, or are being, updated. We believe that in addition to the Unified Command Plan and joint training documents, the joint guidance for planning and executing military operations—the Joint Operational Planning and Execution System process—should discuss USACOM’s role as the major provider of forces.
Pursuant to a congressional request, GAO provided information on Department of Defense (DOD) efforts to improve joint operations, focusing on: (1) the U.S. Atlantic Command's (USACOM) actions to establish itself as the joint force trainer, provider, and integrator of most continental U.S.-based forces; (2) views on the value of the Command's contributions to joint military capabilities; and (3) recent expansion of the Command's responsibilities and its possible effects on the command. GAO noted that: (1) USACOM has advanced joint training by developing a state-of-the-art joint task force commander training program and simulation training center; (2) the Command has also progressed in developing other elements of joint training, though not at the same level of maturity or intensity; (3) however, USACOM has had to make substantive changes in its approach to providing and integrating joint forces; (4) its initial approach was to develop ready force packages tailored to meet the geographic commands' spectrum of missions; (5) this was rebuffed by the military services and the geographic commands, which did not want or value USACOM's proactive role and by the Chairman of the Joint Chiefs of Staff (1993-97), who did not see the utility of such force packages; (6) by late 1995, USACOM reverted to implementing a force-providing process that provides the Command with a much more limited role and ability to affect decisions and change; (7) the Command's force integrator role was separated from force providing and also redirected; (8) the establishment of performance goals and measures would help USACOM assess and report on the results of its efforts to improve joint military capabilities; (9) Congress anticipated that the Government Performance and Results Act principles would be institutionalized at all organizational levels in federal agencies; (10) the Command's recently instituted strategic planning system does not include performance measures that can be used to evaluate its impact on the military capabilities of U.S. forces; (11) the Office of the Secretary of Defense, the Joint Staff, and USACOM believed the Command was providing an important focus to the advancement of joint operations; (12) the views of the geographic commands were generally more reserved, with some benefitting more than others from USACOM's efforts; (13) the Command's new authorities are likely to increase its role and capabilities to provide training and joint war fighting support and enhance its ability to influence decisions within the department; and (14) although USACOM's roles are expanding and the number of functions and DOD organizational elements the Command has relationships with is significant, its roles and responsibilities are still largely not spelled out in key DOD policy and guidance, including joint doctrine, guidance, and other publications.
You are an expert at summarizing long articles. Proceed to summarize the following text: Three main types of pipelines carry hazardous liquid and natural gas from producing wells to end users (residences and businesses) and are managed by about 2,500 operators: Gathering pipelines collect hazardous liquid and natural gas from production areas and transport the products to processing facilities, which in turn refine and send the products to transmission pipelines. These pipelines tend to be located in rural areas but can also be located in urban areas. PHMSA estimates there are 200,000 miles of natural gas gathering pipelines and 30,000 to 40,000 miles of hazardous liquid gathering pipelines. Transmission pipelines carry hazardous liquid or natural gas, sometimes over hundreds of miles, to communities and large-volume users, such as factories. Transmission pipelines tend to have the largest diameters and operate at the highest pressures of any type of pipeline. PHMSA has estimated there are more than 400,000 miles of hazardous liquid and natural gas transmission pipelines across the United States. (See fig. 1.) Distribution pipelines then split off from transmission pipelines to transport natural gas to end users—residential, commercial, and industrial customers. There are no hazardous liquid distribution pipelines. PHMSA has estimated there are roughly 2 million miles of natural gas distribution pipelines, most of which are intrastate pipelines. PHMSA administers the national regulatory program to ensure the safe transportation of hazardous liquid and natural gas by pipeline, including developing safety requirements that all pipeline operators regulated by PHMSA must meet. In 2012, the agency’s budget was $201 million, which was used, in part, to employ over 200 staff in its pipeline safety program. About half of the pipeline safety program staff inspects hazardous liquid and gas pipelines for compliance with safety regulations. Besides PHMSA, over 300 state inspectors help oversee pipelines and ensure safety. State and federal officials may also investigate specific pipeline incidents to determine the reason for the pipeline failure and to take enforcement actions, when necessary. PHMSA enforces two general sets of pipeline safety requirements. The first are minimum safety standards that cover specifications for the design, construction, testing, inspection, operation, and maintenance of pipelines. The second set of safety requirements are part of a supplemental risk-based regulatory program termed “integrity management.” Under transmission pipeline integrity management programs, operators are required to systematically identify and mitigate risks to pipeline segments—discrete sections of the pipeline system separated by valves that can stop the flow of product—that are located in high-consequence areas where an incident would have greater consequences for public safety or the environment. To ensure operators comply with minimum safety standards and integrity management requirements, PHMSA conducts inspections in partnership with state pipeline safety agencies. Inspections may focus on specific pipeline segments or aspects of an operator’s safety program, or both. According to PHMSA, officials conduct an inspection for each operator at least once every 5 to 7 years, but may conduct additional inspections based on safety risk or at the discretion of PHMSA or state officials. 
PHMSA is authorized to take enforcement actions against operators, including issuing warning letters, notices of probable violation, notices of amendment, notices of proposed safety order, and corrective action orders, and imposing civil penalties. Transporting hazardous liquids and natural gas by pipelines is associated with far fewer fatalities and injuries than other modes of transportation. From 2007 to 2011, there was an average of about 14 fatalities per year for all pipeline incidents reported to PHMSA, including an average of about 2 fatalities per year resulting from incidents on hazardous liquid and natural gas transmission pipelines. In comparison, in 2010, 3,675 fatalities resulted from incidents involving large trucks and 730 additional fatalities resulted from railroad incidents. Yet risks to pipelines exist, such as corrosion and third-party excavation, which can damage a pipeline’s integrity and result in leaks and ruptures. A leak is a slow release of a product over a relatively small area. A rupture is a breach in the pipeline that may occur suddenly; the product may then ignite, resulting in an explosion. According to pipeline operators we met with, of the two types of pipeline incidents, leaks are more common but generally cause less damage. Ruptures are relatively rare but can have much higher consequences because of the damage that can be caused by an associated explosion. According to PHMSA, industry, and state officials, responding to either a hazardous liquid or natural gas pipeline incident typically includes steps such as detecting that an incident has occurred, coordinating with emergency responders, and shutting down the affected pipeline segment. (See fig. 2.) Under PHMSA’s minimum safety standards, operators are required to have a plan that covers these steps for all of their pipeline segments and to follow that plan during an incident. Officials from PHMSA and state pipeline safety offices perform relatively minor roles during an incident, as they rely on operators and emergency responders to take actions to mitigate the consequences of such events. Following an incident, operators must report incidents that meet certain thresholds—including incidents that involve a fatality or injury, excessive property damage or product release, or an emergency shutdown—to the federal National Response Center, as well as conduct an investigation to identify the root cause and lessons learned. Federal and state authorities may also use their discretion to investigate some incidents, which can involve working with operators to determine the cause of the incident. If necessary, authorities will take steps to correct deficiencies in operator safety programs, including taking enforcement actions. While prior research shows that most of the fatalities and damage from an incident occur in the first few minutes following a pipeline rupture, operators can reduce some of the consequences by taking actions that include closing valves that are spaced along the pipeline to isolate segments. The amount of time it takes to close a valve depends upon the equipment installed on the pipeline. For example, valves with manual controls (referred to as “manual valves”) require a person to arrive on site and either turn a wheel crank or activate a push-button actuator.
Valves that can be closed without a person located at the valve location (referred to as “automated valves”) include both remote-control valves, which can be closed via a command from a control room, and automatic-shutoff valves, which can close without human intervention based on sensor readings. (See fig. 3.) Automated valves generally take less time to close than manual valves. PHMSA’s minimum safety standards dictate the spacing of all valves, regardless of type of equipment installed to close them, while integrity management regulations require that transmission pipeline operators conduct a risk assessment for high-consequence areas that includes the consideration of automated valves. The ability of transmission pipeline operators to respond to incidents, such as leaks and ruptures, is affected by a number of variables—some of which are under operators’ control—resulting in variances in response time; for a given incident, that time can range from minutes to days. Several states and industry organizations have developed performance-based requirements for operators to meet in responding to incidents. PHMSA has some performance-based requirements, but its current performance goal related to incident response is not well defined. More precise performance measures and targets could lead to improved response times and less damage from incidents in some cases. However, PHMSA would need better data on incidents to determine the feasibility of such an approach. According to PHMSA officials, pipeline safety officials, and industry stakeholders and operators, multiple variables—some controllable by transmission pipeline operators—can influence the ability of operators to respond quickly to an incident. Ensuring a quick response is important because, according to pipeline operators and industry stakeholders, reducing the amount of time it takes to respond to an incident can also reduce the amount of property and environmental damage stemming from an incident and, in some cases, the number of fatalities and injuries. For example, several natural gas pipeline operators noted that a faster incident response time could reduce the amount of property damage from secondary fires (after an initial pipeline rupture) by allowing fire departments to extinguish the fires sooner. In addition, hazardous liquid pipeline operators told us that a faster incident response time could result in lower costs for environmental remediation efforts and less product lost. We identified five variables that can influence incident response time and that are within an operator’s control: Leak detection capabilities. How quickly a leak is detected affects how soon an operator can initiate a response. Pipeline operators must perform a variety of leak detection activities to monitor their systems and identify leaks. These activities commonly include periodic external monitoring, such as aerial patrols of the pipeline, as well as continuous internal monitoring, such as measuring the intake and outtake volumes or pressure flows on the pipeline. In addition, pipeline operators must conduct public awareness programs for those living near pipeline facilities about how to recognize, respond to, and report pipeline emergencies; these programs can influence how quickly an operator becomes aware of an incident. Attempting to confirm an incident can also affect response time.
Pipeline operators may prefer to have two sources of information to confirm an incident, such as data from a pipeline sensor and a visual confirmation, especially if shutting down the system is a likely response to the incident. Natural gas pipeline operators in particular generally seek to confirm an incident before a shutdown, as shutdowns interrupt the gas flow and can cut off service to their customers. Location of qualified operator response personnel. The proximity of the operator’s response personnel to a facility or shutoff valve can affect the response time. Response personnel who have a greater distance to travel to the facility or valve site can take longer to establish an incident command center or to close manual valves. Along with proximity, incident response time depends on whether qualified operator response personnel—those who are trained and are authorized to take necessary action, such as closing manual valves—are dispatched. Type of valves. The type of valve an operator has installed on a pipeline segment can affect how quickly the segment can be isolated. Automated valves, which can be closed automatically or remotely, can shorten incident response time compared to manual valves, which require that personnel travel to the valve site and turn a wheel crank or activate a push-button actuator to close the valve. However, if affected valves happen to be located at or close to facilities where personnel are permanently stationed, the type of valve could be less critical in influencing incident response time. Control room management. Clear operating policies and shutdown protocols for control room personnel can influence response time to incidents. For example, incident response time might be reduced if control room personnel have the authority to shut down a pipeline or facility if a leak is suspected, and are encouraged to do so. A few of the operators we met with told us that while in the past it was a common practice in the industry to avoid shutdowns unless absolutely necessary, the practice now for these operators is to shut down the line if there is any doubt about safety. An official from one natural gas pipeline operator told us that his company instructs control room personnel that they will not suffer repercussions from shutting down a line for safety reasons. Another official from a hazardous liquid pipeline operator told us that the authority to shut down is at the control room level and that even personnel in the field can make the call to shut down a line. Relationships with local first responders. Operators that have already established effective communications with local first responders—such as fire and police departments—may respond more quickly during emergencies. For example, one natural gas pipeline operator told us that during one incident, the local first responders had turned to the operator personnel for direction on how to respond to a rupture. As a result, the operator said that one of the lessons learned was that the company needed to conduct more emergency response exercises, such as mock drills, with the local first responders so the responders would know their roles and responsibilities. We identified four other variables that influence a pipeline operator’s ability to respond to an incident but are beyond an operator’s control: Type of release. The type of release—leak or rupture—can influence how quickly an operator responds to an incident.
Leaks are generally a slow release of product over a small area, which can go undetected for long periods. Once a leak is detected, it can take additional time to confirm the exact location. Ruptures, which usually produce more significant changes in the external or internal conditions of the pipeline, are typically easier to detect and locate. Time of day. The time of day when an incident occurs can affect incident response time. The operator’s response personnel may be delayed in reaching facilities in urban or suburban areas during peak traffic times. Conversely, if an incident occurs during the evening or on a weekend, the operator’s response personnel could be able to reach the facility more quickly, because of lighter traffic. For example, one natural gas pipeline operator told us about an incident that occurred on a Saturday afternoon, which meant that traffic did not delay response personnel traveling to the scene. Weather conditions. Weather conditions can affect how quickly an operator can respond to an incident. For example, one natural gas pipeline operator described an incident caused by a hurricane’s storm surge that pushed debris into the pipeline at a facility, and flooding prevented the response personnel from reaching the site for several days, during which time the pipe continued to leak gas. Winter conditions can also make it more difficult for the operator’s response personnel to reach a facility or to access valve sites in remote areas. As another example, windy conditions can disperse natural gas and make it hard to detect a leak. Other operators’ pipeline in the same area. If two or more operators own pipeline in a shared right of way, determining whose system is affected can increase incident response time. Operators may delay responding if they have not confirmed that the incident is on their pipeline. For example, one natural gas pipeline operator told us about an incident that took 2 days to repair because when their personnel first detected a leak, the personnel initially contacted another operator, whose line crossed over theirs, to make sure the leak was not the other operator’s. Operators we spoke with stated that the amount of time it takes to respond to an incident can depend on all of the variables listed above and can range from several minutes to days (see table 1). We and others have recommended that the federal government move toward performance-based regulatory approaches to allow those being regulated to determine the most appropriate way to achieve desired, measurable outcomes. For example, Executive Order 13563 calls for improvements to the nation’s regulatory system, including the use of the best, most innovative and least burdensome tools for achieving regulatory ends. We have also previously reported on the benefits of a performance-based framework, which helps agencies focus on achieving outcomes. Such a framework should include: 1) national goals; 2) performance measures that are linked to those national goals; and 3) appropriate performance targets that promote accountability and allow organizations to track their progress towards goals. PHMSA has included these three elements of a performance-based framework in some aspects of its pipeline safety program, but not for incident response times. For example, PHMSA has set national goals intended to reduce the number of pipeline incidents involving fatality or major injury and the number of hazardous liquid pipeline spills with environmental consequences.
Each of these national goals has associated performance measures (i.e., the number of such incidents) and specific targets (such as reducing the number of incidents involving a fatality or major injury from 39 to less than 28 per year by 2016) that allow PHMSA to track its progress toward the goals. However, while PHMSA has established a national goal for incident response times, it has not linked performance measures or targets to this goal. Specifically, PHMSA directs operators to respond to certain incidents–emergencies that require an immediate response–in a “prompt and effective” manner, but neither PHMSA’s regulations nor its guidance describe ways to measure progress toward meeting this goal. Without a performance measure and target for a prompt and effective incident response, PHMSA cannot quantitatively determine whether an operator meets this goal. PHMSA officials told us that because each incident presents unique circumstances, its inspectors must determine whether an operator’s incident response was prompt and effective on a case-by-case basis. According to PHMSA, in making this determination, inspectors must use their professional judgment to balance any challenges the operator faced in responding with the operator’s obligation to the public’s safety. Other organizations in the pipeline industry, including some state regulatory agencies, have developed methods for measuring the performance of operators responding to incidents by using specific incident response times. According to the National Association of Pipeline Safety Representatives, several state pipeline safety offices have initiatives that require natural gas pipeline operators to respond within a specified time frame to reports of pipeline leaks. For example, the New Hampshire Public Utilities Commission has established incident response time standards—ranging from 30 to 60 minutes, with performance targets—for natural gas distribution companies to meet when responding to reports of a leak. In addition, members of the Interstate Natural Gas Association of America have committed to achieving a 1-hour incident response time for large diameter (greater than 12 inches) natural gas pipelines in highly populated areas. To meet this goal, operators are planning changes to their systems, such as relocating response personnel and automating over 1,800 valves throughout the United States. According to PHMSA officials, pipeline incidents often have unique characteristics, so developing a performance measure and associated target for incident response time similar to those used by other pipeline organizations would be difficult. In particular, it would be challenging to establish a performance measure using incident response time in a way that would always lead to the desired outcome of a prompt and effective response. Officials stated that the intention behind requiring operators to respond promptly and effectively is to make the area safe as quickly as possible. In some instances, an operator can accomplish this outcome in the time it takes to close valves and isolate pipeline segments, while in other instances, an operator might need to completely vent or drain the product from the pipeline. Likewise, it would be difficult to identify a specific target for incident response time, as pipeline operators likely should respond to some incidents more quickly than others. 
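One way to picture such differentiated measures and targets is a simple check of an actual response time against a target that varies with a segment's surroundings. The sketch below is illustrative only: the segment attributes and the looser 180-minute default are assumptions, and the 60-minute figure simply echoes the industry commitment for large-diameter lines in highly populated areas noted above; neither value is a PHMSA requirement.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    segment_id: str
    highly_populated: bool   # e.g., near hospitals, prisons, or dense housing
    diameter_inches: float

def response_time_target_minutes(seg: Segment) -> int:
    """Illustrative targets: tighter where an incident would affect more people.

    The 60-minute figure mirrors the industry commitment for large-diameter
    lines in highly populated areas; the 180-minute default is an assumption.
    """
    if seg.highly_populated and seg.diameter_inches > 12:
        return 60
    return 180

def meets_target(seg: Segment, actual_response_minutes: float) -> bool:
    return actual_response_minutes <= response_time_target_minutes(seg)

# The same 90-minute response passes for a remote segment but not for one
# in a highly populated area.
urban = Segment("A-101", highly_populated=True, diameter_inches=20)
rural = Segment("B-202", highly_populated=False, diameter_inches=20)
print(meets_target(urban, 90))  # False
print(meets_target(rural, 90))  # True
```

How much tighter the target should be for a given segment ultimately depends on what is at stake nearby.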
For example, industry officials noted that while most fatalities and injuries caused by a pipeline explosion occur in the initial blast, a faster incident response time could help reduce fatalities and injuries in cases where there are sites nearby whose occupants have limited mobility (e.g., prisons, hospitals). In these situations, operators told us they want to ensure their incident response time is faster than for more remote locations where an explosion would have less of an impact on people, property, and the environment. Although defining performance measures and targets for incident response can be challenging, one way for PHMSA to move toward a more quantifiable, performance-based approach would be to develop strategies to improve incident response based on nationwide data. For example, performing an analysis of nationwide incident data—similar to PHMSA’s current analyses of fatality and injury data—could help PHMSA determine response times for different types of pipelines (based on characteristics such as location, operating pressure, and diameter); identify trends; and develop strategies to improve incident response. Furthermore, as part of this analysis of response times for various types of pipelines, PHMSA could explore the feasibility of integrating incident response performance measures and targets for individual pipelines into its integrity management program. For example, PHMSA might identify performance measures that are appropriate for various types of pipelines and allow operators to determine which measures and targets best apply to their individual pipeline segments, based on the characteristics of those segments. Such an approach would be consistent with our prior work on performance measurement, as it would allow operators the flexibility to meet response time targets in several ways, including changing their leak detection methods, moving personnel closer to valve locations, or installing automated valves. PHMSA would then review an operator’s selection of measures and targets as part of ongoing integrity management inspections; this process is similar to how inspectors review other provisions in the integrity management program. PHMSA would need reliable national data to implement a performance-based framework for incident response times to ensure operators are responding in a prompt and effective manner. However, the data currently collected by PHMSA do not enable the agency to accurately determine incident response times for all recent incidents for two reasons: 1) operators are not required to fill out certain time-related fields in the PHMSA incident-reporting form and 2) when operators do provide these data, they are interpreting the intended content of the data fields in different ways. Specifically, PHMSA requires operators to report the date and time when the incident occurred. Operators are not required to report the dates and times when: the operator identified the incident; the operator’s resources (personnel or equipment) arrived on site; and the operator shut down and restarted a pipeline or facility. As a result, our analysis determined that hazardous liquid pipeline operators did not report the date and time for two of these variables—when the incident was identified and when operator resources arrived on site—for 26 percent (178 out of 674) of incidents that occurred in 2010 and 2011. Also, these operators did not identify whether a shutdown took place in 16 percent (108 out of 674) of incidents over the same time period. 
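To make the completeness analysis concrete, the short sketch below shows one way such missing-data rates could be tallied from incident records. It is a minimal illustration only: the field names and sample records are hypothetical and do not reflect PHMSA’s actual incident-reporting form or data.

```python
# Minimal sketch: estimating how often time-related fields are missing in
# incident records. Field names and sample records are hypothetical; they
# do not reflect PHMSA's actual incident-reporting form.

TIME_FIELDS = ["identified_datetime", "resources_on_site_datetime", "shutdown_occurred"]

def missing_rate(incidents, field):
    """Return the share of incident records with no value for `field`."""
    missing = sum(1 for record in incidents if not record.get(field))
    return missing / len(incidents) if incidents else 0.0

if __name__ == "__main__":
    incidents = [
        {"identified_datetime": "2010-06-01T14:05", "resources_on_site_datetime": None, "shutdown_occurred": "yes"},
        {"identified_datetime": None, "resources_on_site_datetime": None, "shutdown_occurred": None},
        {"identified_datetime": "2011-02-11T03:40", "resources_on_site_datetime": "2011-02-11T05:10", "shutdown_occurred": "no"},
    ]
    for field in TIME_FIELDS:
        print(f"{field}: {missing_rate(incidents, field):.0%} of records missing")
```

Run over the full set of reported incidents, a tally of this kind would yield the completeness figures cited here.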
In comparison, natural gas pipeline operators reported more complete data; these operators did not report data for when the operator identified the incident and resources arrived on site in only 3 percent (6 out of 191) of incidents that occurred in 2010 and 2011. Also, these operators did not identify whether a shutdown took place in only about 2 percent (3 out of 191) of incidents over the same period. PHMSA officials told us that because they have not used the time-related data to identify safety trends, the omissions have not been a problem for them, although in the future they may decide to make some of these data fields mandatory. In addition to omitting certain incident data fields, several officials from pipeline operators told us that they interpret what to include in the time-related incident data fields differently. For example, according to one official from a natural gas operator, some operators interpret the time when an operator identified the incident as the time when operator personnel first received a call about a potential leak, while others may interpret the time when an operator identified an incident as the time when operator personnel received an on-site confirmation of a leak. These differing interpretations occur even though guidance on PHMSA’s website instructs operators how to complete the reporting forms, including the time-related data fields. The primary advantage of installing automated valves is reducing the time to shut down and isolate a pipeline segment after a leak or rupture occurs, while disadvantages include the potential for accidental closures and monetary cost. Because these advantages and disadvantages vary among valve locations, operators should make decisions about whether to install automated valves—as opposed to other safety measures—on a case-by-case basis. PHMSA has several opportunities to assist operators in making these evaluations, including communicating guidance and sharing information on some methods operators use to make these decisions. Research and industry stakeholders indicate that the primary advantage of installing automated valves is related to the time it takes to respond to an incident. Although automated valves cannot mitigate the fatalities, injuries, and damage that occur in the initial blast, quickly isolating the pipeline segment through automated valves can significantly reduce subsequent damage by reducing the amount of hazardous liquid and natural gas released. For example, NTSB found that automated valves would have reduced the amount of time taken to stop the flow of natural gas in the San Bruno incident and, therefore, reduced the severity of property damage and life-threatening risks to residents and emergency responders. According to research and industry stakeholders, automated valves will only decrease the number of fatalities and injuries in those cases when people cannot easily evacuate the area, such as cases involving hospital patients or prison inmates. Research and industry stakeholders identified several disadvantages operators should consider when determining whether to install automated valves, related to potential accidental closures and the monetary costs of purchasing and installing the equipment. Specifically, automated valves can lead to accidental closures, which can have severe, unintended consequences, including loss of service to residences and businesses. 
For example, according to a pipeline operator, an accidental closure on a natural gas pipeline in New Jersey resulted in significant disruption and downstream curtailments to customers in New York City during high winter demand. In addition, the monetary costs of installing automated valves can range from tens of thousands to a million dollars per valve, which may be significant expenditures for some pipeline operators. (See table 2.) Research and industry stakeholders also indicate the importance of determining whether to install valves on a case-by-case basis because the advantages and disadvantages can vary considerably based on factors specific to a unique valve location. These sources indicated that the location of the valve, existing shutdown capabilities, proximity of personnel to the valve location, the likelihood of an ignition, type of product being transported, operating pressure, topography, and pipeline diameter, among other factors, all play a role in determining the extent to which an automated valve would be advantageous. Operators we met with are using a variety of methods for determining whether to install automated valves. One of the eight operators we met with had decided to install automatic-shutoff valves across its pipeline system, regardless of risk, to eliminate the need for control room staff to make judgment calls on whether or not to close valves to isolate pipeline segments. However, seven of the eight operators we met with developed their own risk-based approach for considering potential advantages and disadvantages when making these decisions on a case-by-case basis. For example, two natural gas pipeline operators told us that they applied a decision tree analysis to all pipeline segments in highly populated and frequented areas. They used the decision tree to guide a variety of yes-or-no questions on whether installing an automated valve would improve response time to less than an hour and provide advantages for locations where people might have difficulty evacuating quickly in the event of a pipeline incident (a simplified sketch of this type of screening appears at the end of this discussion). Other operators said they used computer-based spill modeling to determine whether the amount of product release would be significantly reduced by installing an automated valve. These seven operators told us that their approaches for making decisions about whether to install automated valves considered the advantages and the disadvantages we identified above. Improved response time. Most operators we spoke with considered whether automated valves would lead to a faster response time. For example, the primary criterion used by two of the natural gas pipeline operators was the amount of time it would take to shut down the pipeline and isolate the segment and population along the segment. In one instance, an operator decided to install a remote-control valve in a location that would take pipeline personnel 2.5 hours to reach and 30 minutes more to close the valve. Installing the automated valve is expected to reduce the total response time to under an hour, including detecting the incident and making the decision to isolate the pipeline segment. In addition, several hazardous liquid pipeline operators used spill modeling to determine whether an automated valve would result in a reduced amount of damage from product release at individual locations. This spill modeling typically considered topography, operating pressure, and placement of existing valves. 
For example, one hazardous liquid pipeline operator used spill modeling to make the decision to install a remote-control valve on a pipeline segment with a large elevation change after evaluating the spill volume reduction. Accidental closures. Operators indicated that installing automated valves, especially automatic-shutoff valves, could have unintended consequences, which they considered as part of their decisions to install automated valves. For example, two natural gas pipeline operators considered whether there is the potential for accidentally cutting off service when assessing individual locations for the possible installation of an automatic-shutoff valve. As noted, one natural gas pipeline operator has made the decision to install automatic-shutoff valves across its pipeline system. The operator stated that in the past, there were concerns with relying on automatic-shutoff valves because of the possibility for accidental closures, but the operator believes it has developed a process that effectively adapts to pressure and flow change and minimizes or eliminates the risk of the valve accidentally closing. Other natural gas pipeline operators stated that relying on pressure sensing systems can be dangerous because “tuning” the pressure activation in an effort to avoid accidental closures can result in situations where the valve will not automatically close during an actual emergency. For hazardous liquid, all operators we spoke with stated that they either do not consider or do not typically install automatic-shutoff valves because an accidental closure has the potential to lead to an incident. Specifically, operators stated that an unexpected valve closure can result in decompression waves in the pipeline system, which might cause the pipeline to rupture if operators cannot reduce the flow of product promptly. Monetary costs. According to operators and other industry stakeholders, considering monetary costs is important when making decisions to install automated valves because resources spent for this purpose can take away from other pipeline safety efforts. Specifically, operators and industry stakeholders told us they often would rather focus their resources on incident prevention to minimize the risk of an incident instead of focusing resources on incident response. PHMSA stated that it generally supports the idea that pipeline operators should be given flexibility to target compliance dollars where they will have the most safety benefit when it is possible to do so. Operators we spoke with stated that they considered costs associated with purchasing and installing equipment. For example, four operators indicated that they will consider the costs related to communications equipment when determining whether to install automated valves. In addition, three operators stated that decisions to install automated valves are affected by whether the operator has or can gain access to the pipeline right of way. Other cost considerations mentioned by at least one operator included local construction costs and possible changes to leak detection systems. Finally, two natural gas pipeline operators stated that monetary cost plays a role in determining what steps they plan to take to meet a one-hour response time goal for pipelines in highly populated areas. For example, the operator might choose to move personnel closer to valves rather than installing automated valves, if that is the more cost-effective option. 
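As a rough illustration of the yes-or-no, decision-tree screening some natural gas operators described, the sketch below flags valve sites that might warrant further evaluation for automation. The thresholds, attribute names, and logic are illustrative assumptions for this example; they are not any operator’s actual criteria or a prescribed method.

```python
# Simplified sketch of the yes-or-no screening logic some operators described
# for identifying valve locations that are candidates for automation.
# Thresholds and attribute names are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ValveSite:
    in_high_consequence_area: bool
    pipe_diameter_inches: float
    manual_response_hours: float        # time for crews to reach and close the valve
    limited_mobility_site_nearby: bool  # e.g., hospital or prison nearby

def candidate_for_automation(site: ValveSite, target_hours: float = 1.0) -> bool:
    """Flag sites where automation could bring response time under the target."""
    if not site.in_high_consequence_area:
        return False
    if site.pipe_diameter_inches <= 12:
        return False
    # Worth further evaluation only if the manual response time exceeds the
    # target or occupants nearby cannot evacuate quickly.
    return site.manual_response_hours > target_hours or site.limited_mobility_site_nearby

print(candidate_for_automation(ValveSite(True, 20, 3.0, False)))  # True: slow manual response
print(candidate_for_automation(ValveSite(True, 8, 3.0, True)))    # False: small-diameter pipe
```

In practice, the operators we spoke with layered considerations such as spill modeling, accidental-closure risk, and monetary cost on top of screens like this one.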
PHMSA has developed guidance to help operators understand current regulations on what operators must consider when deciding to install automated valves, but not all operators are aware of the guidance. PHMSA includes on its primary website two types of guidance that can be useful for operators in determining whether to install automated valves on transmission pipelines. First, PHMSA has developed inspection protocols for both the hazardous liquid and natural gas integrity management programs. Second, PHMSA has developed guidance on the enforcement actions inspectors will take—such as notices of proposed violation and warning letters, among others—should PHMSA discover a violation. Both of these pieces of guidance provide additional detail—not included in regulation—on the steps operators might take in considering whether to install automated valves. For example, PHMSA’s inspection protocol for natural gas operators describes several studies on the generic costs and benefits of automated valves and indicates that operators may use this research as long as they document the reasons why the study is applicable to the specific pipeline segment. However, operators we spoke with were unaware of existing guidance to varying degrees. Specifically, of the eight operators we met with, three were unaware of both the inspection and enforcement guidance, and the remaining five operators were unaware of the enforcement guidance. Operators we spoke with, including those that were unaware of the guidance, told us that having this information would be helpful in making decisions to install automated valves. According to PHMSA, the agency provides this guidance to operators to ensure operators follow it as they make decisions on whether to install automated valves, but does not redistribute the guidance at regular intervals (e.g., annually). According to PHMSA, inspectors see examples of how operators make decisions to install automated valves during integrity management inspections, but the inspectors do not formally collect this information or share it with other operators. Current regulations give operators a large degree of flexibility in deciding whether to install automated valves. As mentioned earlier, we spoke with operators that are using a variety of risk-based methods for making decisions about automated valves. For example, some used basic yes-or-no criteria, while others applied commercially available computer software to model potential incident outcomes. According to PHMSA, officials do not formally share what they view as good methods for determining whether to install automated valves. Officials stated they do not believe it is appropriate for PHMSA to publicly share decision-making approaches from a single operator, as doing so might be seen as an endorsement of that approach. However, according to PHMSA, its inspectors may informally discuss methods used by operators for making decisions to install automated valves and suggest these approaches to other operators during inspections. While the operators we spoke with represent roughly 18 percent of the overall hazardous liquid and natural gas transmission pipelines in high-consequence areas in 2010, there are over 650 additional pipeline operators we did not speak with that may be using other methods for determining whether to install automated valves. 
As such, we believe that both operators and inspectors could benefit from exposure to some of the methods used by other operators to make decisions on whether to install automated valves. We have previously reported on the value of organizations reporting and sharing information and recommended that PHMSA develop methods to share information on practices that can help ensure pipeline safety. PHMSA already conducts a variety of information-sharing activities that could be used to ensure operators are aware both of existing guidance and of approaches used by other operators for making decisions to install valves. While, according to PHMSA officials, the agency will not endorse a particular operator’s approach or practice, it can and does facilitate the exchange of information among operators and other stakeholders. For example, PHMSA issues advisory alerts in the Federal Register on emerging safety issues, including identified mechanical defects on pipelines, incidents that occurred under special circumstances, and reminders to correctly implement safety programs (e.g., drug and alcohol screening). In addition, PHMSA administers a website, separate from its primary website, that, according to officials, is intended to ensure communication with pipeline safety stakeholders, including the public, emergency officials, pipeline safety advocates, regulators, and pipeline operators. PHMSA also periodically conducts public workshops with pipeline stakeholders on a wide variety of topics, including one in March 2012 on automated valves. While PHMSA currently requires operators to respond to incidents in a “prompt and effective manner,” the agency does not define these terms or collect reliable data on incident response times to evaluate an operator’s ability to respond to incidents. A more specific response time goal may not be appropriate for all pipelines. However, some organizations in the pipeline industry believe that such a performance-based goal can allow operators to identify actions that could improve their ability to respond to incidents in a more timely manner, and are taking steps to implement a performance-based approach. A performance-based goal that is more specific than “prompt and effective” could allow operators to examine the numerous variables under their control within the context of an established time frame, understand their current ability to respond, and identify the most effective changes to improve response times on individual pipeline segments, if needed. Reliable data would improve PHMSA’s ability to measure incident response and assist the agency in exploring the feasibility of developing a performance-based approach for improving operator response to pipeline incidents. One of the methods operators could choose to meet a performance-based approach to incident response is installing automated valves, a measure some operators are already taking to reduce risk. Given the different characteristics among valve locations, it is important for operators to carefully weigh the potential for improved incident response times against any disadvantages, such as the potential for accidental closure and monetary costs, in deciding whether to install automated valves as opposed to other safety measures. However, not all operators we spoke with were aware of existing PHMSA guidance, and PHMSA does not formally collect or share evaluation approaches used by other operators to make decisions about whether to install automated valves. 
Such information could assist operators in evaluating the advantages and disadvantages of these valves and help them determine whether automated valves are the best option for meeting a performance-based incident response goal. We recommend that the Secretary of Transportation direct the PHMSA Administrator to take the following two actions: To improve operators’ incident response times, improve the reliability of incident response data and use these data to evaluate whether to implement a performance-based framework for incident response times. To assist operators in determining whether to install automated valves, use PHMSA’s existing information-sharing mechanisms to alert all pipeline operators of inspection and enforcement guidance that provides additional information on how to interpret regulations on automated valves, and to share approaches used by operators for making decisions on whether to install automated valves. We provided the Department of Transportation with a draft of this report for review and comment. The department had no comments and agreed to consider our recommendations. We are sending copies of this report to relevant congressional committees, the Secretary of Transportation, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3824 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of our review were to determine (1) the opportunities that exist to improve the ability of transmission pipeline operators to respond to incidents and (2) the advantages and disadvantages of installing automated valves in high-consequence areas and ways that the Pipeline and Hazardous Materials Safety Administration (PHMSA) can assist operators in deciding whether to install valves in these areas. To address our objectives, we reviewed regulations, National Transportation Safety Board (NTSB) incident reports, and PHMSA guidance and data on enforcement actions, pipeline operators, and incidents related to onshore natural gas transmission and hazardous liquid pipelines. We also attended industry conferences and interviewed officials at PHMSA headquarters and regional offices (Eastern, Southwestern, and Western), state pipeline safety agencies, pipeline safety groups, and industry associations. Specifically, we interviewed officials from the American Gas Association, American Petroleum Institute, Arizona Office of Pipeline Safety, Association of Oil Pipelines, Interstate Natural Gas Association of America, National Association of Pipeline Safety Representatives, NTSB, Pipeline Research Council International, Public Utilities Commission of Ohio, and West Virginia Public Service Commission. To address both objectives, we also conducted case studies on eight hazardous liquid and natural gas pipeline operators. We selected these operators based on our review of PHMSA data on the operators’ onshore pipeline mileage, product type, and prior incidents; on recommendations from industry associations and PHMSA; and on the need to ensure geographic diversity. 
We selected six hazardous liquid and natural gas pipeline operators with a large number of pipeline miles in high-consequence areas that also reported recent incidents (i.e., one or more incidents reported from 2007 through 2011) with a range of characteristics, such as affecting a high-consequence area, resulting in an ignition or explosion, or involving an automated valve. We also selected one natural gas pipeline operator and one hazardous liquid pipeline operator with a small number of pipeline miles in high-consequence areas, to obtain the perspective of smaller pipeline operators. Specifically, we interviewed officials from: Belle Fourche Pipelines (Casper, Wyoming)—Hazardous Liquids; Buckeye Partners (Breinigsville, Pennsylvania)—Hazardous Liquids; Enterprise Products (Houston, Texas)—Hazardous Liquids and Natural Gas; Granite State Gas Transmission (Portsmouth, New Hampshire)—Natural Gas; Kinder Morgan-Natural Gas Pipeline Company (Houston, Texas)—Natural Gas; Phillips 66 (Houston, Texas)—Hazardous Liquids; Northwest Pipeline GP (Salt Lake City, Utah)—Natural Gas; and Williams-Transco (Houston, Texas)—Natural Gas. To determine what opportunities exist to improve the ability of transmission pipeline operators to respond to incidents, we identified several factors that influence pipeline operators’ incident response capabilities. To do so, we discussed prior incidents, incident response times, and federal oversight of the pipeline industry with officials from PHMSA, state pipeline safety offices, industry associations, and safety groups. We also spoke with operators about their prior incidents and the factors that influenced their ability to respond. We also examined 2007 to 2011 PHMSA incident data, including data on total number of incidents, type of incident (leak or rupture), type of pipeline where the incident occurred, and the date and time when: an incident occurred; an operator identified the incident; operator resources (personnel and equipment) arrived on site; and an operator shut down a pipeline or facility. We assessed the reliability of these data through discussions with PHMSA officials and selected operators. We determined that data elements related to numbers of incidents, types of releases, and types of pipeline where incidents occurred were reliable for the purpose of providing context, but that data elements related to response time were not sufficiently reliable for the purpose of conducting a detailed analysis of relationships between response time and other factors. We also reviewed federal requirements, prior GAO reports, and industry and government performance standards related to emergency response within the pipeline industry. To determine the advantages and disadvantages of installing automated valves in high-consequence areas and the ways that PHMSA can assist operators in deciding whether to install these valves, we identified the key factors that should be used in deciding whether to install automated valves in high-consequence areas. We used two categories of sources to identify the key factors: (1) Literature review. We conducted a literature review of previous research on pipeline incidents. Specifically, we used online research software to search through databases of scholarly and peer-reviewed materials—including articles, journals, reports, studies, and conferences dating back to 1995—which identified over 200 sources. (2) Interviews with industry stakeholders. 
During our interviews with officials from industry associations and pipeline safety groups, we discussed the advantages and disadvantages of installing these valves. To ensure that the literature review included only those documents that were relevant to our purpose, two analysts independently reviewed abstracts from the 200 sources identified to determine whether they were within the scope of our review. Each source had to meet specific criteria, including mentioning automated valves, pipeline incidents, and operator emergency response. We excluded sources that were overly technical for the purposes of our review. To ensure these analysts were making similar judgments, they separately examined a random sampling of each other’s sources. The analysts then added sources suggested by industry stakeholders during our interviews and reviewed them using the same criteria. After excluding documents that were not publicly available, one analyst reviewed these sources to identify advantages and disadvantages operators should consider when making decisions to install automated valves. A second analyst reviewed the analysis and performed a spot check on identified advantages and disadvantages. Specifically, the second analyst picked four of the sources at random to review and compared the advantages and disadvantages he identified to those of the first analyst. As part of our case studies, we discussed these advantages and disadvantages with operators. We also collected information from operators on their methods for deciding whether to install automated valves, as well as specific pipeline segments and valve locations where operators made such decisions (see app. II). We contacted vendors (manufacturers and installers) of automated valves to identify the range of costs for purchasing and installing these valves. We also discussed the regulations with officials from PHMSA headquarters and regional offices, state pipeline safety offices, and pipeline operators to determine what, if any, additional guidance would help operators apply the current regulations on installing automated valves. We conducted this performance audit from March 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted site visits to eight hazardous liquid and natural gas pipeline operators with different amounts of pipeline mileage in or affecting high-consequence areas. Seven of the eight operators we visited told us they use approaches that consider both the advantages and disadvantages of installing automated valves on a case-by-case basis as opposed to other safety measures; the eighth operator stated that it follows a corporate strategy of installing automated valves in all high-consequence areas. A brief description of the approach used by each of the eight operators, based on our discussions with them, follows. 
Pipeline operator: Belle Fourche Product type: Hazardous liquid Number of pipeline miles: 460 (total); 135 (could affect high-consequence areas) Decision-making approach: The operator assesses each pipeline segment using spill-modeling software to determine the amount of product release and extent of damage that would occur in the event of an incident. The software considers flow rates, pressure, terrain, product type, and whether the segment is located over land or a waterway. Monetary costs are considered as part of the decision-making process, including the cost of installing communications equipment and gaining access to the valve location when the operator does not own the right of way. The operator stated that installing a remote-control valve costs between $100,000 and $500,000. Automatic-shutoff valves are not considered as the operator believes an accidental closure could lead to pipeline ruptures. Results to date: According to Belle Fourche officials, this approach has not resulted in any decisions to install automated valves because the advantages have not outweighed the disadvantages on any of the pipeline segments assessed. Pipeline operator: Buckeye Partners Product type: Hazardous liquid Number of pipeline miles: 6,400 (total); 4,179 (could affect high-consequence areas) Decision-making approach: The operator assesses each pipeline segment using spill-modeling software to determine the amount of product release and extent of damage that would occur in the event of an incident. The operator considers installation of an automated valve when this modeling shows such a valve would 1) reduce the size of the incident by 50 percent or more and 2) significantly reduce the consequences of an incident. The operator conducts additional analysis to determine the location where the automated valve would lead to the largest reduction in spill volume and overall consequences of an incident. Monetary costs are considered as part of the decision-making process, including costs for gaining access to the pipeline when the operator does not already own the right of way. The operator stated that installing a remote-control valve costs between $35,000 and $325,000. Automatic-shutoff valves are considered, but not typically installed, as the operator believes an accidental closure could lead to a pipeline rupture. Results to date: According to Buckeye Partners officials, this approach has resulted in additional analysis of the possible installation of 25 remote-control valves along 75 pipeline segments assessed. Pipeline operator: Phillips 66 Product type: Hazardous liquid Number of pipeline miles: 11,290 (total); 3,851 (could affect high-consequence areas) Decision-making approach: The operator assesses every 100 feet of pipeline (which covers all pipeline segments) using spill-modeling software to determine the amount of product release and extent of damage that would occur in the event of a complete rupture. The operator also uses a relative consequence index for individual pipeline segments that considers the impact to high-consequence areas. Automated valve projects are further evaluated if 1) the potential drain volume is greater than 1,000 barrels, 2) the pipeline segment exceeds a certain threshold on the consequence index, or 3) the existing automated valves are greater than 7.5 miles apart. 
Monetary costs are considered as part of the decision-making process, including the cost of installing communications equipment, access to power, gaining access to the valve’s location when the operator does not own the right of way, and local construction costs. The operator stated that installing an automated valve costs between $250,000 and $500,000. Automatic-shutoff valves are not considered as the operator believes an accidental closure could lead to pipeline ruptures. Results to date: According to Phillips 66 officials, this approach has resulted in decisions to install 71 automated valves in the 508 high-consequence area locations assessed. Pipeline operator: Enterprise Products Product type: Hazardous liquid and natural gas Number of pipeline miles: 23,012 (total); 8,783 (could affect or are in high-consequence areas) Decision-making approach: The operator assesses each pipeline segment using spill-modeling software to determine the amount of product release and extent of damage that would occur in the event of an incident. The software considers factors such as topography and the placement of existing valves. The operator also uses a risk algorithm to identify threats to individual pipeline segments. The operator told us that it does not have specific criteria for guiding decisions to install automated valves; rather, officials make judgment calls based on the results of spill modeling and the application of the risk algorithm. Monetary costs are considered as part of the decision-making process, including the cost of installing communications equipment and the amount of necessary infrastructure work. The operator stated that installing a remote-control valve costs between $250,000 and $500,000. Pipelines carrying gas or highly volatile liquids—which are in gas form when released into the atmosphere—are excluded from consideration, according to the operator, because industry studies have shown that automated valves do not significantly improve incident outcomes for these product types. Results to date: According to Enterprise Products officials, this approach has not resulted in any decisions to install automated valves because the advantages have not outweighed the disadvantages on any of the pipeline segments assessed. Pipeline operator: Granite State Gas Transmission Product type: Natural gas Number of pipeline miles: 86 (total); 11 (high-consequence areas) Decision-making approach: The operator assesses individual pipeline segments in high-consequence areas using risk analysis software that considers the operator’s response time to an incident, population in the area, and pipeline diameter, among other variables. Monetary costs are considered as part of the decision-making process, including the cost of installing communications equipment and costs to change or improve the existing leak detection system. The operator stated that installing an automated valve costs between $40,000 and $50,000. Automatic-shutoff valves are not considered, as officials believe that they could lead to unintended consequences, such as accidental closures. Results to date: According to Granite State Gas Transmission officials, this approach has resulted in decisions to install remote-control valves in 30 of the 30 locations assessed. 
Pipeline operator: Kinder Morgan-Natural Gas Pipeline Company of America (NGPL) Product type: Natural gas Number of pipeline miles: 9,800 (total); 569 (high-consequence areas) Decision-making approach: The operator follows a long-term corporate risk management strategy for NGPL, developed in the 1960s, that calls for installing automatic-shutoff valves across its pipeline system regardless of advantages and disadvantages for individual pipeline segments. The operator told us that automatic-shutoff valves, as opposed to remote-control valves, were chosen because they reduce the potential for human error when making decisions to close valves. Officials stated that the biggest concern of using automatic-shutoff valves is the potential for accidental closures, but they believe they have developed a procedure for managing the pressure sensing system that effectively adapts to pressure and flow change and minimizes or eliminates these types of closures. Monetary costs are not considered as part of the decision-making process. The operator stated that installing an automatic-shutoff valve on an existing manual valve costs between $48,000 and $100,000. Results to date: According to Kinder Morgan officials, this approach has resulted in the installation of automated valves at 683 out of 832 locations across the pipeline system. Officials plan to automate the remaining valves over the next several years. Pipeline operator: Northwest Pipeline GP Product type: Natural gas Number of pipeline miles: 3,900 (total); 170 (high-consequence areas) Decision-making approach: The operator uses a decision tree to assess individual pipeline segments based on several criteria, including the location of the valve (e.g., high-consequence area), diameter of the pipe, and the amount of time it takes for an operator to respond upon notification of an incident. The operator will install an automated valve in any high-consequence, class 3, or class 4 areas on large diameter pipe (i.e., above 12 inches) where personnel cannot reach and close the valve in under an hour. Monetary costs are considered as part of the decision-making process for the purposes of determining the most cost-effective way to ensure the operator can respond within one hour to incidents in high-consequence areas. The operator stated that installing an automated valve costs between $37,000 and $240,000. Automatic-shutoff valves are not installed in areas where an accidental closure could lead to customers losing service (i.e., in places where there is a single line feed servicing the entire area) or where pressure fluctuations may inadvertently activate the valve. Results to date: According to Northwest Pipeline GP officials, this approach has resulted in decisions to install automated valves at 59 of the 730 locations assessed. Pipeline operator: Williams Gas Pipeline-Transco Product type: Natural gas Number of pipeline miles: 11,000 (total); 1,192 (high-consequence areas) Decision-making approach: The operator uses a decision tree to assess individual pipeline segments based on several criteria, including the location of the valve (e.g., high-consequence area), diameter of the pipe, and the amount of time it takes for an operator to respond upon notification of an incident. The operator will install an automated valve in any high-consequence, class 3, or class 4 areas on large diameter pipe (i.e., above 12 inches) where personnel cannot reach and close the valve in under an hour. 
Monetary costs are considered as part of the decision-making process for the purposes of determining the most cost-effective way to ensure the operator can respond within one hour to incidents in high-consequence areas. The operator stated that installing an automated valve costs between $75,000 and $500,000. Automatic-shutoff valves are not installed in areas where an accidental closure could lead to customers losing service (i.e., in places where there is a single line feed servicing the entire area) or where pressure fluctuations may inadvertently activate the valve. Results to date: According to Williams Gas Pipeline-Transco officials, this approach has resulted in decisions to install automated valves at 56 of the 2,461 locations assessed. The eight operators we spoke with provided a range of cost estimates for installing automated valves—from as low as $35,000 to as high as $500,000 depending on the location and size of the pipeline, and the type of equipment being installed, among other things. While both hazardous liquid and natural gas transmission pipeline operators estimated a similar cost range from about $35,000 to $500,000, hazardous liquid pipeline operators tended to estimate higher costs. Specifically, two of the three operators that exclusively transport hazardous liquids estimated that the minimum cost of installing an automated valve was $100,000 or higher and the maximum was $500,000. In contrast, pipeline operators that exclusively transport natural gas all estimated that the minimum cost was $75,000 or lower and three of the four operators estimated that maximum costs would be $240,000 or lower. We also spoke with five equipment vendors and six contractors that install valves to gather additional perspective on the cost of purchasing and installing automated valve equipment. According to estimates provided by these businesses, the combined equipment and labor costs range between $40,000 and $380,000. Specifically, equipment costs range from $10,000 to $75,000 while labor costs range from $30,000 to $315,000. (See table 3.) Vendors stated that the cost of installing an automated valve depends primarily on the functionality of the equipment (for example, additional controls would increase the cost), while contractors stated that these costs depend on the diameter and location of the pipeline. Vendors and contractors had varying opinions on whether the costs were greater to install an automated valve on a hazardous liquid or a natural gas pipeline. Susan Fleming, (202) 512-3824 or flemings@gao.gov. In addition to the contact above, Sara Vermillion (Assistant Director), Sarah Arnett, Melissa Bodeau, Russ Burnett, Matthew Cook, Colin Fallon, Robert Heilman, David Hooper, Mary Koenen, Grant Mallie, Josh Ormond, Daniel Paepke, Anne Stevens, and Adam Yu made key contributions to this report.
The nation's 2.5 million mile network of hazardous liquid and natural gas pipelines includes more than 400,000 miles of "transmission" pipelines, which transport products from processing facilities to communities and large-volume users. To minimize the risk of leaks and ruptures, PHMSA requires pipeline operators to develop incident response plans. Pipeline operators with pipelines in highly populated and environmentally sensitive areas ("high-consequence areas") are also required to consider installing automated valves. The Pipeline Safety, Regulatory Certainty, and Job Creation Act of 2011 directed GAO to examine the ability of transmission pipeline operators to respond to a product release. Accordingly, GAO examined (1) opportunities to improve the ability of transmission pipeline operators to respond to incidents and (2) the advantages and disadvantages of installing automated valves in high-consequence areas and ways that PHMSA can assist operators in deciding whether to install valves in these areas. GAO examined incident data; conducted a literature review; and interviewed selected operators, industry stakeholders, state pipeline safety offices, and PHMSA officials. The Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA) has an opportunity to improve the ability of pipeline operators to respond to incidents by developing a performance-based approach for incident response times. The ability of transmission pipeline operators to respond to incidents--such as leaks and ruptures--is affected by numerous variables, some of which are under operators' control. For example, the use of different valve types (manual valves or "automated" valves that can be closed automatically or remotely) and the location of response personnel can affect the amount of time it takes for operators to respond to incidents. Variables outside of operators' control, such as weather conditions, can also influence incident response time, which can range from minutes to days. GAO has previously reported that a performance-based approach--including goals and associated performance measures and targets--can allow those being regulated to determine the most appropriate way to achieve desired outcomes. In addition, several organizations in the pipeline industry have developed methods for quantitatively evaluating response times to incidents, including setting specific, measurable performance goals. While defining performance measures and targets for incident response can be challenging, PHMSA could move toward a performance-based approach by evaluating nationwide data to determine response times for different types of pipeline (based on location, operating pressure, and pipeline diameter, among other factors). However, PHMSA must first improve the data it collects on incident response times. These data are not reliable both because operators are not required to fill out certain time-related fields in the reporting form and because operators told us they interpret these data fields in different ways. Reliable data would improve PHMSA's ability to measure incident response and assist the agency in exploring the feasibility of developing a performance-based approach for improving operator response to pipeline incidents. 
The primary advantage of installing automated valves is that operators can respond quickly to isolate the affected pipeline segment and reduce the amount of product released; however, automated valves can have disadvantages, including the potential for accidental closures--which can lead to loss of service to customers or even cause a rupture--and monetary costs. Because the advantages and disadvantages of installing an automated valve are closely related to the specifics of the valve's location, it is appropriate to decide whether to install automated valves on a case-by-case basis. Several operators we spoke with have developed approaches to evaluate the advantages and disadvantages of installing automated valves. For example, some operators of hazardous liquid pipelines use spill-modeling software to estimate the amount of product release and extent of damage that would occur in the event of an incident. While PHMSA conducts a variety of information-sharing activities, the agency does not formally collect or share evaluation approaches used by operators to decide whether to install automated valves. Furthermore, not all operators we spoke with were aware of existing PHMSA guidance designed to assist operators in making these decisions. PHMSA could assist operators in making this decision by formally collecting and sharing evaluation approaches and ensuring operators are aware of existing guidance. DOT should (1) improve incident response data and use these data to evaluate whether to implement a performance-based framework for incident response times and (2) share guidance and information on evaluation approaches to inform operators’ decisions. DOT agreed to consider these recommendations.
You are an expert at summarizing long articles. Proceed to summarize the following text: The four social service programs included in our review—child care, child welfare services, child support enforcement, and the Temporary Assistance for Needy Families (TANF) block grant—provide a broad range of services and benefits for children and families. While each program is administered by HHS’ Administration for Children and Families, primary responsibility for operating these programs rests with state governments. Within many states, local governments operate social service programs with considerable autonomy. The major goals, services, and federal funding for the four programs are described below. Federally funded child care services consist primarily of subsidized care for children of low-income families while their parents are working, seeking work, or attending training or education. Other subsidized child care activities include providing information, referrals, and counseling to help families locate and select child care programs and training for child care providers. State child care agencies can provide child care directly, arrange for care with providers through contracts or vouchers, provide cash or vouchers in advance to families, reimburse families, or use other arrangements. Two settings for which states pay for care are family day care, under which care is provided for a small group of children in the caregiver’s home, and center care, under which establishments care for a group of children in a nonresidential setting, such as nonprofit centers sponsored by schools or religious organizations and for-profit centers that may be independent or members of a chain. The primary federal child care subsidy program is the Child Care and Development Block Grant (CCDBG). In fiscal year 1996, about $2 billion was distributed to states to assist low-income families in obtaining child care so they could work or attend training or education. Under CCDBG, states are not required to provide state funds to match federal funding. Child welfare services aim to (1) improve the conditions of children and their families and (2) improve—or provide substitutes for—functions that parents have difficulty performing. 
Whether administered by a state or county government, the child welfare system is generally composed of the following service components: child protective services that entail responding to and investigating reports of child abuse and neglect, identifying services for the family, and determining whether to remove a child from the family’s home; family preservation and family support services that are designed to strengthen and support families who are at risk of abusing or neglecting their children or losing their children to foster care and that include family counseling, respite care for parents and caregivers, and services to improve parenting skills and support child development; foster care services that provide food and housing to meet the physical needs of children who are removed from their homes and placed with a foster family or in a group home or residential care facility until their family can be reunited, the child is adopted, or some other permanent placement is arranged; adoption services that include recruiting potential adoptive parents, placing children in adoptive homes, providing financial assistance to adoptive parents to assist in the support of special needs children, and initiating proceedings to relinquish or terminate parental rights for the care and custody of their children; and independent living services that are activities for older foster children—generally age 16 and older—to help them make the transition from foster care to living independently. Almost all states are also operating or developing an automated foster care and adoption data collection system. Federal funding for child welfare services totaled about $4 billion in fiscal year 1996. Nearly 75 percent of these funds were for foster care services. Depending on the funding source, the federal match of states’ program costs can range from 50 to 78 percent. The child support enforcement program enforces parental child support obligations by locating noncustodial parents, establishing paternity and child support orders, and collecting support payments. These services, established under title IV-D of the Social Security Act, are available to both welfare and nonwelfare families. In addition, states are operating or developing automated management information systems to help locate noncustodial parents and monitor child support cases. The federal government pays two-thirds of the states’ costs to administer the child support enforcement program. The states can also receive incentive funds based on the cost-effectiveness of child support enforcement agencies in making collections. In 1996, federal funding for program administration and incentives totaled almost $3 billion. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 made major changes to the nation’s welfare system. In place of the Aid to Families with Dependent Children (AFDC) and Job Opportunities and Basic Skills Training (JOBS) programs, the 1996 law created a block grant for states, or TANF, that has more stringent requirements than AFDC for welfare parents to obtain jobs in return for their benefits. In 1996, the federal government spent about $11 billion on AFDC benefit payments, and JOBS provided almost $1 billion to help families on welfare obtain education, training, and work experience to become self-sufficient. TANF provides states flexibility in, among other things, providing assistance to needy families and promoting job preparation and work. The TANF block grant is currently funded at $16.4 billion per year. 
States are not required to match federal funds but must maintain specified historic levels of state spending on behalf of families eligible for TANF. The federal, state, and local governments have for decades privatized a broad range of government activities in both nonsocial and social service programs. This trend is continuing. Since 1990, more than half of the state and local governments we contacted have increased their contracting for services, as indicated by the number and type of services privatized and the percentage of social service budgets paid to private contractors. Spurred by political leaders and top program managers, states and localities privatized social services in an attempt to reduce program costs and improve services by using the technology and management flexibility they believe private contractors offer. In addition, studies we examined and federal, state, and local government officials we interviewed expect privatization to increase with the enactment of recent federal welfare legislation and anticipated managed care initiatives in child welfare. State and local officials also anticipated increased contracting for services in the child care and child support enforcement programs. Privatization is commonly defined as any process aimed at shifting functions and responsibilities, in whole or in part, from the government to the private sector. Privatization can take various forms, including divestiture, contracting out, vouchers, and public-private partnerships. Most common is contracting, which typically entails efforts to obtain competition among private bidders to perform government activities. With contracting, the government remains the financier and is responsible for managing and setting policies on the type and quality of services to be provided. Depending on the program, government agencies can contract with other government entities—often through cooperative agreements—and with for-profit and nonprofit agencies. Using a variety of strategies, the federal, state, and local governments have for decades relied on private entities to provide a wide range of services and program activities. Programs as diverse as corrections, transportation, health services, and information resource management have been privatized to varying degrees. As all levels of government attempt to meet existing or growing workloads with fewer resources, privatization has more frequently been considered a viable means of service delivery. Child care, child welfare, child support enforcement, and welfare-to-work programs have long used contractors to provide certain services. For example, most states and local governments have relied on an existing network of private day care centers to provide certain child care services. Foster care services in child welfare have also traditionally been provided by private providers. Finally, state and local governments have also generally relied on contractors to provide certain automated data processing and related support activities. In addition to state and local governments’ past use of contractors in social services, a national study has reported recent growth in state privatization of these programs. In its 1993 national study, the Council of State Governments reported that almost 80 percent of the state social service departments surveyed in the study indicated they had expanded their use of privatization of social services in the preceding 5 years. 
The council’s study reported that child care services and several child welfare services, such as adoption, foster care, and independent living support services, were among the services in which privatization increased the most. During our review, we found that privatization of social services has generally continued to expand, despite certain challenges confronting state and local governments seeking to privatize services, as discussed below. Representatives of several national associations told us that state and local social service privatization has increased throughout the country in the last several years, as indicated by the percentage of state and local social service budgets paid to contractors. Among the state and local governments we contacted, most officials said the percentage of program budgets paid to contractors has increased since 1990. While the percentage of funds paid to private contractors has generally increased in the states and programs we selected, we found that the proportion of state and local social service budgets paid to private contractors varies widely among the programs we reviewed. According to local program officials, for example, the Los Angeles County child support enforcement program spent less than 5 percent of its $100 million program budget on contracted services in 1996. In comparison, program officials said the child care component of San Francisco’s Greater Avenues for Independence (GAIN) program spent all its program funds, or $2.1 million, on privatized services in 1996. State and local government officials we interviewed generally said that, in addition to the increased and varied portion of program budgets spent on privatized services, the number of functions performed by private contractors has increased since 1990. In Virginia, for example, officials said that the state has recently begun to contract out case management and assessment functions in its welfare-to-work program, a function previously performed by government employees. State and local governments have also recently begun to privatize a broad array of child support enforcement services. While it is not uncommon for states to contract out certain child support enforcement activities, in 1996 we reported that 15 states had begun to privatize all the activities of selected child support enforcement offices in an effort to improve performance and handle growing caseloads. For most of the state and local governments we interviewed, privatized social services are now provided by nonprofit organizations, especially in child welfare. However, most of the state and local officials we contacted indicated that they also contract with for-profit organizations to deliver social services. The state and local officials we interviewed told us that among their programs the proportion of the budget for private contractors that is spent on for-profit organizations varied, ranging from as low as zero for child welfare to as high as 100 percent for child support enforcement. Within each program, the proportion of funds paid to for-profit organizations has remained about the same since 1990. A variety of reasons have prompted states and localities to contract out social services. The growth in privatization has most often been prompted by strong support from top government officials, an increasing demand for public services, and the belief that private contractors are able to provide higher-quality services more cost-effectively because of their management flexibility. 
In addition, state and local governments have chosen to contract out to compensate for the lack of government expertise in certain service areas, such as in the development of automated information systems. The following examples highlight common privatization scenarios: Several local child support offices in Virginia each contracted with a for-profit organization to provide a full range of program services such as locating absent parents, establishing paternity and support orders, and collecting support payments. The local offices undertook these contracts to improve program effectiveness and efficiency. Some California counties privatized job training and placement services in their GAIN program as a way to meet new state-legislated program requirements or avoid hiring additional government employees. Some state and local governments have expanded already privatized services in programs such as child care to respond to a greater public demand for services. Texas contracts out the electronic delivery of food stamp and other benefits in order to use the technical expertise of private providers. State and local government officials and other experts told us they expect the growth of privatization to continue. Increasingly, future trends in privatization may incorporate additional functions traditionally performed by state and local governments. For example, as a result of the recent welfare legislation, state and local governments now have greater flexibility in deciding how welfare programs will be administered, including an expanded authority that allows them to use private contractors to determine eligibility, an activity that has traditionally been conducted by government employees. Additionally, the Congress has shown greater interest in broadening the range of government activities that could be privatized in other social service programs. Such activities include eligibility and enrollment determination functions in the Medicaid and Food Stamp programs. The Clinton administration has opposed these proposals to expand privatization, stating that the certification of eligibility for benefits and related operations, such as verification of income and other eligibility factors, should remain public functions. In addition to the changes anticipated from the welfare legislation and more recent legislative proposals, state and local officials anticipate that privatization will continue to increase in the three other social service programs we examined. In child welfare services, according to a 1997 Child Welfare League of America survey, 31 states are planning or implementing the privatization of certain management functions or the use of managed care approaches, applying some combination of managed care principles currently used in physical and behavioral health services to the management, financing, and delivery of child welfare services. These principles include contracting to meet all the needs of a specific group of clients for a set fee rather than being paid for each service they provide. Also, in child care programs, states are increasingly privatizing the management of their voucher systems. In these cases, contractors manage the system that provides vouchers or cash certificates to families who purchase child care services from authorized providers. Finally, in child support enforcement, state program officials expect that more states will begin to contract out the full range of child support services.
In two California counties we contacted, county officials, after initially contracting out for certain services, decided to discontinue the practice and now have those services performed by county employees. Los Angeles County, for example, had contracted with a for-profit organization to perform the case management function in its GAIN program; however, following a change in the composition of the county’s board of supervisors, the board opposed privatizing these functions. Program officials did not renew the contract. In San Bernardino County’s GAIN program, a portion of the job search services was initially contracted out because the county did not itself have the capacity to provide all such services when the program was first implemented. Once the county hired and trained the necessary public workers, the contractor’s services were no longer needed and the contract was terminated. In both these cases, local program officials were satisfied with the contractors’ performance. Federal, state, and local government officials, union representatives, national associations, advocacy groups, contractors, and other experts in social service privatization identified several challenges that state and local governments most often encountered when they privatized social services. These challenges include obtaining a sufficient number of qualified bidders, developing sufficiently detailed contract specifications, and implementing effective methods of monitoring contractor performance. The challenges may make it difficult for state and local governments to reduce program costs and improve services. State and local government officials we contacted reported mixed results from their past and present efforts to privatize social services. However, few empirical studies compare the program costs and quality of publicly and privately provided services, and the few studies that do make such comparisons report mixed results overall. Competition has long been held as a principle central to the efficient and effective working of businesses in a free-market economy. In a competitive market, multiple parties attempt to secure the business of a customer by offering the most favorable terms. Competition in relation to government activities can occur when private sector organizations compete among themselves or public sector organizations compete with the private sector to conduct public sector business. In either case, competition for government business attempts to bring the same advantages of a competitive market economy—lower prices and higher-quality goods or services—to the public sector. Competitive markets can help governments reduce program costs and improve service quality. In many cases, the benefits from competition have been established for nonsocial service programs, such as trash collection, traffic enforcement, and other functions intended to maintain or improve a government’s infrastructure. State and local governments that have contracted out public works programs competitively have documented cost savings, improved service delivery, or gained customer satisfaction. By contracting out, for example, the city of Indianapolis has already accrued cost savings and estimated that it would save a total of $65 million, or 42 percent, in its wastewater treatment operations between 1994 and 1998. The city also reported that the quality of the water it treated improved.
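To put the Indianapolis estimate in perspective, the savings figure and percentage imply a baseline cost that the text does not state. The back-of-envelope arithmetic below is purely illustrative and assumes the $65 million represents 42 percent of the city's projected 1994-1998 wastewater treatment costs.

\[
\text{implied baseline cost (1994-1998)} \approx \frac{\$65\ \text{million}}{0.42} \approx \$155\ \text{million};
\qquad
\text{implied cost with contracting} \approx \$155\ \text{million} - \$65\ \text{million} = \$90\ \text{million}.
\]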
In addition, New York State estimated that it saved $3 million annually by contracting out certain economic development and housing loan functions. However, not all experts agree whether it is possible to achieve the same results with privatization of social service programs. Some experts believe that competition among social service providers can indeed reduce program costs and improve services for children and families since, in their view, private firms inherently deliver higher-quality services at lower costs than public firms. In contrast, other experts hold that social services are significantly different from services such as trash collection or grounds maintenance—so different, in fact, that one cannot assume that competition will be sufficient to increase effectiveness or reduce costs. Several factors make it difficult to establish and maintain competitive markets with contractors that can respond to the diverse and challenging needs of children and families. These factors include the lack of a large number of social service providers with sufficiently skilled labor, the high cost of entry into the social services field, and the need for continuity of care, particularly in services involving residential placement or long-term therapy. Some experts believe that these constraints reduce the likelihood of achieving the benefits anticipated from social service privatization. Appendix II contains a more detailed comparison of characteristics associated with privatizing social services and nonsocial services. Many state and local program officials we contacted reported that they were satisfied with the number of qualified bidders in their state or locality. However, some of these officials expressed concern about the insufficient number of qualified bidders, especially in rural areas and when the contracted service calls for higher-skilled labor. For example, in certain less-urban locations, officials found only one or two contractors with the requisite skills and expertise to provide needed services. In Wisconsin, some county child welfare officials told us that their less-populous locations made them dependent on a single off-site contractor to provide needed services. As a result, program officials believed, the contractor was less responsive to local service needs than locally based public providers usually are. Similarly, officials in Virginia’s welfare-to-work program said rural areas of the state have less-competitive markets for services, thereby minimizing benefits from contracting by raising contractor costs to levels higher than they would be in a more competitive market. State and local officials also encountered situations with few qualified bidders when they contracted for activities that required higher-skilled labor. In Texas, only one contractor bid to provide electronic benefit transfer services for recipients of cash assistance and other benefits, and the bid exceeded anticipated cost estimates. Faced with only one bidder, the state had to rebid the contract and cap the funds it was willing to pay. Although state and local program officials reported instances of insufficient qualified bidders, we found few empirical studies of social service programs that examine the link between the level of competition and costs, or service quality, and these studies taken together were inconclusive.
Given the uncertainties of the market, several state and local governments can use creative approaches to augment the competitive environment in order to reduce program costs and improve services. For example, under “managed competition” a government agency may prepare a work proposal and submit a bid to compete with private bidders. The government may award the contract to the bidding agency or to a private bidder. In Wisconsin, counties are competing against nongovernment providers to provide welfare-to-work services in the state’s Wisconsin Works program. Some state and local governments have configured their service delivery system to encourage ongoing competition between private and public providers. In some cases, a jurisdiction awards a contract to a private provider to serve part of its caseload and allows its public agency to continue to serve the rest. The competition fostered between public and private providers can lead to improved services, as in both the Orange County and San Bernardino County GAIN programs. In these counties, program officials concluded that when public agencies provide services side-by-side with private providers, both government personnel and private sector personnel were motivated to improve their performance. In Orange County, GAIN program job placements increased by 54 percent in 1995 when both the public agency and a private provider provided job placements to different groups of clients, compared with 1994, when only the public agency provided job placement services to all clients. While many state and local government officials advocate privatization, others believe that it is possible, through better management, to reduce the costs and improve the quality of services delivered by programs that government employees administer. Internal management techniques include basing performance on results, consolidating and coordinating human services, and reforming management systems. For example, the Oregon Option, a partnership between the federal government and the state, aims to, among other things, improve the delivery of social services by forging partnerships among all levels of government for the purpose of focusing on measurable results. Successful contracting requires devoting adequate attention and resources to contract development and monitoring. Even when contractors provide services, the government entity remains responsible for the use of the public resources and the quality of the services provided. Governments that privatize social services must oversee the contracts to fully protect the public interest. One of the most important, and often most difficult, tasks in privatizing government activities is writing clear contracts with specific goals against which contractors can be held accountable. Although some program officials told us that they had an ample number of staff who were experienced with these tasks, others said that they had an insufficient number of staff with the requisite skills to prepare and negotiate contracts. When contract requirements are vague, both the government and contractor are left uncertain as to what the contractor is expected to achieve. Contract monitoring should assess the contractor’s compliance with statutes, regulations, and the terms of the agreement, as well as evaluate the contractor’s performance in delivering services, achieving desired program goals, and avoiding unintended negative results. 
In this and previous reviews of privatization efforts, we found that monitoring contractors’ performance was the weakest link in the privatization process. Increasingly, governments at all levels are trying to hold agencies accountable for results, amid pressures to demonstrate improved performance while cutting costs. Privatization magnifies the importance of focusing on program results, because contractor employees, unlike government employees, are not directly accountable to the public. However, monitoring the effectiveness of social service programs, whether provided by the government or through a contract, poses special challenges because program performance is often difficult to measure. State and local governments have found it difficult to establish a framework for identifying the desired results of social service programs and to move beyond a summary of a program’s activities to distinguish desired outcomes or results of those activities, such as the better well-being of children and families or the community at large. For example, a case worker can be held accountable for making a visit, following up with telephone calls, and performing other appropriate tasks; however, it is not as easy to know whether the worker’s judgment was sound and the intervention ultimately effective. Without a framework for specifying program results, several state and local officials said that contracts for privatized social services tend to focus more on the day-to-day operations of the program than on service quality. For example, officials in San Francisco’s child care program told us that their contracts were often written in a way that measured outputs rather than results, using specifications such as the number of clients served, amount of payments disbursed, and the total number of hours for which child care was provided. In addition, monitoring efforts focused on compliance with the numbers specified in the contracts for outputs rather than on service quality. These practices make it difficult to hold contractors accountable for achieving program results, such as providing children with a safe and nurturing environment so that they can grow and their parents can work. Reliable and complete cost data on government activities are also needed to assess a contractor’s overall performance and any realized cost savings. However, data on costs of publicly provided services are not always adequate or available to provide a sound basis for comparing publicly and privately provided services. In some cases, preprivatization costs may not be discernible for a comparable public entity, or the number of cases available may be insufficient to compare public and privatized offices’ performance. In other cases, the privatized service may not have been provided by the public agency. To address many of the difficulties in monitoring contractor performance, government social service agencies are in the early stages of identifying and measuring desired results. For example, California’s state child care agency is developing a desired-results evaluation system that will enable state workers to more effectively monitor the results of contractors’ performance. Many agencies may need years to develop a sound set of performance measures, since the process is iterative and contract management systems may need updating to establish clear performance standards and develop cost-effective monitoring systems. 
In the child support enforcement program, for example, performance measures developed jointly by HHS and the states provide the context for each state to assess the progress contractors make toward establishing paternities, obtaining support orders, and collecting support payments. Developing the agreed-upon program goals and performance measures was a 3-year process. Some experts in social service privatization have expressed concern that contractors, especially when motivated by profit-making goals and priorities, may be less inclined to provide equal access to services for all eligible beneficiaries. These experts believe that contractors may first provide services to clients who are easiest to serve, a practice commonly referred to as “creaming,” leaving the more difficult cases to the government to serve or leaving them unserved. Among the organizations we contacted—federal, state, and local governments, unions, public interest and advocacy groups, and contractors—we found differing views on whether all eligible individuals have the same access to privatized services as they had when such services were publicly provided. Generally, federal, state, and local government officials whom we interviewed were as confident in contractors as they were in the government to grant equal access to services for all eligible citizens. For example, an official in Wisconsin said that after privatization of some county welfare-to-work services, she saw no decline in client access to services. In contrast, representatives from advocacy groups and unions were less confident that contractors would provide equal access to services for all eligible citizens than the government would. We found no conclusive research that evaluated whether privatization affects access to services. Various groups have also raised concerns about recent changes that permit contractors to perform program activities that government employees traditionally conduct. Advocacy groups, unions, and some HHS officials expressed concern about privatizing activities that have traditionally been viewed as governmental, such as determining eligibility for program benefits or services, sanctioning beneficiaries for noncompliance with program requirements, and conducting investigations of child abuse and neglect for purposes of providing child protective services. Under federal and state requirements, certain activities in most of the programs we studied were to be performed only by government employees. Under TANF, however, contractors can determine program eligibility. Several union representatives and contractors told us that they believe certain functions, including policy-making responsibilities and eligibility determinations, often based on confidential information provided by the service recipient and requiring the judgment of the case worker, should always be provided by government employees. Officials from several of the organizations we interviewed believe that equal access to services and other recipient rights can be protected by making several practices an integral part of social service privatization. Two contractor representatives said that carefully crafted contract language could help ensure that contracted services remain as accessible as publicly provided services. Other officials told us that remedies for dispute resolution should be provided to help beneficiaries resolve claims against contractors. 
Another suggested practice would require government agencies to approve contractor recommendations or decisions regarding clients in areas traditionally under government jurisdiction. In the Los Angeles County GAIN program, for example, county officials had to approve contractor recommendations to sanction certain clients for noncompliance with program requirements before those sanctions could be applied. While these options may provide certain protections, they may be difficult to implement. The limited experiences of state and local governments in writing and monitoring contracts with clearly specified results could lead to difficulties in determining which clients are eligible for services and in determining whether or not these clients received them. In addition, advocacy groups and unions said some remedies for dispute resolution might be difficult to implement because contractors do not always give beneficiaries the information they need to resolve their claims. Finally, others noted that any additional government review of contractor decisions can be costly and can reduce contractor flexibility. While numerous experts believe that contracted social services can reduce costs and improve service quality, a limited number of studies and evaluations reveal mixed results, as illustrated by the following examples: Our previous report on privatization of child support enforcement services found that privatized child support offices performed as well as or, in some instances, better than public programs in locating noncustodial parents, establishing paternity and support orders, and collecting support owed. The relative cost-effectiveness of the privatized versus public offices varied among the four sites examined. Two privatized offices were more cost-effective, one was as cost-effective, and one was less cost-effective. A California evaluation of two contracts in Orange County’s GAIN employment and training program found that the one contract for orientation services resulted in good service quality and less cost than when performed by county employees. The other contract for a portion of case management services had more mixed results; the contractor did not perform as well as county staff on some measures but was comparable on others. For example, county workers placed participants in jobs at a higher rate and did so more cost-effectively than private workers. Yet client satisfaction with contractor- and county-provided services was comparable. A comparison of public and private service delivery in Milwaukee County, Wisconsin, found that the cost of foster care services was higher when provided by private agencies than when provided by county staff. Further, the private agencies did not improve the quality of services when measured by the time it took to place a child in a permanent home or by whether the child remained in that home. State governments have contracted to upgrade automated data systems in the child support enforcement program. Since 1980, states have spent a combined $2.6 billion on automated systems—with $2 billion of the total being federally funded. As we reported earlier, these systems appear to have improved caseworker productivity by helping track court actions relating to paternity and support orders and amounts of collections and distributions. According to HHS, almost $11 billion in child support payments were collected in 1995—80 percent higher than in 1990. 
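For context, the 80-percent increase reported by HHS implies an approximate level of 1990 collections that the text does not state. The calculation below is an illustrative back-calculation from the cited figures, treating the nearly $11 billion collected in 1995 as exact.

\[
\text{1990 collections} \approx \frac{\$11\ \text{billion}}{1.80} \approx \$6.1\ \text{billion}.
\]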
While it is too early to judge the potential of fully operational automated systems, at least 10 states are now discovering that their new systems will cost more to operate once they have been completed. One state estimated that its new system, once operational, would cost three to five times more to operate than the old system and could exceed former operating costs by as much as $7 million annually. Potential savings from privatizing social services can be offset by various factors, such as the costs associated with contractor start-up and government monitoring. While direct costs attributable to service delivery may be reduced, state and local agencies may incur additional costs for transition, contract management, and the monitoring of their privatization efforts. Despite the lack of empirical evidence, most state and local government officials told us they were satisfied with the quality of privatized services. Some officials said that efficiencies were realized as a result of contractors’ expertise and management flexibility. In many cases, public agencies established collaborative relationships with private providers that helped them be more responsive to beneficiaries. Still other officials, however, said they saw no significant benefits resulting from privatization because outcomes for children and families were the same as when the government provided the service. For example, Milwaukee’s privatization of foster care services had not improved the proportion of children who remained in permanent homes, a specified goal of the program. The increase in privatization, combined with the difficulties states are having in developing methods to monitor program results, raises questions about how HHS can ensure that broad program goals are achieved. It will be challenging for HHS to develop and implement approaches to help states assess results of federally funded programs and track them over time so that state and local governments are better prepared to hold contractors accountable for the services they provide. Currently, monitoring program results poses a challenge throughout the government. Some state and local government officials whom we interviewed believed they should pay greater attention to program results, given the increased use of private contractors. Several officials mentioned that HHS could help the states and localities develop methods of assessing program results by clarifying program goals, providing more responsive technical assistance, and sharing best practices. The fact that officials in most of the states we contacted said they currently do not have methods in place to assess program results suggests that unless HHS provides states with this help, it will have difficulty assessing the effectiveness of social service programs nationally. HHS’ current focus on compliance with statutes and regulations poses a challenge in monitoring the effectiveness of state programs and in identifying the effects of privatization on these programs. HHS carries out its oversight function largely through audits conducted by the Office of the Inspector General, program staff, and other HHS auditors. HHS officials told us that the department has focused its auditing of the states on compliance with federal statutes and regulations more than on other areas, such as results achieved or client satisfaction. For example, HHS may conduct a compliance audit to verify that state programs spent federal money in ways that are permitted by federal regulations.
The Government Performance and Results Act of 1993 may provide an impetus for HHS to place a greater emphasis on monitoring the effectiveness of state programs. Under this act, federal agencies are required to develop a framework for reorienting program managers toward achieving better program results. As a federal agency, HHS must shift its focus from compliance to developing and implementing methods to assess social service program results. However, this transition will not be easy, given the challenge that government agencies face when attempting to orient their priorities toward achieving better program results and the difficulty inherent in defining goals and measuring results for social service programs. Some agencies within HHS have made progress in including the assessment and tracking of program results within their oversight focus. For example, the Office of Child Support Enforcement has recently increased its emphasis on program results by establishing, in conjunction with the states, a strategic plan and a set of performance measures for assessing progress toward achieving national program goals. Child support enforcement auditors have also recently begun to assess the accuracy of state-reported data on program results. These initiatives may serve as models for HHS as it attempts to enhance accountability for results in social service programs supported with federal funds. Our work suggests that privatization of social services has not only grown but is likely to continue to grow. Under the right conditions, contracting for social services may result in improved services and cost savings. Social service privatization is likely to work best at the state and local levels when competition is sufficient, contracts are effectively developed and monitored by government officials, and program results are assessed and tracked over time. The observed increase in social service privatization highlights the need for state and local governments to specify desired program results and monitor contracts effectively. At the same time, the federal government, through the Government Performance and Results Act of 1993, is focusing on achieving better program results. These concurrent developments should facilitate more effective privatized social services. More specifically, HHS, in responding to its Government Performance and Results Act requirements, could help states find better ways to manage contracts for results. This could, in turn, help state and local governments ensure that they are holding contractors accountable for the results they are expected to achieve, thus optimizing their gains from privatization. We provided draft copies of this report to HHS, the five states we selected for review, and other knowledgeable experts in social service privatization. HHS did not provide comments within the allotted 30-day comment period. We received comments from California, Texas, and Virginia. These states generally concurred with our findings and conclusions. Specifically, officials from Texas and Virginia agreed that developing clear performance measures and monitoring contractor performance present special challenges requiring greater priority and improvement. These states also support a stronger federal-state partnership to help them address these special challenges.
Other acknowledged experts in social service privatization who provided comments also concurred with the report and cited the need to increase competition, develop effective contracts, and monitor contractor performance, thereby increasing the likelihood that state and local governments would achieve the results sought through social service privatization. The comments we received did not require substantive or technical changes to the report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of HHS and HHS’ Assistant Secretary for Children and Families. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact Kay E. Brown, Assistant Director, or Mark E. Ward, Senior Evaluator, at (202) 512-7215. Other major contributors to this report are Gregory Curtis, Joel I. Grossman, Karen E. Lyons, and Sylvia L. Shanks. To meet the objectives of this study, we conducted a literature review, synthesizing articles, studies, and other documents on social service privatization selected from economic, social science, and business bibliographic files. We also considered articles and studies recommended by other organizations. As a result of these efforts, we selected 14 articles or studies on social service privatization in the United States. These articles are listed in the bibliography. We chose the four programs included in our study because they constitute an increasingly important component of the nation’s welfare system in terms of both the diversity of services they provide and the magnitude of federal funding used to support state program administration. To select states for study, we reviewed GAO reports and other studies of privatization and decided to interview state and local government officials in California, Massachusetts, Texas, Virginia, and Wisconsin regarding their respective child care, child welfare, child support enforcement, and family assistance programs supported by TANF. We selected these states to learn how state and local governments have implemented privatized services among the four social service programs we included in our review. We also chose these states because we were aware that they had some experience with the privatization of social services and because they allowed us to examine a mix of state- and county-administered social service programs. To broaden our coverage of the diverse views on privatization, we also interviewed officials of HHS, national associations and advocacy groups, unions, and contractors. During our interviews, we obtained and reviewed agency documents. For our interviews, we used semistructured guides containing both closed and open-ended questions to obtain information on the extent of recent social service privatization, type of program functions being privatized, issues leading to the decision to privatize, issues in implementation of social service privatization, degree and type of monitoring and evaluation conducted, and federal policy implications stemming from social service privatization. We conducted 36 interviews in total concerning the four social service programs we studied.
In conducting our interviews, we asked the interviewees to respond from the perspective that seemed to us most consistent with their knowledge base and area of primary interest. For example, we asked state program officials to respond from the perspective of their entire state, whereas we asked local officials to base their responses solely on their experiences in their own locality. Similarly, we asked officials in HHS, national associations and advocacy groups, unions, and contractors to provide a national perspective on key issues surrounding privatization in each of the four social service programs. The interview responses that we report on reflect the views of only the officials we interviewed. The following information lists the federal, state, and local government, union, advocacy group, national association, and contractor contacts we made. The number of interviews conducted with representatives of each organization appears in parentheses: Department of Health and Human Services, Administration for Children and Families (6); Department of Education (1); Department of Social Services (3); Department of Public Social Services, Employment Program Bureau (1); District Attorney’s Office, Bureau of Family Support Operations (1); Jobs and Employment Services Department (1); San Francisco City and County Department of Human Services, Employment and Training Services (1); Department of Human Services, Family and Children’s Services Division (1); Social Service Agency, Family and Children Services Division (1); Department of Social Services (1); Department of Transitional Assistance (1); Department of Social Services (2); Department of Human Services (1); Department of Protective and Regulatory Services (1); State Attorney General’s Office (1); Department of Health and Family Services (1); Department of Workforce Development (1); Department of Human Services (1); Department of Child Services (1); American Public Welfare Association (1); Center for Law and Social Policy (1); Child Welfare League of America (1); National Association of Counties (1); National Conference of State Legislatures (1); National Governors Association (1); Maximus, Government Operations Division (1); Lockheed Martin IMS (1). We conducted our study between October 1996 and July 1997 in accordance with generally accepted government auditing standards. (Appendix II comparison, excerpt: performance is difficult to measure because most services cannot be judged on the basis of client outcomes; treatment approaches cannot be standardized, nor can the appropriateness of workers’ decisions be effectively assessed.) Chi, K.S. “Privatization in State Government: Trends and Options.” Prepared for the 55th National Training Conference of the American Society for Public Administration, Kansas City, Missouri, July 23-27, 1994. Donahue, J.D. “Organizational Form and Function.” The Privatization Decision: Public Ends, Private Means. New York: Basic Books, 1989. Pp. 37-56. Drucker, P.F. “The Sickness of Government.” The Age of Discontinuity: Guidelines to Changing Our Society. New York: Harper and Row, 1969. Pp. 212-42. Eggers, W.D., and R. Ng. Social and Health Service Privatization: A Survey of County and State Governments, Policy Study 168. Los Angeles, Calif.: Reason Foundation, Oct. 1993. Pp. 1-18. Gronbjerg, K.A., T.H. Chen, and M.A. Stagner. “Child Welfare Contracting: Market Forces and Leverage.” Social Service Review (Dec. 1995), pp. 583-613. Leaman, L.M., and others.
Evaluation of Contracts to Privatize GAIN Services, County of Orange, Social Services Agency, December 1995. Matusiewicz, D.E. “Privatizing Child Support Enforcement in El Paso County.” Commentator, Vol. 6, No. 32 (Sept.-Oct. 1995), p. 16. Miranda, R. “Privatization and the Budget-Maximizing Bureaucrat.” Public Productivity and Management Review, Vol. 17, No. 4 (summer 1994), pp. 355-69. Nelson, J.I. “Social Welfare and the Market Economy.” Social Science Quarterly, Vol. 73, No. 4 (Dec. 1992), pp. 815-28. O’Looney, J. “Beyond Privatization and Service Integration: Organizational Models for Service Delivery.” Social Service Review (Dec. 1993), pp. 501-34. Smith, S.R., and M. Lipsky. “Privatization of Human Services: A Critique.” Nonprofits for Hire: The Welfare State in the Age of Contracting. Cambridge, Mass.: Harvard University Press, 1994. Pp. 188-205. Smith, S.R., and D.A. Stone. “The Unexpected Consequences of Privatization.” Remaking the Welfare State: Retrenchment and Social Policy in America and Europe, Michael K. Brown (ed.). Philadelphia, Pa.: Temple University Press, 1988. Pp. 232-52. VanCleave, R.W. “Privatization: A Partner in the Integrated Process.” Commentator, Vol. 6, No. 32 (Sept.-Oct. 1995), pp. 14-17. Weld, W.F., and others. An Action Agenda to Redesign State Government. Washington, D.C.: National Governors’ Association, 1993. Pp. 42-63. The Results Act: Observations on the Department of Health and Human Services’ April 1997 Draft Strategic Plan (GAO/HEHS-97-173R, July 11, 1997). Child Support Enforcement: Strong Leadership Required to Maximize Benefits of Automated Systems (GAO/AIMD-97-72, June 30, 1997). Privatization and Competition: Comments on S. 314, the Freedom From Government Competition Act (GAO/T-GGD-97-134, June 18, 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997). Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997). Welfare Reform: Three States’ Approaches Show Promise of Increasing Work Participation (GAO/HEHS-97-80, May 30, 1997). Welfare Reform: Implications of Increased Work Participation for Child Care (GAO/HEHS-97-75, May 29, 1997). Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise (GAO/HEHS-97-73, May 7, 1997). Privatization: Lessons Learned by State and Local Governments (GAO/GGD-97-48, Mar. 14, 1997). Child Welfare: States’ Progress in Implementing Family Preservation and Support Activities (GAO/HEHS-97-34, Feb. 18, 1997). Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices (GAO/HEHS-97-4, Dec. 16, 1996). Child Support Enforcement: Reorienting Management Toward Achieving Better Program Results (GAO/HEHS/GGD-97-14, Oct. 25, 1996). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996). District of Columbia: City and State Privatization Initiatives and Impediments (GAO/GGD-95-194, June 28, 1995).
Pursuant to a congressional request, GAO examined issues related to social service privatization, focusing on the: (1) recent history of state and local government efforts to privatize federally funded social services; (2) key issues surrounding state and local privatized social services; and (3) federal policy implications of state and local social service privatization. GAO found that: (1) since 1990, more than half of the state and local governments GAO contacted have increased their contracting for services, as indicated by the number and type of services privatized and the percentage of social service budgets paid to private contractors; (2) many experts GAO consulted expect privatization to expand further; (3) GAO's research found that the recent increases in privatization were most often prompted by political leaders and top program managers, who were responding to an increasing demand for public services and a belief that contractors can provide higher-quality services more cost-effectively than can public agencies; (4) in attempts to provide more cost-effective services, more states are contracting out larger portions of their child support enforcement programs; (5) state and local governments are turning to contractors to provide some services and support activities in which they lack experience or technical expertise; (6) state and local governments face several key challenges as they plan and implement strategies to privatize their social services; (7) first is the challenge to obtain sufficient competition to realize the benefits of privatization; (8) second, state and local governments often have little experience in developing contracts that specify program results in sufficient detail to effectively hold contractors accountable; (9) third, it can be difficult for states to monitor performance in some social service programs; (10) increased privatization raises questions about how the Department of Health and Human Services (HHS) will fulfill its obligation to ensure that broad program goals are achieved; (11) assessing program results presents a significant challenge throughout the government, yet it is an important component of an effective system for holding service providers accountable; (12) the difficulties the states have in monitoring privatized social services focus attention on the need to improve accountability for results; (13) some of the state and local officials GAO interviewed believe that HHS should clarify its program goals and develop performance measures states can use to monitor and evaluate contractor efforts; (14) the Government Performance and Results Act of 1993 requires federal agencies like HHS to focus their efforts on achieving better program results; (15) HHS' practice of holding states accountable primarily for compliance with statutes and regulations may make the transition particularly difficult; and (16) however, promising approaches are available within HHS in moving to a program results orientation.
You are an expert at summarizing long articles. Proceed to summarize the following text: One of the main purposes of guidance is to explain and help regulated parties comply with agency regulations. As shown in figure 1, guidance may explain how agencies plan to interpret regulations. Agencies sometimes include disclaimers in their guidance to note that the documents have no legally binding effect on regulated parties or the agencies. Even though not legally binding, guidance documents can have a significant effect on regulated entities and the public, both because of agencies’ reliance on large volumes of guidance documents and because the guidance can prompt changes in the behavior of regulated parties and the general public. Nevertheless, defining guidance can be difficult. To illustrate that difficulty, several of the components told us that they do not consider many of the communication documents they issue to the public to be guidance. Regulations and guidance documents serve different purposes. The Administrative Procedure Act (APA) established broadly applicable requirements for informal rulemaking, also known as notice and comment rulemaking. Among other things, the APA generally requires that agencies publish a notice of proposed rulemaking in the Federal Register. After giving the public an opportunity to comment on the proposed regulation by providing “written data, views, or arguments,” and after considering the public comments received, the agency may then publish the final regulation. To balance the need for public input with competing societal interests favoring the efficient and expeditious conduct of certain government affairs, the APA exempts certain types of rules from the notice and comment process, including “interpretative rules” (we will refer to these as interpretive rules in this statement) and “general statements of policy.” Regulations affect regulated entities by creating binding legal obligations. Regulations are generally subject to judicial review by the courts if, for example, a party believes that an agency did not follow required rulemaking procedures or went beyond its statutory authority. Despite the general distinctions between regulations and guidance documents, legal scholars and federal courts have at times noted that it is not always easy to determine whether an agency action should be issued as a regulation subject to the APA’s notice and comment requirements, or is guidance or a policy statement, and therefore exempt from these requirements. Among the reasons agency guidance may be legally challenged are procedural concerns that the agency inappropriately used guidance rather than the rulemaking process or concerns that the agency has issued guidance that goes beyond its authority. On March 9, 2015, the Supreme Court held that an agency could make substantive changes to an interpretive rule without going through notice and comment under the APA. This decision overturned prior federal court rulings that had held that an agency is precluded from substantively changing its interpretation of a regulation through issuance of a new interpretive rule without notice and comment. Other concerns raised about agency use of guidance include consistency of the information being provided, currency of guidance, and whether the documents are effectively communicated to affected parties. An OMB Bulletin establishes policies and procedures for the development, issuance, and use of “significant” guidance documents. 
OMB defines “significant guidance documents” as guidance with a broad and substantial impact on regulated entities. Pursuant to a memorandum issued by the Director of OMB in March 2009, OMB’s Office of Information and Regulatory Affairs (OIRA) reviews some significant guidance documents prior to issuance. All significant guidance documents, whether reviewed by OIRA or not, are subject to the OMB Bulletin. “Economically significant guidance documents” are also published in the Federal Register to invite public comment. Non-significant guidance is not subject to the OMB Bulletin, and any procedures for developing and disseminating it are left to agency discretion. Selected departments considered few of their guidance documents to be significant as defined by OMB. For example, as of February 2015, agencies listed the following numbers of significant guidance documents on their websites: Education, 139; DOL, 36; and USDA, 34. We were unable to determine the number of significant guidance documents issued by HHS. In contrast, some of the agencies issued hundreds of non-significant guidance documents. All selected components told us that they did not issue any economically significant guidance. OIRA staff told us they accepted departments’ determinations of which types of guidance meet the definition of significant guidance. The selected components we reviewed differed in both the terminology they used for their external non-significant guidance documents and in the amounts of non-significant guidance they issued. We found the components used many names for these guidance documents—for example, Education components’ guidance documents included FAQs and “Dear Colleague” letters, while DOL components used varied terms including bulletins, “Administrator Interpretations,” directives, fact sheets, and policy letters. The components issued varying amounts of guidance, ranging from 10 to more than 100 documents per component in a single year. Component officials said a component’s mission or the types of programs it administers can affect the number of guidance documents issued. Officials from DOL’s Bureau of Labor Statistics (BLS) told us their agency, as a non-regulatory component, rarely issues guidance. They said BLS has issued about 10 routine administrative memorandums each year related to the operation of two cooperative agreement statistical programs. In contrast, DOL Occupational Safety and Health Administration (OSHA) officials told us they have regularly issued guidance to assist with regulatory compliance, and could easily produce 100 new or updated products each year to provide guidance to stakeholders. Although the DOL Office of Workers’ Compensation Programs has regulatory authority, officials told us that they have not frequently issued guidance because their authorizing statutes have not changed recently and their programs focus on administering benefits. Agencies have used guidance for multiple purposes, including explaining or interpreting regulations, clarifying policies in response to questions or compliance findings, disseminating suggested practices or leadership priorities, and providing grant administration information. Component officials told us they used guidance to summarize regulations or explain ways for regulated entities to meet regulatory requirements.
For example, Education officials told us that they often follow their regulations with guidance to restate the regulation in plainer language, to summarize requirements, to suggest ways to comply with the new regulation, or to offer best practices. In a few cases, components used guidance to alert affected entities about immediate statutory requirements or to anticipate upcoming requirements to be promulgated through the rulemaking process. Education officials told us they often used guidance to help their field office staff understand and apply new statutory requirements. While this may provide timely information about new or upcoming requirements, it may also cause confusion as details are revised during the rulemaking process. Officials from USDA’s Food and Nutrition Service (FNS) told us that when a new statute becomes effective immediately and there is little ambiguity in how the statute can be interpreted, they use a “staging process.” In this process, they issue informational guidance so their stakeholders are aware of and consistently understand new requirements before the more time-consuming rulemaking process can be completed. Other officials told us that in rare instances, they have issued guidance while a proposed rule is out for comment. They noted that statutory deadlines for implementation may require them to issue guidance before issuing a final rule. Component officials cited instances in which they used guidance to provide information on upcoming requirements to be promulgated through regulation to those affected. In one example, HHS’s Office of Child Care within the Administration for Children and Families issued recommendations to its grantees to foreshadow future binding requirements. In that case, the office issued an Information Memorandum in September 2011 recommending criminal background checks. It later published a proposed rule in May 2013 to mandate the background checks. Multiple component officials told us that they used guidance to clarify policies in response to questions received from the field, or regional office input about questions received from grantees or regulated entities. Officials at Education’s Office for Civil Rights and OSHA told us that they often initiated guidance in response to findings resulting from their investigatory or monitoring efforts, among other things. Component officials also told us that they used guidance to distribute information on program suggestions (sometimes called best practices). In particular, we heard this from component officials who administered formula grants in which wide discretion is given to grantees, such as states. Officials at Education’s Office of Postsecondary Education told us that component leadership initiates guidance related to priorities the administration wants to accomplish. One example they cited was a Dear Colleague letter explaining that students confined or incarcerated in locations such as juvenile justice facilities were eligible for federal Pell grants. Components that administered grants also issued procedural guidance related to grant administration. For example, BLS issued routine administrative memorandums to remind state partners of federal grant reporting requirements and closeout procedures. In other examples, DOL provided guidance on how to apply and comply with Office of Disability Employment Policy grants. Officials considered a number of factors before deciding whether to issue guidance or undertake rulemaking. 
Among these factors, a key criterion was whether officials intended for the document to be binding (in which case they issued a regulation). Officials from all components that issue regulations told us that they understood when guidance was inappropriate and when regulation was necessary and that they consulted with legal counsel when deciding whether to initiate rulemaking or issue guidance. According to DOL officials, new regulations may need to be issued if components determined that current regulations could not reasonably be interpreted to encompass the best course of action, a solution was not case specific, or a problem was widespread. An Education official told us that Education considered multiple factors, including the objective to be achieved, when choosing between guidance and regulations. Similarly, HHS’s Administration for Community Living officials told us that they considered a number of factors, including whether the instructions to be disseminated were enforceable or merely good practice. For example, when Administration for Community Living officials noticed that states were applying issued guidance related to technical assistance and compliance for the state long-term care ombudsman program differently, they decided it would be best to clarify program actions through a regulation, as they could not compel the states to comply through guidance. Officials believed that a regulation would ensure consistent application of program requirements and allow them to enforce those actions. They issued the proposed rule in June 2013 and the final rule in February 2015. FNS officials told us that the decision to issue guidance or undertake rulemaking depended on (1) the extent to which the proposed document was anticipated to affect stakeholders and the public, and (2) what the component was trying to accomplish with the issued document. OIRA staff concurred that agencies understood what types of direction to regulated entities must go through the regulatory process. We found that agencies did not always adhere to OMB requirements for significant guidance. The OMB Bulletin establishes standard elements that must be included in significant guidance documents and directs agencies to (1) develop written procedures for the approval of significant guidance, (2) maintain a website to assist the public in locating significant guidance documents, and (3) provide a means for the public to submit comments on significant guidance through their websites. Education and USDA had written procedures for the approval of significant guidance as directed by OMB. While DOL had written approval procedures, they were not available to the appropriate officials and DOL officials noted that they required updating. HHS did not have any written procedures. We found that Education, USDA, and DOL consistently applied OMB’s public access and feedback requirements for significant guidance, while HHS did not. We made recommendations to HHS and DOL to better adhere to OMB’s requirements for significant guidance. Both agencies concurred with those recommendations. Without written procedures or wide knowledge of procedures for the development of significant guidance, HHS and DOL may be unable to ensure that their components consistently follow other requirements of the OMB Bulletin and cannot ensure consistency in their processes over time. 
Further, because agencies rely on their websites to disseminate guidance, it is important that they generally follow requirements and guidelines for online dissemination for significant guidance. In the absence of government-wide standards for the production of non-significant guidance, officials must rely upon internal controls—which are synonymous with management controls—to ensure that guidance policies, processes, and practices achieve desired results and prevent and detect errors. We selected four components of internal control and applied them to agencies’ guidance processes (see appendix I). Departments and components identified diverse and specific practices that addressed these four components of internal control. However, the departments and components typically had not documented their processes for internal review of guidance documents. Further, agencies did not consistently apply other components of internal control. Some of the selected components identified practices to address these internal controls that we believe could be more broadly applied by other agencies. Wider adoption of these practices could better ensure that components have internal controls in place to promote quality and consistency of their guidance development processes. To improve agencies’ guidance processes, we recommended that the Secretaries of USDA, HHS, DOL, and Education strengthen their components’ application of internal controls by adopting, as appropriate, practices developed by other departments and components, such as assessment of risk; written procedures and tools to promote the consistent implementation and communication of management directives; and ongoing monitoring efforts to ensure that guidance is being issued appropriately and has the intended effect. USDA, Education, HHS, and DOL generally agreed with the recommendations. Although no component can insulate itself completely from risks, it can manage risk by involving management in decisions to initiate guidance, prioritize among proposed guidance, and determine the appropriate level of review prior to issuance. In addition, if leadership is not included in discussions related to initiation of guidance, agencies risk expending resources developing guidance that is unnecessary or inadvisable. At a few components, officials told us that leadership (such as component heads and department-level management) decided whether to initiate certain guidance, and guidance did not originate from program staff for these components. For example, guidance at DOL’s Employee Benefits Security Administration related to legal, policy, and programmatic factors was proposed by office directors and approved by Assistant Secretaries and Deputy Assistant Secretaries. In most other cases, ideas for additional guidance documents originated from program staff and field offices or from leadership, depending on the nature of the guidance. Education officials told us that component program staff and leadership work together to identify issues to address in guidance. At most components, officials told us that they determine the appropriate level of review and final clearance of proposed guidance, and in many cases guidance was reviewed at a higher level if the document was anticipated to affect other offices or had a particular subject or scope. Risk was one factor agency officials considered when determining the appropriate level of review and final clearance of proposed guidance.
For example, officials at the Employee Benefits Security Administration told us that the need for department-level clearance depended on various factors, including likely congressional interest, potential effects on areas regulated by other DOL components, expected media coverage, and whether the guidance was likely to be seen as controversial by constituent groups. A few agencies reported they considered two other factors in making this decision: whether guidance was related to a major priority or would be “impactful.” Control activities (such as written procedures) help ensure that actions are taken to address risks and enforce management’s directives. Only 6 of the 25 components we reviewed had written procedures for the entire guidance production process, and several of these components highlighted benefits of these procedures for their guidance processes. These components included HHS’s Administration for Children and Families’ Office of Head Start and five DOL components. The DOL Mine Safety and Health Administration’s written procedures contained information officials described as essential to the effective and consistent administration of the component’s programs and activities. OSHA officials reported that their written procedures were designed to ensure that the program director manages the process for a specific policy document by considering feedback and obtaining appropriate concurrence to ensure that guidance incorporates all comments and has been cleared by appropriate officials. The Deputy Assistant Secretary resolves any disagreements about substance, potential policy implications, or assigned priority of the document. In contrast, Education’s Office of Innovation and Improvement and Office of Elementary and Secondary Education and DOL’s Veterans’ Employment and Training Service had written procedures only for the review and clearance phase. Components without written procedures said they relied on officials’ understanding of the guidance process. In these cases, officials told us that the guidance process was well understood by program staff or followed typical management hierarchies. Officials from all components could describe standard review practices to provide management the opportunity to comment and ensure that its comments were addressed by program staff. Nonetheless, documented procedures are an important internal control activity to help ensure that officials understand how to adequately review guidance before issuance. Most selected components had guidance practices to ensure intra-agency or interagency review (or both) of guidance documents before issuance. Obtaining feedback from management, internal offices, the public, and other interested parties is essential to ensuring guidance is effective. Intra-agency communications. To ensure that management concurrence was recorded, most components we reviewed used communication tools, such as electronic or hard-copy routing slips, to document approval for guidance clearance or to communicate with management and other offices about proposed or upcoming guidance. In particular, officials at 20 components used a routing slip to document management concurrence. Interagency communications. Most component officials told us that they conferred with other affected components or federal departments to ensure consistency during the development of guidance. External stakeholders.
Officials told us that feedback from external nonfederal stakeholders often served as the impetus for the initiation of guidance, and more than half of the selected components cited examples in which they conferred with external nonfederal stakeholders during the guidance development process. At OSHA, for example, external stakeholders were not involved in developing directives or issuing policy, but assisted with developing educational, non-policy guidance, such as hazard alerts. Nearly half of the components we reviewed did not regularly evaluate whether issued guidance was effective and up to date. Without a regular review of issued guidance, components can miss the opportunity to revisit whether current guidance could be improved and thereby provide better assistance to grantees and regulated entities. DOL’s Office of Labor-Management Standards officials told us they had not evaluated the relative success of existing guidance and therefore did not often revise guidance. A few selected components had initiated or established a process for tracking and evaluating guidance to identify necessary revisions. For example, in November 2011, officials at DOL’s Office of Federal Contract Compliance Programs initiated a 2-year project to review their directives system to ensure that they only posted up-to-date guidance. As a result of the project, in 2012 and 2013 officials identified necessary updates to guidance, clarified superseded guidance, and rescinded guidance where appropriate. Officials told us that these actions reduced the original number of directives by 85 percent. Officials also told us that they did this to ensure that their guidance was accurate, and that the actions resulted in officials posting only relevant and current guidance information on the component’s website. Officials told us they now routinely monitor their directives about once a year and review other guidance documents each time they issue new regulations or change a policy to decide if they need to revise them. DOL’s Employment and Training Administration used a checklist to review a list of active guidance documents and identified whether to continue, cancel, or rescind the guidance. In addition, officials indicated which documents were no longer active on their website. Lastly, DOL’s Mine Safety and Health Administration also ensured that program officials periodically reviewed and updated guidance documents and canceled certain guidance. Chairman Lankford, Ranking Member Heitkamp, and members of the Subcommittee, this concludes my prepared remarks. I look forward to answering any questions you may have. For questions about this statement, please contact me at (202) 512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Tim Bober, Assistant Director, Robert Gebhart, Shirley Hwang, Andrea Levine, and Wesley Sholtes.
Appendix I: Components of Internal Control Applied to Agencies’ Guidance Processes
Component of Internal Control: Risk Assessment. Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Once risks have been identified, they should be analyzed for their possible effects.
Application to Guidance Processes: Agencies should assess the level of risk associated with potential guidance at the outset to determine (1) the legal implications of the use of guidance based on available criteria, and (2) the appropriate level of review.
Some agencies have found it helpful to categorize proposed guidance at initiation to determine different types and levels of review.
Component of Internal Control: Control Activities. Internal control activities help ensure that management’s directives are executed. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. They help ensure that actions are taken to address risks. The control activities should be effective and efficient in accomplishing the agency’s control objectives.
Application to Guidance Processes: The agency should maintain written policies, procedures, and processes to ensure that once the appropriate level of review has been determined, agency officials understand the process to adequately review guidance prior to issuance. Written policies and procedures should designate (1) the appropriate level of review to maintain appropriate segregation of duties, and (2) the means by which management can comment on the draft guidance and program staff can address those comments.
Component of Internal Control: Information and Communication. Information should be recorded and communicated to management and others within the entity who need it. In addition to internal communications, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders who have a significant impact on the agency achieving its goals.
Application to Guidance Processes: Internal communications: Agencies should have procedures in place to get feedback from management and other internal offices on guidance to be issued. For example, they should have a written mechanism (such as a routing slip) to document management review and associated comments and suggestions. External communications: Agencies should provide a means, via an e-mail box or contact person, for the public and interested parties to comment on the guidance, ask questions about the guidance, and facilitate two-way feedback and communication.
Component of Internal Control: Monitoring. Internal control should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations.
Application to Guidance Processes: Processes should be established to collect feedback on both the substance and clarity of guidance, to communicate this feedback to the appropriate officials, and to maintain applicable feedback to inform future guidance and revisions of guidance.
Departments and selected components:
United States Department of Agriculture (USDA)
Department of Health and Human Services (HHS): Administration for Children and Families’ Office of Child Care; Administration for Children and Families’ Office of Head Start
Department of Labor (DOL): Bureau of International Labor Affairs; Mine Safety and Health Administration; Occupational Safety and Health Administration; Office of Disability Employment Policy; Office of Federal Contract Compliance Programs; Office of Workers’ Compensation Programs; Veterans Employment and Training Service; Women’s Bureau
Regulatory guidance is an important tool agencies use to communicate timely information about regulatory and grant programs to regulated parties, grantees, and the public. Guidance provides agencies flexibility to articulate their interpretations of regulations, clarify policies, and address new issues more quickly than may be possible using rulemaking. The potential effects of guidance and risks of legal challenges underscore the need for consistent processes for the development, review, dissemination, and evaluation of guidance. This statement discusses four key questions addressed in GAO's April 2015 report on regulatory guidance: (1) what it is; (2) how agencies use it; (3) how agencies decide whether to use guidance or undertake rulemaking; and (4) steps agencies can take to ensure more effective guidance processes. To conduct that work, GAO reviewed relevant requirements, written procedures, guidance, and websites, and interviewed agency officials. What is regulatory guidance? One of the main purposes of guidance is to explain and help regulated parties comply with agencies' regulations. Even though not legally binding, guidance documents can have a significant effect on regulated entities and the public, both because of agencies' reliance on large volumes of guidance documents and because the guidance can prompt changes in the behavior of regulated parties and the general public. How do agencies use regulatory guidance? The four departments GAO reviewed—Agriculture (USDA), Education (Education), Health and Human Services (HHS), and Labor (DOL)—and the 25 components engaged in regulatory or grant making activities in these departments used guidance for multiple purposes, such as clarifying or interpreting regulations and providing grant administration information. Agencies used many terms for guidance and agency components issued varying amounts of guidance, ranging from about 10 to more than 100 guidance documents each year. Departments typically identified few of their guidance documents as “significant,” generally defined by the Office of Management and Budget (OMB) as guidance with a broad and substantial impact on regulated entities. How do agencies determine whether to issue guidance or undertake rulemaking? According to officials, agencies considered a number of factors when deciding whether to issue a regulation or guidance. However, the key criterion in making the choice was whether they intended the document to be binding; in such cases agencies proceeded with regulation. How can agencies ensure more effective guidance processes that adhere to applicable criteria? All four departments we studied identified standard practices to follow when developing guidance but could also strengthen their internal controls for issuing guidance. Agencies addressed OMB's requirements for significant guidance to varying degrees. Education and USDA had written departmental procedures for approval as required by OMB. DOL's procedures were not available to staff and required updating. HHS had no written procedures. In addition, USDA, DOL, and Education consistently applied OMB's public access and feedback requirements for significant guidance, while HHS did not. In the absence of specific government standards for non-significant guidance—the majority of issued guidance—the application of internal controls is particularly important. The 25 components GAO reviewed addressed some control standards more regularly than others. 
For example, few components had written procedures to ensure consistent application of guidance processes. However, all components could describe standard review practices and most used tools to document management approval of guidance. Not all components conferred with external nonfederal stakeholders when developing guidance. Finally, nearly half of the components GAO reviewed did not regularly evaluate whether issued guidance was effective and up to date. GAO is making no new recommendations in this statement. In the April 2015 report, GAO recommended steps to ensure consistent application of OMB requirements for significant guidance and to strengthen internal controls in guidance production processes. The agencies generally agreed with the recommendations.
You are an expert at summarizing long articles. Proceed to summarize the following text: FSA is USDA’s primary federal agency charged with administering farm programs at the local level. FSA’s fiscal year 1997 salary and expenses were $956 million. This amount provided funding for 17,269 federal and nonfederal employees at the national office, 50 state offices, and 2,440 county offices. In fiscal year 1997, more than 1.6 million farmers participated in USDA’s farm support programs and received more than $7.4 billion in benefits. Most farm support programs are implemented at the county office level under the direction of a county committee of locally elected farmers. This county committee hires a county executive director, who manages the local county office staff. As a condition of participation in any USDA farm program, farmers generally visit their FSA county office in person to identify the particular tract of cropland that is being enrolled in a program. This information ties the individual to the tract of land in order to ensure compliance with various statutes dealing with program eligibility, payment limitations, and conservation requirements. FSA employees review program requirements with the participating farmer and complete most of the paperwork that the farmer signs. Much of the paperwork associated with farm programs consists of contractual agreements between the farmer and USDA. For example, the marketing assistance loan form is a legal agreement between USDA and the farmer in which the farmer agrees to repay the loan within a specified period of time. The current county-based delivery structure for farm program benefits originated in the 1930s, when the first agricultural acts established farm support programs. At that time, more than one-fourth of Americans were involved in farming, and the lack of an extensive communications and transportation network limited the geographic boundaries that could be effectively served by a single field office. Over the past several years, the Department has made a number of changes to the delivery structure that were recommended by us and others. USDA has collocated agencies; consolidated agencies; closed smaller, less efficient county offices; and streamlined some program requirements. However, despite advancements in technology and communications, farmers generally still deal with USDA in person at their local FSA office. See appendix I for more information on the recent changes USDA has made. Farmers who participated in USDA’s commodity programs for major crops saw a reduction in their administrative requirements because of the program changes resulting from the 1996 act. The savings in time spent on paperwork are due mainly to farmers’ not having to make decisions about program participation and planting alternatives. The reduction in the number of visits results from eliminating the requirements that farmers report the number of acres they plant, except for fruits and vegetables. For farm programs other than the commodity programs, we found no substantial change in the amount of time farmers spend on paperwork and the number of visits they make to county offices. The 1996 act significantly changed USDA’s administrative requirements for the commodity programs. Farmers saw their time spent on paperwork reduced from a minimum of 1-1/2 hours to about 15 minutes annually and the number of office visits reduced from twice to once a year. 
Under the federal commodity programs in existence until 1995, USDA regulated agricultural production by controlling the crops that farmers could grow and the amount of acreage that they could plant. USDA provided annual payments to participating farmers that were based on annual calculations involving historical acreage and yields devoted to agricultural production, market prices for crops, and support prices set by the Congress and the Secretary of Agriculture. Signing up for the programs normally required that farmers visit the county office annually in order to determine the optimal planting option that they should follow for that year. More specifically, if farmers decided to participate in a commodity program, they selected from several available planting options, such as (1) idling a percentage of land, receiving benefits, and producing a commodity or (2) not planting anything and receiving 92 percent of the benefits. FSA staff completed participation worksheets and calculated benefits using different scenarios as many times as the farmers deemed necessary to determine which annual program provisions best met their needs. After the farmers selected an option, FSA staff generated the contract for their signature. Subsequently, the farmers returned to the office to report the acreage actually planted on the farm. The farmer reported the types of crops planted, the number of acres of each crop planted, and the number and location of acres that were not planted. Farmers could use FSA’s aerial photographs to identify fields planted to program crops or idled. Because incorrect reporting could lead to the loss of benefits, farmers often requested measurement services from FSA to guarantee compliance. According to county office staff and participating farmers, these sign-up and acreage reporting visits took a minimum of 1-1/2 hours altogether and two visits to the county office. The 1996 act eliminated annual sign-ups for the commodity programs and allowed eligible farmers to enter cropland previously enrolled in USDA’s commodity programs into 7-year production contracts. The new program is far less complicated than the commodity programs because once farmers chose to participate in the 7-year program, annual decisions on participation or planting alternatives were no longer necessary. Instead, farmers receive fixed annual payments that are based upon the enrolled land’s previous crop production history. Furthermore, farmers are no longer required to report the acreage planted unless they plant fruits and vegetables. In some cases, farmers do not need to visit the county office during the duration of the 7-year contract. Farmers who own and operate their cropland could make payment designations for all 7 years of the contract during their initial visit. However, many farmers who lease land will visit the county office annually because payment designations can be made only for the length of the lease. Because most farmers lease cropland for one season (1 year) at a time, they are required to visit the county office annually to designate the cropland they will farm in order for FSA to determine the payments they are eligible for. According to the farmers and county office staff we interviewed, this process generally involves one visit of about 15 minutes. The 1996 act generally did not change administrative requirements for other farm support programs, such as the Conservation Reserve Program (CRP), direct farm loans, and the Noninsured Crop Disaster Assistance Program (NAP). 
Accordingly, the amount of paperwork associated with these programs generally did not change. The number of participants in these programs is relatively small in comparison with the number of participants in the commodity programs. For example, in 1996, 1.6 million farmers signed production contracts and 64,000 farmers participated in CRP. See appendix II for more information on the administrative requirements associated with these other farm support programs. FSA could use alternative methods—such as mail and telecommunications—to enroll farmers in programs and deliver program benefits more efficiently. However, shifting to alternative delivery methods would require FSA to change its long-standing tradition of providing personal service to farmers and would shift the burden of completing many administrative requirements to farmers. USDA could use a number of alternatives that could improve the efficiency of its program delivery. These could include greater use of the U.S. mail, telecommunications, and computer technologies. Generally, using these resources should allow USDA to operate with fewer staff and offices and could save millions of dollars annually. However, absent detailed study, the extent to which delivery efficiencies would be achieved is uncertain. We found no statutory or regulatory requirements that direct farmers to visit a county office in order to meet paperwork requirements. Furthermore, while it may be desirable for farmers to visit the county office to identify cropland and ownership when initially enrolling in USDA farm programs, once enrolled, farmers could obtain the forms they need and comply with program requirements by using alternative methods, such as the mail, telephone, or computers. During the course of our review, we talked to farmers who indicated that they had, or could have, used these alternatives to conduct business with FSA. Several farmers we talked with already conducted some of their business with FSA by mail, such as enrolling acreage coming out of CRP in a new production contract. However, most of the farmers stated that because the office was conveniently located, they preferred to conduct business in person. Our discussions with county executive directors and farmers also suggest that more opportunities exist to use these alternative methods to conduct business. For example, a participant could mail acreage reports to the county office, call the office to apply for assistance, and receive benefits (if qualified) electronically without ever visiting the county office. In the case of the direct loan program, a farmer could complete the loan application on a computer and send this information electronically to FSA for approval. Institutions such as the Farm Credit System—a commercial lender that provides credit to agricultural producers and cooperatives—now accept farm loan applications over the Internet. The use of alternatives such as these could reduce the number of visits farmers make to local offices but will not completely eliminate the need for FSA staff to visit farms to inspect and verify loan collateral and carry out compliance activities. Federal agencies and private companies with much larger customer bases than FSA already use some of these alternative delivery methods to reduce the need for customers to visit an office. For example, the Internal Revenue Service has used the U.S. mail for years and now allows individuals to file tax returns electronically or by telephone and deposits refunds directly into customers’ bank accounts. 
The Social Security Administration has a free telephone service to answer questions and handle simple transactions, such as a change of address. Banks use automatic teller machines to conduct simple transactions, and individuals can apply for loans using the telephone. Similarly, FSA could make greater use of the mail and telecommunications to deliver farm programs to reduce the need for farmers to visit a county office. Using alternative delivery methods should allow USDA to operate with fewer staff and offices, which could reduce personnel expenses by millions of dollars. For every staff-year reduced, FSA could save more than $32,000 in personnel expenses. However, the actual efficiencies attained would depend largely on how USDA restructured its operations using alternative delivery methods. Changing the current delivery system, which is based on county offices, can only occur with a fundamental shift in the long-standing practices and relationships that FSA has with participating farmers. While farmers we talked to said that they could conduct business by mail, telephone, or computer, they generally prefer the personal service they receive at the county office. This is in part because many farmers rely on FSA staff to help them fill out forms for the program. FSA county offices have long provided a high level of personal service to farmers. Historically, this service has included reminding farmers 15 days prior to the ending date of a sign-up period that they had not enrolled in the current year’s commodity program. Likewise, farmers have been able to walk into a county office without an appointment to receive service. Shifting to the use of alternative delivery methods may reduce FSA’s costs of operation but would have several effects that could be considered undesirable. First, because farmers would receive less personal assistance from FSA staff, alternative delivery methods would place greater responsibility on farmers for knowing which programs are available and what the procedures are for enrolling in them. For example, if FSA consolidated its operations into fewer locations and made greater use of the mail, telephone, and computers, FSA staff could be reduced, and fewer staff would be available to meet face-to-face with farmers and complete their paperwork. The available FSA staff could still be used to carry out required functions, such as explaining program requirements, processing applications, and determining program eligibility. Second, the closure of county offices that could result from alternative delivery methods would increase USDA’s distance from many farmers. This increase would probably have the biggest impact on farmers who are members of a minority and those with small farms, who generally have fewer alternative resources available to assist them and may have the greatest need for USDA’s assistance. Minority farmers have criticized USDA recently for not providing adequate service to them. In addition, farmers, who as a group are generally older, may not be able to drive greater distances in order to obtain whatever personal service is available. Third, alternative delivery methods could result in less local control. FSA officials told us that farmers who serve on local committees are a valuable resource because they know the farmers in their county and help monitor their compliance with program requirements. In addition to these consequences, many farmers may not have access to the technology needed to conduct business with alternative methods.
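The potential savings scale linearly with the per-staff-year figure cited above. The short sketch below is illustrative only, assuming the report's lower bound of $32,000 in personnel expenses per staff-year reduced; the reduction levels shown are hypothetical examples, not GAO estimates.

```python
# Illustrative arithmetic only: scales the cited lower bound of more than
# $32,000 in personnel expenses saved per staff-year reduced.
# The staff-year reduction levels are hypothetical, not GAO estimates.
SAVINGS_PER_STAFF_YEAR = 32_000  # dollars per staff-year (lower bound)

for staff_years_reduced in (100, 500, 1_000):
    annual_savings = staff_years_reduced * SAVINGS_PER_STAFF_YEAR
    print(f"{staff_years_reduced:>5} staff-years reduced -> at least ${annual_savings:,} per year")
```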
According to a recent USDA survey, only 30 percent of farmers own a computer. In addition, because farmers are normally located in rural areas, local access to the Internet may not be available. The role of the county office and its relationship to farmers has not changed significantly since USDA began delivering programs at the local level in the 1930s. Even though improvements have been made in the transportation and communications infrastructure, and the number of farmers living in rural America has declined, USDA continues to provide the same kind of personalized service in the county office that it did 60 years ago. However, this service comes at a cost of almost $1 billion annually. While many farmers prefer this kind of service, some taxpayers may be unwilling to support its high cost over the long term. Using alternative delivery methods should allow USDA to operate with fewer staff and offices, which could reduce personnel expenses by millions of dollars. However, any changes in USDA’s field office structure need to take into account the culture that has existed for decades at the county office level. Making significant changes to this structure to reduce government expenses and improve program efficiency could increase the administrative requirements for, and thereby the costs to, farmers who participate in farm programs. Although farmers prefer the current level of personalized service, continued pressure to reduce federal expenditures requires USDA to look for ways to deliver these services more efficiently. Accordingly, we recommend that the Secretary of Agriculture direct the Administrator of the Farm Service Agency, in coordination with the Natural Resources Conservation Service and the Rural Development mission area (the Farm Service Agency’s Service Center partners), to study the costs and benefits of using alternative delivery methods to accomplish mission objectives. We provided USDA with a draft of this report for its review and comment. We met with departmental officials, including the Associate Administrator of the Farm Service Agency. USDA generally agreed with the information presented in the report. While the Department agreed with the intent of our recommendation, it stated that any study of alternative delivery methods should include the Natural Resources Conservation Service and the Rural Development mission area. We have expanded our recommendation in response to this comment. USDA also commented that while most farmers experienced reductions in their administrative requirements, some farmers participating in programs not substantially affected by the 1996 act, such as those for peanuts and tobacco, experienced no change or slightly increased administrative requirements. In addition, the Department noted that while alternative delivery methods may reduce government expenses, such changes could increase costs and administrative requirements for the farmers themselves. We provided additional language in the report to recognize these comments. USDA also provided technical and clarifying comments that were incorporated as appropriate. To determine the extent to which the changes in the farm programs resulting from the 1996 act have reduced farmers’ administrative requirements, we discussed the administrative requirements for major farm programs prior to and after the 1996 act with USDA headquarters, state, and county officials. 
We reviewed the documentation that USDA submitted to the Office of Management and Budget to justify the need for the paperwork requirements for these programs, as well as the time associated with completing the forms. In examining changes in administrative requirements directed by the 1996 act, our analysis does not consider changes in requirements after 2002, when the current law expires. In looking at alternative delivery methods, we did not analyze the implications of changes in delivery methods on USDA’s process for gathering the farm data used by other USDA agencies. We met with USDA headquarters, state, and county officials, as well as farmers, to obtain their views on whether USDA could use alternative methods to deliver farm support programs. We visited county offices located in California, Connecticut, Georgia, Illinois, Massachusetts, Missouri, Nebraska, North Carolina, and Washington State. In these offices, we met with the county executive director, the manager for agricultural credit, and farmers from the FSA county committee. In six of these states, we also met with the FSA state executive director, and in one state, we met with a member of the state FSA committee. We also called farmers across the nation who were enrolled in CRP, the direct loan program, and the commodity programs for major crops and who had participated in USDA’s customer satisfaction survey to obtain first-hand information on their personal visits and time spent in FSA county offices before and after the 1996 act. We conducted our work from September 1997 through March 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairman, House Committee on Agriculture; other interested congressional committees; the Secretary of Agriculture; and the Director, Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix III. The U.S. Department of Agriculture (USDA) has recently undertaken a wide range of efforts to improve its delivery of farm programs. These efforts are only incremental measures, however, that cut at the margins of existing operations. They do not address large-scale concerns affecting the Department’s overall design, mission, and service delivery method. More specifically, since 1994, USDA has consolidated two of its former county-based agencies—the Agricultural Stabilization and Conservation Service and the Farmers Home Administration—into the Farm Service Agency (FSA). USDA has also collocated these FSA offices with the Natural Resources Conservation Service (NRCS) and the Rural Development mission area into one-stop shopping centers for farmers. With this arrangement, farmers can get farm program information and complete necessary paperwork requirements at one location. In addition, FSA is reviewing its paperwork requirements for farm programs. The Paperwork Reduction Act of 1995 requires federal agencies, including USDA, to reduce their paperwork burden by 25 percent by 1999. USDA has established teams to review its paperwork requirements to determine how they can be streamlined. Furthermore, USDA is undertaking an effort to streamline its administrative activities at the state and national level.
In December 1997, the Secretary of Agriculture approved an administrative convergence plan that will consolidate a number of administrative activities at headquarters and in state offices. The plan establishes a Support Services Bureau in headquarters and one state administrative support unit in each state. This organization will provide administrative services, including financial management, human resources, civil rights, information technology, and management services (including procurement), to field-based agencies. USDA also has contracted for an independent study to examine FSA, NRCS, and the Rural Development mission area for opportunities to improve overall customer service and the efficiency of the delivery system. Results of this study will be incorporated into the future iterations of FSA’s strategic plan. The administrative processes and paperwork requirements for many of FSA’s major farm programs—Conservation Reserve, Nonrecourse Marketing Assistance Loans, Peanuts, Tobacco, Direct Loans, and Noninsured Crop Disaster Assistance—are described below. The Conservation Reserve Program (CRP) makes annual rental payments to farmers to retire environmentally sensitive land from production, usually for 10 years. The 1996 act made several changes to CRP to extend, simplify, and refocus the program. We found that farmers made three visits, totaling a minimum of 1 hour, to complete the paperwork requirements for CRP. Because there were two CRP signups in 1997, some farmers made more than three visits to an FSA office. On a farmer’s first visit to enroll land in CRP, the farmer reviews an FSA map and indicates the tracts of land he or she is interested in enrolling in the program. FSA staff enter the tract identification information on a CRP worksheet, and the farmer certifies that this information is correct. If the land is determined to be eligible for CRP, the farmer returns to the FSA office to indicate the rental rate he or she will bid; FSA staff enter the bid amount on a CRP contract, which the farmer signs, agreeing to the terms and conditions set forth in the appendix to the contract. FSA selects bids from across the country. The farmers whose bids are accepted return to the county office to review and sign a conservation plan prepared by NRCS. Marketing assistance loans provide farmers with interim financing, using the crop as collateral. These loans allow farmers to hold their crops for sale at a later date, when prices may be higher than they would have been at harvest. Farmers make two to three visits and spend a minimum of 1 hour in total to obtain and repay a marketing assistance loan. On the first visit to obtain a nonrecourse marketing assistance loan, the farmer files an acreage report, unless one has already been filed. Depending on the crop, the farmer brings warehouse receipts or bin measurements to the FSA office and signs a Commodity Credit Corporation Note and Security Agreement, which states that the farmer agrees to pay back the loan or forfeit the collateral, which is the crop. To satisfy the loan, the farmer can either sell the commodity and bring the check for FSA’s signature to pay off the loan or forfeit the loan and arrange for delivery of the commodity to the government. The peanut program establishes annual poundage quotas to limit production as a way of supporting crop prices. The program requires FSA to keep a record of the acreage planted and the sales of this commodity to ensure that farmers stay within their quotas.
Farmers generally make about five to six office visits and spend a minimum of 1 hour in total to complete paperwork and obtain marketing cards. In 1997, 25,000 farmers participated in the peanut program. On the first visit to participate in the peanut program, the farmer may request FSA’s measurement services to accurately determine his or her peanut acreage. After planting, the farmer visits the FSA office to certify the acreage actually planted. The farmer then completes a Report of Seed Peanuts, which FSA uses to determine if the amount planted is reasonable for the acreage reported. On the basis of the acreage planted, FSA allocates a temporary seed quota to cover the producer’s purchase of seed. After harvest, the farmer may visit the FSA office to obtain a Peanut Marketing Card. After selling the peanuts, the farmer must bring his or her Peanut Marketing Card to the FSA office and review a Poundage Sales summary, which reflects the sales of the farmer’s peanuts in the marketplace. In addition, if the farmer has excess quota or needs additional quota, he or she will need to make one or more additional visits to the FSA office to complete a Temporary Lease and Transfer of Peanut Quota, which requires witnessed signatures. The tobacco program establishes annual marketing quotas to limit production as a way of supporting crop prices. The program requires FSA to keep a record of the acreage planted (except Burley tobacco) and the sale of this commodity to ensure that farmers stay within their quotas. Farmers generally make about five to six office visits and spend a minimum of 1 hour in total to complete paperwork and obtain marketing cards. In 1997, 330,000 farmers participated in the tobacco program. Farmers may visit the FSA county office to request measurement services to accurately determine their tobacco acreage. After planting, the farmer visits the FSA office to certify the acreage actually planted in tobacco, except for Burley tobacco. After harvest, the farmer visits the FSA office to obtain a Tobacco Marketing Card and sign the Certification of Eligibility to Receive Price Support on Tobacco. If the farmer has excess quota or needs additional quota, he or she will need to make one or more additional visits to the FSA office to complete a Temporary Lease and Transfer of Tobacco Quota, which requires witnessed signatures. At the end of the selling season, the farmer must return, either in person or by mail, the marketing cards and complete a Report of Unmarketed Tobacco. The direct loan program provides operating and ownership loans to farmers who cannot obtain credit elsewhere. There are statutory limitations on the size of these loans. Farmers visit their county office three to four times and spend a minimum of 3 hours in total completing paperwork to obtain a loan. To obtain a direct loan, a farmer generally visits the FSA county office to obtain a Farm Programs Application Package, which includes all of the forms a farmer must complete. A farmer may complete some of these forms during this visit or may gather documentation and complete some of the paperwork before returning to the county office. Credit managers indicated that they usually scheduled a visit to review the application. A complete application package generally includes a Request for Direct Loan Assistance; a Farm and Home Plan showing projected production, income, and expenses; financial records for the past 5 years; and various other documents that describe the applicant’s operations. 
A farmer makes additional visits to provide more information and, if the loan is approved, to sign the loan agreement. After the loan is approved, a farmer may be required to visit the county office to get signatures on the checks that the farmer receives for selling commodities or to pay back the loan. The Noninsured Crop Disaster Assistance Program (NAP) protects the growers of many crops for which federal crop insurance is not available. FSA makes NAP payments to eligible farmers when an area’s expected yield is less than 65 percent of the normal yield. Farmers who participate in NAP make at least one visit for a minimum of 15 minutes to the county office each year to file an acreage report. If they suffer a disaster, they will make two additional visits and spend a minimum of 30 minutes for these two visits to apply for assistance. To be eligible for NAP, a farmer must file an acreage report annually with the local FSA office. If a farmer suffers a disaster, that farmer can visit the FSA office to complete a Request for Acreage/Disaster Credit. After the area has been declared a disaster, the farmer signs the NAP Certification of Income Eligibility; provides production records, if needed; and signs a Crop Insurance Acreage Report and a Production Yield Report.
Major contributors to this report (appendix III): Ronald E. Maxon, Jr., Assistant Director; Fred Light; Renee McGhee-Lenart; Paul Pansini; Carol Herrnstadt Shulman; and Janice Turner.
Pursuant to a congressional request, GAO examined the administrative requirements placed on farmers participating in the revamped farm programs, as well as the Department of Agriculture's (USDA) efficiency in delivering program services to farmers, focusing on the: (1) extent to which the changes to the farm programs resulting from the Federal Agriculture Improvement and Reform Act of 1996 have reduced farmers' administrative requirements; and (2) possibility of having USDA use alternative delivery methods to more efficiently administer farm programs. GAO noted that: (1) farmers are now generally spending less time on administrative requirements than they did before the 1996 act; (2) the number of required visits to county offices has declined, as has the amount of time spent completing paperwork for the farm programs; (3) the Farm Service Agency (FSA) could transact more business with farmers through the mail and by telephone and computer, thus increasing the efficiency of its operations; (4) using alternative delivery methods should allow USDA to operate with fewer staff and offices, which could reduce expenses by millions of dollars; (5) while GAO found no statutory or regulatory requirements that direct farmers to visit county offices, changing delivery methods to rely more on such approaches will require fundamental changes in the FSA's long-standing practices and relationships with farmers; and (6) in particular, such methods would reduce farmers' personal contact with county office staff and place greater administrative responsibility on farmers to ensure that required paperwork is completed and submitted in a timely fashion.
You are an expert at summarizing long articles. Proceed to summarize the following text: DLA is DOD’s logistics manager for all departmental consumable items and some repair parts. Its primary business function is materiel management: providing supply support to sustain military operations and readiness. In addition, DLA performs five other supply-related business functions: distributing materiel from DLA and service-owned inventories, purchasing fuels for DOD and the U.S. government, storing strategic materiel, marketing surplus DOD materiel for reuse and disposal, and providing numerous information services, such as item cataloging, for DOD and the U.S. government, as well as selected foreign governments. These six business functions are managed by field commands that report to and support the agency’s central command authority. In 2000, DLA refocused its logistics mission from that of a supplier of materiel to a manager of supply chain relationships. To support this transition, the agency developed a strategic plan (known as DLA 21) to reengineer and modernize its operations. Among the goals of DLA 21 are to optimize inventories, improve efficiency, increase effectiveness through organizational redesign, reduce inventories, and modernize business systems. DLA relies on over 650 systems to support warfighters by allowing access to global inventories. Whether it is ensuring that there is enough fuel to service an aircraft fleet, providing sufficient medical supplies to protect and treat military personnel, or supplying ample food rations to our soldiers on the frontlines, information technology plays a key role in ensuring that Defense Department agencies are prepared for their missions. Because of its heavy reliance on IT to accomplish its mission, DLA invests extensively in this area. For fiscal year 2002, DLA’s IT budget is about $654 million. Our recent reviews of DLA’s IT management have identified weaknesses in such important areas as enterprise architecture management, incremental investment management, and software acquisition management. In June 2001, we reported that DLA did not have an enterprise architecture to guide the agency’s investment in its Business Systems Modernization (BSM) project—the agency’s largest IT project. The use of an enterprise architecture, which describes an organization’s mode of operation in useful models, diagrams, and narrative, is required by the OMB guidance that implements the Clinger-Cohen Act of 1996 and is a commercial best practice. Such a “blueprint” can help clarify and optimize the dependencies and relationships among an agency’s business operations and the IT infrastructure and applications supporting them. An effective architecture describes both the environment as it is and the target environment that an organization is aiming for (as well as a plan for the transition from one to the other). We concluded that without this architecture, DLA will be challenged in its efforts to successfully acquire and implement BSM. Further, we reported that DLA was not managing its investment in BSM in an incremental manner, as required by the Clinger-Cohen Act of 1996 and OMB guidance and in accordance with best commercial practices. An incremental approach to investment helps to minimize the risk associated with such large-scale projects as BSM. 
Accordingly, we recommended that DLA make the development, implementation, and maintenance of an enterprise architecture an agency priority and take steps to incrementally justify and validate its investment in BSM. According to DLA officials, the agency is addressing these issues. In January 2002, we reported a wide disparity in the rigor and discipline of software acquisition processes between two DLA systems. Such inconsistency in processes for acquiring software (the most costly and complex component of systems) can lead to the acquisition of systems that do not meet the information needs of management and staff, do not provide support for necessary programs and operations, and cost more and take longer than expected to complete. We also reported that DLA did not have a software process-improvement program in place to effectively strengthen its corporate software acquisition processes, having eliminated the program in 1998. Without a management-supported software process-improvement program, it is unlikely that DLA can effectively improve its institutional software acquisition capabilities, which in turn means that the agency’s software projects will be at risk of not delivering promised capabilities on time and within budget. Accordingly, we recommended that DLA institute a software process-improvement program and correct the software acquisition process weaknesses that we identified. According to DLA officials, the agency is addressing each of these issues. In May 2000, we issued the Information Technology Investment Management (ITIM) maturity framework, which identifies critical processes for successful IT investment and organizes these processes into an assessment framework comprising five stages of maturity. This framework supports the fundamental requirements of the Clinger-Cohen Act of 1996, which requires IT investment and capital planning processes and performance measurement. Additionally, ITIM can provide a useful roadmap for agencies when they are implementing specific, fundamental IT capital planning and investment management practices. The federal Chief Information Officers Council has favorably reviewed the framework, and it is also being used by a number of executive agencies and organizations for designing related policies and procedures and self-led or contractor-based assessments. ITIM establishes a hierarchical set of five different maturity stages. Each stage builds upon the lower stages and represents increased capabilities toward achieving both stable and effective (and thus mature) IT investment management processes. Except for the first stage—which largely reflects ad hoc, undefined, and undisciplined decision and oversight processes—each maturity stage is composed of critical processes essential to satisfy the requirements of that stage. These critical processes are defined by core elements that include organizational commitment (for example, policies and procedures), prerequisites (for example, resource allocation), and activities (for example, implementing procedures). Each core element is composed of a number of key practices. Key practices are the specific tasks and conditions that must be in place for an organization to effectively implement the necessary critical processes. Figure 1 shows the five ITIM stages and a brief description of each stage. Using ITIM, we assessed the extent to which DLA satisfied the five critical processes in stage 2 of the framework. 
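The hierarchy described above (maturity stages made up of critical processes, critical processes defined by core elements, and core elements composed of key practices) can be pictured as a nested data model. The Python sketch below is illustrative only and is not part of the ITIM framework or GAO's assessment methodology: the class names, the example critical process, and the roll-up rule that a critical process counts as satisfied only when every one of its key practices is executed are assumptions added for clarity.

```python
# Illustrative sketch of the ITIM hierarchy described above.
# Names and the roll-up rule are assumptions for illustration, not GAO's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyPractice:
    name: str
    executed: bool  # True only if evidence of performance met the criteria

@dataclass
class CoreElement:
    kind: str  # e.g., "organizational commitment", "prerequisites", "activities"
    key_practices: List[KeyPractice] = field(default_factory=list)

@dataclass
class CriticalProcess:
    name: str
    core_elements: List[CoreElement] = field(default_factory=list)

    def satisfied(self) -> bool:
        # Assumed roll-up: every key practice in every core element must be executed.
        return all(kp.executed
                   for ce in self.core_elements
                   for kp in ce.key_practices)

@dataclass
class MaturityStage:
    number: int
    critical_processes: List[CriticalProcess] = field(default_factory=list)

    def satisfied(self) -> bool:
        return all(cp.satisfied() for cp in self.critical_processes)

# Example: a stage 2 assessment with one critical process partially executed.
stage2 = MaturityStage(
    number=2,
    critical_processes=[
        CriticalProcess(
            name="IT investment board operation",  # hypothetical process name
            core_elements=[
                CoreElement("organizational commitment",
                            [KeyPractice("Board charter documented", True)]),
                CoreElement("activities",
                            [KeyPractice("Board meets and oversees investments", False)]),
            ],
        )
    ],
)
print(stage2.satisfied())  # False: at least one key practice was not executed
```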
Based on DLA’s acknowledgment that it had not executed any of the key practices in stage 3, we did not independently assess the agency’s capabilities in this stage or stages 4 and 5. To determine whether DLA had implemented the stage 2 critical processes, we compared relevant DLA policies, procedures, guidance, and documentation associated with investment management activities to the key practices and critical processes in ITIM. We rated the key practices as “executed” based on whether the agency demonstrated (by providing evidence of performance) that it had met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of a practice during the review, or when we determined that there were significant weaknesses in DLA’s execution of the key practice. As part of our analysis, we selected four IT projects as case studies to verify application of the critical processes and practices. We selected projects that (1) supported different DLA business areas (such as materiel management), (2) were in different lifecycle phases (for example, requirements definition, design, operations and maintenance), (3) represented different levels of risk (such as low or medium) as designated by the agency, and (4) included at least one investment that required funding approval by a DOD authority outside of DLA (for example, the Office of the Secretary of Defense (OSD)). The four projects are the following: Business Systems Modernization: This system, which supports DLA’s materiel management business area, is in the concept demonstration phase of development. DLA reported that it spent about $136 million on this system in fiscal year 2001, and it has budgeted about $133 million for fiscal year 2002. BSM is intended to modernize DLA’s materiel management business function, replacing two of its standard systems (the Standard Automated Materiel Management System and the Defense Integrated Subsistence Management System). The project is also intended to enable the agency to reengineer its logistics practices to reflect best commercial business practices. For example, in support of DLA’s goal of reducing its role as a provider and manager of materiel and increasing its role as a manager of supply chain relationships, BSM is to help link customers with appropriate suppliers and to incorporate commercial business practices regarding physical distribution and financial management. The agency has classified this project as high risk, and OSD has funding approval authority for this project. Hazardous Materials Information System (HMIS): This system, which supports DLA’s logistics operations function, was implemented in 1978. In fiscal year 2001, DLA reported that it spent about $1 million on this system and budgeted about $2.4 million for fiscal year 2002. In 1999 DLA began a redesign effort to transform HMIS into a Web-based system with a direct interface to the manufacturers and suppliers of hazardous material. The project is in the development stage. It contains data on the chemical composition of materials classified as “hazardous” for the purposes of usage, storage, and transportation. The system is used by Emergency Response Teams whenever a spill or accident occurs involving hazardous materials. The agency classified this project as low risk, and funding approval occurs within DLA. 
The Defense Reutilization and Marketing Automated Information System (DAISY): This system, which supports DLA’s materiel reuse and disposal mission, is in the operations and maintenance lifecycle phase. The agency reported that it spent approximately $4.4 million on DAISY in fiscal year 2001, and it has budgeted about $7 million for fiscal year 2002. This system is a repository for transactions involving the reutilization, transfer, donation, sale, or ultimate disposal of excess personal property from DOD, federal, and state agencies. The excess property includes spare and repair parts, scrap and recyclable material, precious metals recovery, hazardous material, and hazardous waste disposal. Operated by the Defense Reutilization and Marketing Service, the system is used at 190 locations worldwide. The agency classified this project as low risk, and funding approval occurs within DLA. Standard Automated Materiel Management System (SAMMS): This system, which supports DLA’s materiel management business area, is 30 years old and approaching the end of its useful life. The agency reports that investment in SAMMS (budgeted at approximately $19 million for fiscal year 2002) is directed toward keeping the system operating until its replacement, BSM, becomes fully operational (scheduled for fiscal year 2005). This system provides the Inventory Control Points with information regarding stock levels, as well as with the capabilities required for (1) acquisition and management of wholesale consumable items, (2) direct support for processing requisitions, (3) forecasting of requirements, (4) generation of purchase requests, (5) maintenance of technical data, (6) financial management, (7) identification of items, and (8) asset visibility. The agency has classified the maintenance of SAMMS as a low-risk effort, and funding approval occurs within DLA. For these projects, we reviewed project management documentation, such as mission needs statements, project plans, and status reports. We also analyzed charters and meeting minutes for DLA oversight boards, DLA’s draft Automated Information System Emerging Program Life-Cycle Management (LCM) Review and Milestone Approval Directive and Portfolio Management and Oversight Directives, and DOD’s 5000 series guidance on systems acquisition. In addition, we reviewed documentation related to the agency’s self-assessment of its IT investment operations. To supplement our document reviews, we interviewed senior DLA officials, including the vice director (who sits on the Corporate Board, DLA’s highest level investment decisionmaking body), the chief information officer (CIO), the chief financial officer, and oversight board members. We also interviewed the program managers of our four case study projects, as well as officials responsible for managing the IT investment process and other staff within Information Operations. To determine what actions DLA has taken to improve its IT investment management processes, we interviewed the CIO and officials of the Policy, Plans, and Assessments and the program executive officer (PEO) operations groups within the Information Operations Directorate. These groups are primarily responsible for implementing investment management process improvements. We also reviewed a draft list of IT investment management improvement tasks. We conducted our work at DLA headquarters in Fort Belvoir, Virginia, from June 2001 through January 2002, in accordance with generally accepted government auditing standards.
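To make the assessment mechanics concrete, the following minimal sketch (Python, not part of GAO's or DLA's actual tooling) shows how ratings of individual key practices roll up into the critical-process summaries reported in tables 2 through 7. The five process names come from the report; the practice counts are illustrative placeholders rather than the actual table values.

```python
# Minimal sketch of the ITIM stage 2 roll-up: each critical process is defined by
# key practices rated "executed" or "not executed" using the criteria described
# above. Counts below are illustrative placeholders, not the report's table values.
from dataclasses import dataclass

@dataclass
class CriticalProcess:
    name: str
    executed: int  # key practices rated "executed"
    total: int     # key practices ITIM defines for this process

stage2 = [
    CriticalProcess("IT investment board operations", 4, 6),
    CriticalProcess("IT project inventory", 4, 5),
    CriticalProcess("Project-level oversight", 6, 7),
    CriticalProcess("Identifying business needs", 3, 4),
    CriticalProcess("Proposal selection", 3, 5),
]

for process in stage2:
    print(f"{process.name}: {process.executed} of {process.total} key practices executed")

executed = sum(p.executed for p in stage2)
total = sum(p.total for p in stage2)
print(f"Stage 2 overall: {executed} of {total} key practices ({executed / total:.0%})")
```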
In order to have the capabilities to effectively manage IT investments, an agency should (1) have basic, project-level control and selection practices in place and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency. DLA has a majority of the project-level practices in place. However, it is missing several crucial practices, and it is not performing portfolio-based investment management. According to the CIO, the evolving state of its investment management capabilities is the result of agency leadership’s recently viewing IT investment management as an area of management focus and priority. Without having crucial processes and related practices in place, DLA lacks essential management controls over its sizable IT investments. At ITIM stage 2 maturity, an organization has attained repeatable, successful IT project-level investment control processes and basic selection processes. Through these processes, the organization can identify expectation gaps early and take appropriate steps to address them. According to ITIM, critical processes at stage 2 include (1) defining investment board operations, (2) collecting information about existing investments, (3) developing project-level investment control processes, (4) identifying the business needs for each IT project, and (5) developing a basic process for selecting new IT proposals. Table 1 discusses the purpose for each of the stage 2 critical processes. To its credit, DLA has put in place about 75 percent of the key practices associated with stage 2 critical processes. For example, DLA has oversight boards to perform investment management functions, and it has basic project-level control processes to help ensure that IT projects are meeting cost and schedule expectations. However, DLA has not executed several crucial stage 2 investment practices. For example, the business needs for IT projects are not always clearly identified and defined, basic investment selection processes are still being developed, and policies and procedures for project oversight are not documented. Table 2 summarizes the status of DLA’s stage 2 critical processes, showing how many associated key practices the agency has executed. DLA’s actions in each of the critical processes are discussed in the sections that follow. To help ensure executive management accountability for IT capital planning and investment decisions, an organization should establish a governing board or boards responsible for selecting, controlling, and evaluating IT investments. According to ITIM, effective IT investment board operations require, among other things, that (1) board membership have both IT and business knowledge, (2) board members understand the investment board’s policies and procedures and exhibit core competencies in using the agency’s IT investment policies and procedures, (3) the organization’s executives and line managers support and carry out board decisions, (4) the organization create organization-specific process guidance that includes policies and procedures to direct the board’s operations, and (5) the investment board operate according to written policies and procedures. (The full list of key practices is provided in table 3.) DLA has established several oversight boards that perform IT investment management functions. 
These boards include the following: The DLA Investment Council, which is intended to review, evaluate, and approve new IT and non-IT investments between $100,000 and $1,000,000. The Program Executive Officer Review Board, which is intended to review and approve the implementation of IT investments that are budgeted for over $25 million in all or over $5 million in any one year. The Corporate Board, which is intended to review, evaluate, and approve all IT and non-IT investments over $1 million. DLA is executing four of the six key practices needed for these boards to operate effectively. For example, the membership of these boards integrates both IT and business knowledge. In addition, board members informed us of their understanding of their board’s informal practices. Further, according to IT investment officials, project managers, and agency documentation, the boards have a process for ensuring that their decisions are supported and carried out by organization executives and line managers. This process involves documenting board decisions in meeting minutes, assigning staff to carry out the decisions, and tracking the actions taken on a regular basis until the issues are addressed. Nonetheless, DLA is missing the key ingredient associated with two of the board oversight practices that are needed to operate effectively—organization-specific guidance. This guidance, which serves as official operations documentation, should (1) clearly define the roles of key people within its IT investment process, (2) delineate the significant events and decision points within the processes, (3) identify the external and environmental factors that will influence the processes (that is, legal constraints, the behavior of key subordinate agencies and military customers, and the practices of commercial logistics that DLA is trying to emulate as part of DLA 21); and (4) explain how IT investment-related processes will be coordinated with other organizational plans and processes. DLA does not have guidance that sufficiently addresses these issues. Policies and procedures governing operations are in draft for one board and have not been developed for the two other boards. Without this guidance governing the operations of the investment boards, the agency is at risk of performing key investment decisionmaking activities inconsistently. Such guidance would also provide a degree of transparency that is helpful in both communicating and demonstrating how these decisions are made. Table 3 summarizes the ratings for each key practice and the specific findings supporting the ratings. An IT project inventory provides information to investment decisionmakers to help evaluate the impacts and opportunities created by proposed or continuing investments. This inventory (which can take many forms) should, at a minimum, identify the organization’s IT projects (including new and existing systems) and a defined set of relevant investment management information about them (for example, purpose, owner, lifecycle stage, budget cost, physical location, and interfaces with other systems). Information from the IT project inventory can, for example, help identify systems across the organization that provide similar functions and help avoid the commitment of additional funds for redundant systems and processes.
It can also help determine more precise development and enhancement costs by informing decisionmakers and other managers of interdependencies among systems and how potential changes in one system can affect the performance of other systems. According to ITIM, effectively managing an IT project inventory requires, among other things, (1) identifying IT projects, collecting relevant information about them, and capturing this information in a repository, (2) assigning responsibility for managing the IT project inventory process to ensure that the inventory meets the needs of the investment management process, (3) developing written policies and procedures for maintaining the IT project inventory, (4) making information from the inventory available to staff and managers throughout the organization so they can use it, for example, to build business cases and to support project selection and control activities, and (5) maintaining the IT project inventory and its information records to contribute to future investment selections and assessments. (The full list of key practices is provided in table 4.) DLA has executed many of the key practices in this critical process. For example, according to DLA’s CIO, IT projects are identified and specific information about them is entered into a central repository called the DLA Profile System (DPS). DPS includes, among other things, project descriptions, key contact information, lifecycle stage, and system interfaces. In addition, the CIO is responsible for managing the IT project identification process to ensure that DPS meets the needs of the investment management process. However, DLA has not defined written policies and procedures for how and when users should add to or update information in the DPS. In addition, DLA is not maintaining DPS records, which would be useful during future project selections and investment evaluations, and for documenting the evolution of a project’s development. Without appropriate policies and procedures in place to describe the objectives and information requirements of the inventory, DPS is not being maximized as an effective tool to assist in the fundamental analysis essential to effective decisionmaking. Table 4 summarizes the ratings for each key practice and the specific findings supporting the ratings. Investment review boards should effectively oversee IT projects throughout all lifecycle phases (concept, design, development, testing, implementation, and operations/maintenance). At stage 2 maturity, investment review boards should review each project’s progress toward predefined cost and schedule expectations, using established criteria and performance measures, and should take corrective actions to address cost and milestone variances. According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for project management, (2) developing and maintaining an approved management plan for each IT project, (3) having written policies and procedures for oversight of IT projects, (4) making up-to-date cost and schedule data for each project available to the oversight boards, (5) reviewing each project’s performance by regularly comparing actual cost and schedule data to expectations, (6) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved, and (7) using information from the IT project inventory. (The complete list of key practices is provided in table 5.)
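As a rough illustration of the oversight activity just described, the sketch below compares reported cost and schedule data with expectations and flags variances that would call for documented corrective action. The projects, figures, and 10 percent threshold are hypothetical and do not reflect DLA data or any actual board procedure.

```python
# Hypothetical project-level cost/schedule variance check; the projects, figures,
# and 10 percent threshold are invented for illustration only.
def flag_variances(projects, threshold=0.10):
    """Return projects whose cost or schedule variance exceeds the threshold."""
    flagged = []
    for p in projects:
        cost_var = (p["actual_cost"] - p["planned_cost"]) / p["planned_cost"]
        sched_var = (p["actual_months"] - p["planned_months"]) / p["planned_months"]
        if cost_var > threshold or sched_var > threshold:
            flagged.append((p["name"], cost_var, sched_var))
    return flagged

portfolio = [
    {"name": "Project A", "planned_cost": 10.0, "actual_cost": 11.5,
     "planned_months": 12, "actual_months": 12},
    {"name": "Project B", "planned_cost": 4.0, "actual_cost": 4.1,
     "planned_months": 18, "actual_months": 17},
]

for name, cost_var, sched_var in flag_variances(portfolio):
    # Flagged projects would get documented, agreed-to, and tracked corrective actions.
    print(f"{name}: cost variance {cost_var:+.0%}, schedule variance {sched_var:+.0%}")
```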
DLA has executed most of the key practices in this area. In particular, DLA relies on the guidance in the Department of Defense 5000 series directives for project management and draft guidance in an Automated Information System (AIS) Emerging Program Life-Cycle Management (LCM) Review and Milestone Approval Directive for specific IT project management. In addition, for each of the four projects we reviewed, a project management plan had been approved, and cost and schedule controls were addressed during project review meetings. Further, based on our review of project documentation and in discussion with project managers, up-to-date cost and schedule project data were provided to the PEO Review Board. This board oversees project performance regularly by comparing actual cost and schedule data to expectations and has a process for ensuring that, for underperforming projects, corrective actions are documented, agreed to, and tracked. Notwithstanding these strengths, DLA has some weaknesses in project oversight. Specifically, although the Corporate Board and the Investment Council have written charters, there are no written policies or procedures that define their role in collectively overseeing IT projects. Without these policies and procedures, project oversight may be inconsistently applied, leading to the risk that performance problems, such as cost overruns and schedule slippages, may not be identified and resolved in a timely manner. In addition, according to representatives from the oversight boards, they do not use information from the IT project inventory to oversee projects because they are more comfortable using more traditional methods of obtaining and using information (that is, informally talking with subject matter experts and relying on experience). The inventory is of value only to the extent that decisionmakers use it. As discussed earlier, while the inventory need not be the only source of information, it should nevertheless serve as a reliable and consistent tool for understanding project and overall portfolio decisions. Table 5 summarizes the ratings for each key practice and the specific findings supporting the ratings. Defining business needs for each IT project helps ensure that projects support the organization’s mission goals and meet users’ needs. This critical process creates the link between the organization’s business objectives and its IT management strategy. According to ITIM, effectively identifying business needs requires, among other things, (1) defining the organization’s business needs or stated mission goals, (2) identifying users for each project who will participate in the project’s development and implementation, (3) training IT staff adequately in identifying business needs, and (4) defining business needs for each project. (The complete list of key practices is provided in table 6.) DLA has executed all but one of the key practices associated with effectively defining business needs for IT projects. For example, DLA’s mission goals are described in DLA’s strategic plan. In addition, according to IT investment management officials, the IT staff is adequately trained in identifying business needs because they generally have prior functional unit experience. In addition, according to DLA directives, IT projects are assigned an Integrated Process Team (IPT) to guide and direct the project through the development lifecycle. The IPTs are composed of IT and functional staff.
Moreover, DOD and DLA directives require that business requirements and system users be identified and that users participate in the lifecycle management of the project. According to an IT investment official, each IT project has a users’ group that meets throughout the lifecycle to discuss problems and potential changes related to the system. We verified that this was the case for the four projects we reviewed. While the business needs for three of the four projects we reviewed were clearly identified and defined, DLA has reported that this has not been consistently done for all IT projects. According to IT investment management officials, this inconsistency arose because policies and procedures for developing business needs were not always followed or required. DLA officials have stated that they are developing new guidance to address this problem. However, until this guidance is implemented and enforced, DLA cannot effectively demonstrate that priority mission and business improvement needs are forming the basis for all its IT investment decisions. Table 6 summarizes the ratings for each key practice and the specific findings supporting the ratings. Selecting new IT proposals requires an established and structured process to ensure informed decisionmaking and infuse management accountability. According to ITIM, this critical process requires, among other things, (1) making funding decisions for new IT proposals according to an established process, (2) providing adequate resources for proposal selection activities, (3) using an established proposal selection process, (4) analyzing and ranking new IT proposals according to established selection criteria, including cost and schedule criteria, and (5) designating an official to manage the proposal selection process. (The complete list of key practices is provided in table 7.) DLA has executed some of the key practices for investment proposal selection. For example, DLA executives make funding decisions for IT investments using DOD’s Program Objective Memorandum (POM) process, which is part of DOD’s annual budgeting process. Through this process, proposals for new projects or enhancements to ongoing projects are evaluated by DLA’s IT and financial groups and submitted to OSD through DLA’s Corporate Board with recommendations for funding approval. In addition, according to the CIO, adequate resources have been provided to carry out activities related to the POM process. Nonetheless, DLA has yet to execute some of the critical practices related to this process area. Specifically, DLA acknowledges that the agency is not analyzing and prioritizing new IT proposals according to established selection criteria. Instead, the Corporate Board uses the expertise from the IT organization and its own judgment to analyze and prioritize projects. To its credit, DLA recognizes that it cannot continue to rely solely on the POM process to make sound IT investment selection decisions. Therefore, the agency has been working to establish an IT selection process over the past two budget cycles that is more investment-focused and includes increased involvement from IT Operations staff, necessary information, and established selection criteria. Until DLA implements an effective IT investment selection process that is well established and understood throughout the agency, executives cannot be adequately assured that they are consistently and objectively selecting proposals that best meet the needs and priorities of the agency. 
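For contrast with the judgment-based approach described above, the following sketch shows one way established selection criteria might be applied to analyze and rank new proposals. The criteria, weights, and scores are hypothetical and are not drawn from DLA's POM process or any DLA directive.

```python
# Hypothetical criteria-based ranking of new IT proposals; criteria, weights,
# and scores are invented for illustration and imply no actual DLA criteria.
WEIGHTS = {"mission_alignment": 0.4, "expected_benefit": 0.3,
           "life_cycle_cost": 0.2, "schedule_risk": 0.1}

proposals = {
    # scores on a 1 (worst) to 5 (best) scale against each criterion
    "Proposal X": {"mission_alignment": 5, "expected_benefit": 4,
                   "life_cycle_cost": 2, "schedule_risk": 3},
    "Proposal Y": {"mission_alignment": 3, "expected_benefit": 3,
                   "life_cycle_cost": 5, "schedule_risk": 4},
}

def weighted_score(scores, weights=WEIGHTS):
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

# Rank proposals by weighted score, highest first, as a selection board might.
for name, scores in sorted(proposals.items(), key=lambda item: weighted_score(item[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```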
Table 7 summarizes the ratings for each key practice and the specific findings supporting the ratings. An IT investment portfolio is an integrated, enterprisewide collection of investments that are assessed and managed collectively based on common criteria. Managing investments within the context of such a portfolio is a conscious, continuous, and proactive approach to expending limited resources on an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an enterprisewide perspective enables an organization to consider its investments comprehensively so that the collective investments optimally address its mission, strategic goals, and objectives. This portfolio approach also allows an organization to determine priorities and make decisions about which projects to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. According to ITIM, stage 3 maturity includes (1) defining portfolio selection criteria, (2) engaging in project-level investment analysis, (3) developing a complete portfolio based on the investment analysis, (4) maintaining oversight over the investment performance of the portfolio, and (5) aligning the authority of IT investment boards. Table 8 describes the purposes for the critical processes in stage 3. According to DLA officials, they are currently focusing on implementing stage 2 processes and have not implemented any of the critical processes in stage 3. Until the agency fully implements both stage 2 and 3 processes, it cannot consider investments in a comprehensive manner and determine whether it has the appropriate mix of IT investments to best meet its mission needs and priorities. DLA recognizes the need to improve its IT investment processes, but it has not yet developed a plan for systematically correcting weaknesses. To properly focus and target IT investment process improvements, an organization should fully identify and assess current process strengths and weaknesses (that is, create an investment management capability baseline) as the first step in developing and implementing an improvement plan. As we have previously reported, this plan should, at a minimum, (1) specify measurable goals, objectives, milestones, and needed resources, and (2) clearly assign responsibility and accountability for accomplishing well-defined tasks. The plan should also be documented and approved by agency leadership. In implementing the plan, it is important that DLA measure and report progress against planned commitments, and that appropriate corrective action be taken to address deviations. DLA does not have such a plan. In March 2001, it attempted to baseline agency IT operations by reviewing its project-level investment management practices using ITIM. This effort identified practice strengths and weaknesses, but DLA considered the assessment to be preliminary (to be followed by a more comprehensive assessment at an unspecified later date) and limited in scope. DLA used the assessment results to establish broad milestones for strengthening its investment management process. The agency did not, however, develop a complete process improvement plan. 
For example, it did not (1) specify required resources to accomplish the various tasks, (2) clearly assign responsibility and accountability for accomplishing the tasks, (3) obtain support from senior level officials, and (4) establish performance measures to evaluate the effectiveness of the completed tasks. At the same time, the agency has separately begun other initiatives to improve its investment management processes, but these initiatives are not aligned with the established milestones or with each other. The DLA CIO characterizes the agency’s approach to its various process improvement efforts as a necessary progression that includes some inevitable “trial and error” as it moves toward a complete process improvement plan. Without such a plan that allows the agency to systematically prioritize, sequence, and evaluate improvement efforts, DLA jeopardizes its ability to establish a mature investment process that includes selection and control capabilities that result in greater certainty about future IT investment outcomes. Until recently, IT investment management has not been an area of DLA management attention and focus. As a result, DLA currently finds itself without some of the capabilities that it needs to ensure that its mix of IT investments best meets the agency’s mission and business priorities. To its credit, DLA now recognizes the need to strengthen its IT investment management and has taken positive steps to begin doing so. However, several critical IT investment management capabilities need to be enhanced before DLA can have reasonable assurance that it is maximizing the value of its IT investment dollar and minimizing the associated risks. Moreover, DLA does not yet have a process improvement plan that is endorsed and supported by agency leadership. The absence of such a plan limits DLA’s prospects for introducing the management capabilities necessary for making prudent decisions that maximize the benefits and minimize the risks of its IT investment. To strengthen DLA’s investment management capability and address the weaknesses discussed in this report, we recommend that the secretary of defense direct the DLA director to designate the development and implementation of effective IT investment management processes as an agencywide priority. Further, we recommend that the secretary of defense have the DLA director do the following: Develop a plan, within 6 months, for implementing IT investment management process improvements that is based on GAO’s ITIM stage 2 and 3 critical processes. Ensure that the plan specifies measurable goals and time frames, defines a management structure for directing and controlling the improvements, and establishes review milestones. Ensure that the plan focuses first on correcting the weakness in the ITIM stage 2 critical processes, because these processes collectively provide the foundation for building a mature IT investment management process. Specifically: Develop and issue guidance covering the scope and operations of DLA’s investment review boards. 
Such guidance should include, at a minimum, specific definitions of the roles and responsibilities within the IT investment process; an outline of the significant events and decision points within the processes; an identification of the external and environmental factors that will influence the processes (for example, legal constraints, the behavior of key suppliers or customers, or industry norms), and the manner in which IT investment-related processes will be coordinated with other organization plans and processes. Develop and issue policies and procedures for maintaining DLA’s IT projects inventory for investment management purposes. Finalize and issue policies and procedures (including the use of information from the IT systems and project inventory) for the PEO Review Board’s oversight of IT projects. Develop and issue similar policies and procedures for the other investment boards. Finalize and issue guidance supporting the identification of business needs and implementing management controls to ensure that proposals submitted to DLA for review clearly identify and define business requirements. Develop and issue guidance for the proposal selection process in such a way that the criteria for selection are clearly set forth, including formally assigning responsibility for managing the proposal selection process and establishing management controls to ensure that the proposal selection process is working effectively. Ensure that the plan next focuses on stage 3 critical processes, which are necessary for portfolio management, because along with the stage 2 foundational processes, these processes are necessary for effective management of IT investments. Implement the approved plan and report on progress made against the plan’s goals and time frames to the secretary of defense every 6 months. DOD provided what it termed “official oral comments” from the director for acquisition resources and analysis on a draft of this report. In its comments, DOD concurred with our recommendations and described efforts under way and planned to implement them. However, it recommended that two report captions be changed to more accurately reflect, in DOD’s view, the contents of the report and to eliminate false impressions. Specifically, DOD recommended that we change one caption from “DLA’s Capabilities to Effectively Manage IT Investments Are Limited” to “DLA’s Capabilities to Effectively Manage IT Investments Should Be Improved.” DOD stated that this change is needed to recognize the fact that DLA has completed about 75 percent of the practices associated with stage 2 critical processes. We do not agree. As stated in our report, to effectively manage IT investments an agency should (1) have basic, project-level control and selection practices in place (stage 2 processes) and (2) manage its projects as a portfolio of investments (stage 3 processes). Although DLA has executed most of the key practices associated with stage 2 processes, the agency acknowledges that it has not implemented any of the stage 3 processes. Therefore, our caption as written describes DLA’s IT investment management capabilities appropriately. In addition, DOD recommended that we change the caption “DLA Lacks a Plan to Guide Improvement Efforts” to “DLA Lacks a Published Plan to Guide Improvement Efforts.” DOD stated that this change is needed because DLA has developed some elements of an implementation plan. We do not agree. 
Our point is that DLA did not have a complete process improvement plan, not merely that it had yet to publish one. As we describe in the report, a complete plan should, at a minimum, (1) be based on a full assessment of process strengths and weaknesses, (2) specify measurable goals, objectives, milestones, and needed resources, (3) clearly assign responsibility and accountability for accomplishing well-defined tasks, and (4) be documented and approved by agency leadership. In contrast, DLA’s planning document was based on a preliminary assessment of only stage 2 critical processes and lacked several of the critical attributes listed above. Moreover, DOD stated in its comments that DLA has not completed a formally documented and prioritized implementation plan to resolve stage 2 and 3 practice weaknesses and has yet to complete the self-assessment and gap analysis necessary to define planned action items. Accordingly, it is clear that DLA has not satisfied the tenets of a complete plan, and thus our caption is accurate as written. DOD provided additional comments that we have incorporated as appropriate in the report. We are sending copies of this report to the chairmen and ranking minority members of the Subcommittee on Defense, Senate Committee on Appropriations; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, House Committee on Appropriations; and the Subcommittee on Military Readiness, House Committee on Armed Services. We are also sending copies to the director, Office of Management and Budget; the secretary of defense; the under secretary of defense for acquisition, technology, and logistics; the deputy under secretary of defense for logistics and materiel readiness; and the director, Defense Logistics Agency. Copies will be made available to others upon request. If you have any questions regarding this report, please contact us at (202) 512-3439 and (202) 512-7351, respectively, or by e-mail at hiter@gao.gov and mcclured@gao.gov. An additional GAO contact and staff acknowledgments are listed in appendix II. In addition to the individual named above, key contributors to this report were Barbara Collier, Lester Diamond, Gregory Donnellon, Sabine Paul, and Eric Trout.
The Defense Logistics Agency (DLA) relies extensively on information technology (IT) to carry out its logistics support mission. This report focuses on DLA's processes for making informed IT investment decisions. Because IT investment management has only recently become an area of management focus and commitment at DLA, the agency's ability to effectively manage IT investments is limited. The first step toward establishing effective investment management is putting in place foundational, project-level control and selection processes. The second step toward effective investment management is to continually assess proposed and ongoing projects as an integrated and competing set of investment options. Accomplishing these two steps requires effective development and implementation of a plan, supported by senior management, which defines and prioritizes investment process improvements. Without a well-defined process improvement plan and controls for implementing it, it is unlikely that the agency will establish a mature investment management capability. As a result, GAO continues to question DLA's ability to make informed and prudent investment decisions in IT.
You are an expert at summarizing long articles. Proceed to summarize the following text: Employer-sponsored pension plans, in combination with Social Security and personal savings, provide millions of retirees and their families with retirement income. Employers can provide these benefits using two basic types of plans—defined benefit (DB) or defined contribution (DC) pension plans. For a DB plan, the employer determines retirement benefit amounts for individual employees using specific formulas that consider certain factors, such as age, years of service, and salary levels. Employers bear the full responsibility and risk of providing sufficient funding to guarantee that the benefits promised by the formulas will be paid. The amount an employer must contribute to a DB plan can vary from year-to-year depending on changes in areas such as workforce demographics or investment earnings. For a DC plan, the employer establishes an individual account for each eligible employee and generally promises to make a specified contribution to that account each year. Employee contributions are also often allowed or required. The employee’s retirement benefits depend on the total of employer and employee contributions to the account as well as the investment gains and losses that have accumulated at the time of retirement. Therefore, the employee bears the risk of whether the funds available at retirement will provide a sufficient level of retirement income. Private employers are not required to provide their employees with pension benefits; however, those employers that do so must meet certain minimum legal standards. The Employee Retirement Income Security Act of 1974 (ERISA) requires that private employers manage pension plan funds prudently and in the best interests of participants and their beneficiaries, that participants be informed of their rights and obligations, and that there be adequate disclosure of the plan’s terms and activities. For DB plans only, ERISA created a federal insurance program financed primarily by employer-paid premiums to guarantee the payment of pension benefits when an underfunded DB plan is terminated. Administrative expenses for pension plans include fees for plan design, payroll deductions, recordkeeping, trustee services, regulatory compliance, investment management, and employee communications. To provide the requested information, we used a research database of computerized IRS Form 5500 reports that is maintained by the Department of Labor (DOL). ERISA requires all private employers that sponsor pensions, regardless of their size or industry, to annually file a separate Form 5500 report for each of their pension plans. Each report is to include financial, participant, and actuarial data. Unlike other studies that tended to concentrate on the total number of each type of plan in existence, our study focused on the number of employers that used either or both types of pension plans. We did not independently verify the accuracy of the research database; however, both IRS and DOL check the data for accuracy and consistency. It is important to note that these checks cannot substitute for reviewing each employer’s original records—the data were accurate only to the extent that employers exercised appropriate care in completing their Form 5500 reports. As agreed with your office, we limited the scope of our review to include Form 5500 data for 1984 through 1993—the 10 most recent years for which the necessary data were available. 
The research database did not contain information on employers that did not offer any type of pension plan. We included only single-employer plans in our analyses, since the database did not indicate the number of companies that participated in multiemployer plans. For employers that offered both DC and DB plans, we could not determine whether these plans covered the same or different groups of employees. It is also important to recognize that the contribution and administrative expense data presented in this report are averages for large groups of employers. Thus, the data do not represent any individual employer’s pension plan arrangements or practices nor do they represent the maximum allowable employee and employer contribution rates for DC plans. Also, pension plan experiences in the entire private sector may not be generalizable to the federal government. Appendixes I and II provide more detailed information on our analyses of the Form 5500 data and the results obtained, respectively. To obtain insights on what factors employers may consider in deciding what types of pension plans to provide their employees, we reviewed the retirement-related literature included in the bibliography at the end of this report. We obtained written comments on a draft of this report from DOL officials. These comments are discussed at the end of this letter. We did our review in Washington, D.C., from February 1996 to July 1996 in accordance with generally accepted government auditing standards. Appendix I contains a more detailed discussion of our objectives, scope, and methodology. From 1984 to 1993, the number of private employers that sponsored single-employer pension plans increased from approximately 455,000 to almost 565,000. Over this same period, the percentage of all employers that offered only DC pension plans increased from 68 to 88 percent. The percentage of employers that offered only DB plans decreased from 24 to 9 percent, and the percentage that offered both DC and DB plans decreased from 8 to 3 percent. Figure 1 shows the change in the percentage of employers offering each combination of pension plans from 1984 to 1993. The increase in the percentage of employers sponsoring only DC plans occurred across companies of all sizes. In 1984, the majority of employers with fewer than 500 employees offered only DC plans, while the majority of employers with 500 or more employees sponsored DB plans—either alone or in combination with a DC plan. By 1993, the only employment-size categories where DB plans continued to be offered by the majority of companies were employers with 10,000 to 19,999 employees and 20,000 to 49,999 employees. Figures 2 and 3 show the percentage of employers offering the various types of pension plans by employer size for 1984 and 1993, respectively. The increase in the percentage of employers offering only DC plans occurred across all industries. For example, the number of employers in the services industry that sponsored pension plans increased from 183,908 in 1984 to 268,533 in 1993, and the percentage of those employers that offered only DC plans increased from 71 to 93 percent. Table 1 shows the numbers of employers that sponsored pension plans and the proportion of those employers that sponsored only DC plans by industry for 1984 and 1993. The DOL database was not readily amenable to tracking the pension plans of individual employers over the 1984 to 1993 time period. 
Accordingly, we did not determine the extent to which individual employers may have terminated and replaced their DB plans with DC plans. However, one study examined 11,950 employers that sponsored DB plans in 1985. For the 6,974 of these employers that filed a Form 5500 report in 1992, the study found that 1,449 (or about 21 percent) of the employers had terminated their DB plans and adopted or retained DC plans. Larger employers continued to use DB plans more extensively than smaller employers. From 1984 to 1993, an increasing proportion of the employers with 2,500 or more employees offered both DC and DB plans. In 1993, nearly half of these employers included a DB plan as part of their retirement package—with or without a supplementary DC plan. Table 2 shows the percentage of employers that offered only DC, DC and DB, or only DB plans by employer size for 1984 and 1993. In 1993, the average ratio of employer-to-employee contributions for employers that sponsored only DC plans was 1.8 (that is, employers contributed $1.80 for each $1.00 contributed by employees). The average ratio for employers that sponsored only DB plans was 19.7. For employers that sponsored both DC and DB plans, the ratios were 0.6 and 58.8, respectively. These results suggest that most private employers did not require employees to contribute to their DB plans. Furthermore, employers contributed proportionately more to DC plans that were designed to provide primary pension benefits than they did to DC plans that supplemented the benefits of a DB plan. From 1988 to 1993, employers provided a declining proportion of the total contributions for DC plans and an increasing proportion of the total contributions for DB plans, regardless of whether they offered one or both types of plans. The ratio of employer-to-employee contributions decreased from approximately 3.8 to 1.8 for employers that sponsored only DC plans and, with considerable fluctuation during the intervening years, increased from 19.1 in 1988 to 19.7 in 1993 for employers that sponsored only DB plans. Similarly, for employers that sponsored both DC and DB plans, the ratio decreased from 0.8 to 0.6 for the DC plans and increased from 20.1 to 58.8 for the DB plans. Figure 4 shows these ratios by the type of plans sponsored from 1988 to 1993. In 1993, the average reported administrative expense per plan participant was $103 for employers that sponsored only DC plans and $157 for employers that sponsored only DB plans. For employers that sponsored both DC and DB plans, the administrative expense per participant was $71 and $125, respectively. Therefore, a retirement benefits package that consisted of only a DC plan was the least expensive to administer, on average. These results also suggest that employers that sponsor both DC and DB plans may experience some administrative efficiencies compared with employers that offer only one type of pension plan. Although we did not analyze administrative expenses by employer size, the literature indicates that larger companies incur lower administrative expenses because of considerable economies of scale. From 1988 to 1993, the average reported administrative expense per participant remained fairly constant for both DC and DB plans. Consequently, growth in administrative expenses did not appear to explain why employers that already had a DB plan would shift to a DC plan.
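The ratios and per-participant amounts above follow directly from aggregate Form 5500 totals (the computation is described in appendix I). The short example below uses hypothetical aggregates, chosen only so that the results match the 1.8 ratio and $103 expense reported for DC-only sponsors, to show the arithmetic.

```python
# Worked example of the ratio and per-participant calculations; the aggregate
# totals are hypothetical, chosen only to reproduce the reported DC-only figures.
def employer_to_employee_ratio(employer_total, employee_total):
    """Dollars contributed by employers per dollar contributed by employees."""
    return employer_total / employee_total

def admin_expense_per_participant(expense_total, participants):
    return expense_total / participants

employer_contributions = 900_000_000  # hypothetical aggregate, in dollars
employee_contributions = 500_000_000
administrative_expenses = 51_500_000  # expenses actually charged to the plans
participants = 500_000

print(employer_to_employee_ratio(employer_contributions, employee_contributions))  # 1.8
print(admin_expense_per_participant(administrative_expenses, participants))        # 103.0
```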
However, the lower administrative expense associated with DC versus DB plans might have been an influential factor for employers designing a retirement benefits package for the first time. Figure 5 shows the average administrative expense per participant for pension plans by the type(s) of plans employers offered from 1988 to 1993. The literature suggested various factors that may explain why employers might prefer to offer only DC plans to their employees. These factors included (1) changes in pension regulations and tax policy that may favor DC plans over DB plans, (2) increases in the stock and bond markets that may encourage employers to terminate DB plans to capture retirement fund assets that exceed plan liabilities, and (3) changes in workers’ preferences that are based on their expectations of short tenures with several employers. Since its enactment in 1974, Congress has passed many laws that amended ERISA and increased the complexity of pension regulations. For example, employers must comply with certain minimum funding requirements and maximum funding limits, limits on plan participation by highly compensated individuals or owners, and time limits on how long employees must work before being entitled to pension benefits. Since ERISA was enacted, the plan termination insurance premiums paid by employers with DB plans have increased from $1 to $19 for each plan participant and an additional variable premium is required for underfunded plans. Some studies indicated that the increasing complexity of pension regulations was more burdensome for DB plans compared with DC plans—particularly for smaller employers. Changes in tax policy may also have affected employers’ choice of DC plans over DB plans. For example, in 1978, Congress added section 401(k) to the Internal Revenue Code, which allowed employers to establish DC plans to which employers and/or employees could contribute on a pretax basis and defer taxes on earnings until funds are withdrawn from the plan. The number of 401(k) plans offered by employers that sponsored only DC plans increased from 13,610 in 1984 to 155,384 in 1993. During the 1980s, a strong stock market and higher long-term interest rates contributed to an increase in the number of overfunded DB plans. Employers can terminate an overfunded plan by purchasing annuities to satisfy the plan’s current obligations and “reverting” any excess assets back to the company. Some employers use the excess assets for nonpension purposes, such as investment in plant and equipment or retirement of long-term debt. The literature indicated that employers frequently replace terminated DB plans with either a new DB plan or DC plan; however, employers are not legally required to do so. From 1975 to 1988, employers terminated an average of 6,500 DB plans each year, although not all of these terminations were the result of a reversion. The rising stock and bond markets were also credited with enhancing the popularity of DC plans with workers who saw their pension account balances increase dramatically. Employees in DC plans are immediately entitled to their own contributions and any earnings on those contributions if they change employers. Moreover, employees in DC plans are entitled to the contributions made by their employers as well as the earnings on those contributions after a minimum period of employment (no more than 7 years).
The literature suggests that some workers may prefer DC plans because they are easier to understand and workers can take their pension benefits with them if they were to change jobs. Some of the factors that may influence private employers to choose one type of plan over another are not relevant to public employers, including the federal government. For example, because governments are not taxpaying entities, they are not influenced by opportunities to reduce federal taxes on their revenues. Furthermore, governments that finance their pension plans on a pay-as-you-go basis have no opportunity to revert excess pension assets for other purposes. Several studies suggested that the reduction in the number of employers with DB plans was not related to any particular policy considerations, but resulted largely because of shifts in the U.S. labor market. Specifically, during the 1980s, employment grew in the services industry where employers favored DC plans, while employment declined in the manufacturing industry where employers traditionally sponsored DB plans. Other studies indicate that many employers will continue to use DB plans as a human resource tool to attract workers with certain characteristics, reward long tenured employees, and achieve desired employee retention patterns. Furthermore, as the workforce ages and retirement becomes imminent for more older workers, employees may begin to place more value on DB plans as they evaluate the sufficiency of their DC accounts. In response to these considerations, an increasing number of employers are adopting hybrid retirement programs that combine the features of DC plans and DB plans. We requested comments on a draft of this report from the Secretary of Labor or his designee. In a letter dated September 3, 1996, the Assistant Secretary of Labor for Pension and Welfare Benefits provided Labor’s comments. The comments were of a technical nature and suggested that we make it more prominently clear that our results (1) reflect only those employers that sponsored single-employer pension plans, (2) may overstate total employer contributions and understate employee contributions as a result of inconsistent reporting on the Form 5500, and (3) reflect only those administrative expenses actually reported on the Form 5500. We clarified the report where necessary to reflect these comments. The Assistant Secretary also suggested that the report provide more information on the number of participants covered by only DC, DC and DB, or only DB plans. We agree that this information could be very useful. However, our study was designed with the employer as the unit of analysis and did not allow us to develop extensive information on pension participants. We are sending copies of this report to the Ranking Minority Member of the Subcommittee, the Chairman and Ranking Minority Member of the Senate Governmental Affairs Committee, the Secretary of Labor, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix III. If you have any questions, please call me at (202) 512-7680. In June 1995, the Chairman, Subcommittee on Civil Service, House Committee on Government Reform and Oversight, asked us to provide information on the use of defined contribution pension plans in the private sector. He said that such information would assist congressional decisionmakers as they consider the possible design of a retirement system for new federal hires. 
The objectives of our review were to determine how many private employers offered a retirement program consisting of (1) only defined contribution (DC) plans, (2) only defined benefit (DB) plans, or (3) a combination of the two types of plans; the average employer and employee contributions to the plans; the average administrative expenses charged to the plans; and the factors that may influence private sector employers when they decide to offer DC plans versus DB plans as a part of their employees’ retirement package. To accomplish the first three objectives, we used a research database of computerized IRS Form 5500 reports maintained by the Pension and Welfare Benefits Administration of the Department of Labor (DOL). Under the Employee Retirement Income Security Act of 1974, private employers must annually file a separate Form 5500 report with the Internal Revenue Service (IRS) for each of their pension plans. Each report is to contain financial, participant, and actuarial data. We did not independently verify the accuracy of the DOL database. However, IRS edits the reports by checking addition and consistency on financial and other record items and corresponds with filers to obtain corrected data before providing the computerized data to DOL. DOL further edits the Form 5500 data to identify problems, such as truncated or incorrect entries, before constructing its research database, which consists of (1) all plans with 100 or more participants and (2) a 10-percent sample that is weighted to represent the universe of all plans with fewer than 100 participants. As agreed with your office, we limited the scope of our review to include Form 5500 data for 1984 through 1993—the 10 most recent years for which the DOL database was available. We were unable to determine the number of current employees covered by plans with fewer than 100 participants; therefore, our analyses do not address the number of current employees covered by employers that sponsor only DC, both DC and DB, or only DB plans. The data shown in this report reflect averages for large groups of employers and do not represent any individual employer’s pension plan arrangements or practices. Pension plan experiences of the entire private sector may not be generalizable to the federal government. Moreover, there are no obvious or agreed-upon criteria to determine which private-sector industry type should be or is the most comparable to the federal government. We relied on a DOL programmer with extensive knowledge of the Form 5500 database to complete all of the analyses described in this appendix. To determine the number of employers that sponsored only DC plans, only DB plans, or both types of plans, we sorted the data for each year on the basis of (1) a unique employer identification number and (2) the indicated plan type included on each Form 5500 report. Because of incomplete data, we did not include “multiemployer” plans in our study. According to the Employee Benefit Research Institute (EBRI), multiemployer plans are generally DB plans; they represented 0.4 percent of all plans for 1992. We stratified our analyses by employer size and industry type using the same category breakouts included in DOL’s annual reports on private sector pensions. When we stratified the data, we included only those employers for which we could determine the number of employees or the appropriate industry, respectively, from the database.
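A minimal sketch of this classification step follows (plain Python rather than the DOL programmer's actual routines); the employer identification numbers and plan records are fabricated placeholders.

```python
# Rough sketch of classifying employers by plan-type mix from Form 5500 records:
# group records by employer identification number, then label each employer as
# sponsoring only DC plans, only DB plans, or both. Records are placeholders.
from collections import defaultdict

form5500_records = [
    {"ein": "11-1111111", "plan_type": "DC"},
    {"ein": "11-1111111", "plan_type": "DB"},
    {"ein": "22-2222222", "plan_type": "DC"},
    {"ein": "33-3333333", "plan_type": "DB"},
]

plan_types_by_employer = defaultdict(set)
for record in form5500_records:
    plan_types_by_employer[record["ein"]].add(record["plan_type"])

def classify(plan_types):
    if plan_types == {"DC"}:
        return "only DC"
    if plan_types == {"DB"}:
        return "only DB"
    return "both DC and DB"

counts = defaultdict(int)
for ein, plan_types in plan_types_by_employer.items():
    counts[classify(plan_types)] += 1

print(dict(counts))  # {'both DC and DB': 1, 'only DC': 1, 'only DB': 1}
```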
Table I.1 shows the number of employers with indeterminate employer size or industry category for each year in our study. According to the DOL programmer, IRS began editing the financial data included on the Form 5500 database in 1988; thus, the financial data for the years preceding 1988 were less reliable. Therefore, we included only the data for 1988 through 1993 in our analyses of employer and employee contributions and administrative expenses. To determine the average employer and employee contribution to plans sponsored by employers with only DC plans, only DB plans, or both types of plans, we divided the sum of all reported employer or employee contributions by the sum of all participants covered by the plans. We included only participants in those plans for which employer or employee contributions were reported. According to a DOL representative, plans that reported 100-percent employee participation tended to be ones where employers make automatic contributions to the plan on behalf of their employees. Therefore, we did not include these plans when computing employee contribution amounts. It is important to note that employers are not required to make contributions to DB plans each year—contribution amounts are determined on the basis of current actuarial assumptions and the market value of fund assets. To determine the average administrative expense per participant of plans sponsored by employers with only DC plans, only DB plans, or both types of plans, we divided the sum of all reported administrative expenses by the sum of all participants covered by the plans. We included only participants in those plans for which employers reported administrative expenses. According to DOL representatives, employers generally report only those administrative expenses that are actually charged to the plans and exclude expenses taken directly out of employee contributions or investment returns. Therefore, the administrative expenses may be underreported in these data. To address the fourth objective—to identify factors that may influence private employers to sponsor DC rather than DB plans—we reviewed retirement-related literature that we identified using an on-line business periodical system and bibliographies from EBRI and Congressional Research Service publications.
Table II.1: Number of Private Sector Employers, by Type of Pension Plan Offered (1984-1993)
Table II.2: Number of Participants Covered by Private Pension Plans, by Type of Plan Offered (1984-1993)
Table II.3: Number of Private Sector Employers, by Employer Size and Type of Pension Plan Offered (1984-1993)
Table II.4: Number of Private Sector Employers, by Industry and Type of Pension Plan Offered (1984-1993)
Note 1: This table includes only those employers that sponsored single-employer pension plans. Furthermore, the average contribution amounts were computed only for plans with reported employer and/or employee contributions. It is important to note that employers are not necessarily required to make contributions to DB plans each year—contribution amounts are determined on the basis of current actuarial assumptions and the market value of fund assets.
Because contribution amounts are averages across all plans, they do not represent the maximum amounts employees are allowed to contribute nor the maximum employer matching contributions allowed by the plans. Note 2: Pension plan experiences of the entire private sector may not be generalizable to the federal government. Moreover, there are no obvious or agreed-upon criteria to determine which private-sector industry type should be or is the most comparable to the federal government.
Note 1: For employers that sponsored both DB and DC plans, the database that we analyzed categorizes the DB plan as "primary" and the DC plan as "supplementary," with very few exceptions. When more than one DC plan is offered, the largest one is generally categorized as the first supplementary plan. Note 2: This table includes only those employers that sponsored single-employer pension plans. Furthermore, the average contribution amounts were computed only for plans with reported employer and/or employee contributions. It is important to note that employers are not necessarily required to make contributions to DB plans each year—contribution amounts are determined on the basis of current actuarial assumptions and the market value of fund assets. Because contribution amounts are averages across all plans, they do not represent the maximum amounts employees are allowed to contribute nor the maximum employer matching contributions allowed by the plans. Note 3: Pension plan experiences of the entire private sector are not generalizable to the federal government. Moreover, there are no obvious or agreed-upon criteria to determine which private-sector industry type should be or is the most comparable to the federal government.
Allen, S.G., R.L. Clark, and A.A. McDermed. "Pensions, Bonding, and Lifetime Jobs." The Journal of Human Resources (1992), pp. 463-481.
Beller, D.J., and H.H. Lawrence. "Trends in Private Pension Plan Coverage." Trends in Pensions. Washington, D.C.: U.S. Government Printing Office, 1992.
Bloom, D.E., and R.B. Freeman. "Trends in Nonwage Inequality: The Fall in Private Pension Coverage in the United States." AEA Papers and Proceedings, Vol. 82 (1992), pp. 539-545.
Bodie, Z. "Pensions as Retirement Income Insurance." Journal of Economic Literature, Vol. 28 (1990), pp. 28-49.
Chang, A. "Explanations for the Trend Away From Defined Benefit Pension Plans." Congressional Research Service Report for Congress, Washington, D.C.: 1991.
Cornwell, C., S. Dorsey, and N. Mehrzad. "Opportunistic Behavior by Firms in Implicit Pension Contracts." The Journal of Human Resources, Vol. 26 (1991), pp. 704-725.
Even, W.E., and D.A. Macpherson. "Why Did Male Pension Coverage Decline in the 1980s?" Industrial and Labor Relations Review, Vol. 47 (1994), pp. 439-453.
Foster, Ann C. "Employee Participation in Savings and Thrift Plans, 1993." Monthly Labor Review (1996), pp. 17-22.
"Fundamentals of Employee Benefit Programs." Employee Benefit Research Institute, Washington, D.C.: 1990.
Gale, W.G. "Public Policies and Private Pension Contributions." Journal of Money, Credit, and Banking, Vol. 26 (1994), pp. 710-732.
Gustman, A.L., O.S. Mitchell, and T.L. Steinmeier. "The Role of Pensions in the Labor Market: A Survey of the Literature." Industrial and Labor Relations Review, Vol. 47 (1994), pp. 417-438.
Gustman, A.L., and T.L. Steinmeier. "The Stampede Toward Defined Contribution Pension Plans: Fact or Fiction?" Industrial Relations, Vol. 31 (1992), pp. 361-369.
Ippolito, R.A. "Toward Explaining the Growth of Defined Contribution Plans." Industrial Relations, Vol. 34 (1995), pp. 1-20.
Ippolito, R.A., and W.H. James. "LBOs, Reversions and Implicit Contracts." The Journal of Finance, Vol. 47 (1992), pp. 139-167.
Kruse, D.L. "Pension Substitution in the 1980s: Why the Shift Toward Defined Contribution?" Industrial Relations, Vol. 34 (1995), pp. 218-240.
Meder, J.A. "Employers See Duty to Offer Pension Plan, But Ask Employees to Do More." Employee Benefit Plan Review (1994), pp. 40-43.
Papke, L.E. "Does 401(k) Introduction Affect Defined Benefit Plans?" Abstract, 1996.
Paré, T.P. "Is Your 401(k) Plan Good Enough?" Fortune (1992), pp. 78-83.
Petersen, M.A. "Pension Reversions and Worker-Stockholder Wealth Transfers." The Quarterly Journal of Economics (1992), pp. 1033-1056.
Schmitt, R. "Pension Issues: Challenges to Retirement Income Security." Congressional Research Service Report for Congress, Washington, D.C.: 1993.
Schmitt, R. "Private Pension Plan Standards: A Summary of ERISA." Congressional Research Service Report for Congress, Washington, D.C.: 1995.
Schmitt, R., and G. Falk. "Trends in Private Pension Plans: Is There Cause for Concern?" Congressional Research Service Report for Congress, Washington, D.C.: 1994.
Schmitt, R., and J.R. Storey. "Pension Asset Reversions: Whose Money Is It?" Congressional Research Service Issue Brief, Washington, D.C.: 1989.
Segal, D.J., and H.J. Small. "Are Defined Benefit Plans About to Come Out of Retirement?" Compensation and Benefits Review (1993), pp. 22-26.
Silverman, C. "Pension Evolution in a Changing Economy." Employee Benefit Research Institute, Washington, D.C.: 1993.
Spencer, R.D. "Defined Benefit vs. Defined Contribution: Industry Leaders Debate Pros and Cons." Employee Benefit Plan Review (1994), pp. 43-46.
Stone, M. "A Financing Explanation for Overfunded Pension Plan Terminations." Journal of Accounting Research, Vol. 25 (1987), pp. 317-326.
Vaughn, R.L. "Defined Benefit Plans and the Reality Behind Misconceptions." Pension World (1992), pp. 21-23.
Yakoboski, P., et al. "Employment-Based Retirement Income Benefits: Analysis of the April 1993 Current Population Survey." Employee Benefit Research Institute, Washington, D.C.: 1994.
Pursuant to a congressional request, GAO provided information on: (1) the numbers and types of pension plans sponsored nationwide by private employers from 1984 to 1993; (2) the proportions of total contributions made to these plans by employers and employees; (3) the average administrative expense for these plans; and (4) the reasons employers sponsor certain types of pension plans. Using a computerized database of reports employers have filed with the Internal Revenue Service, GAO found that: (1) in 1993, 88 percent of private employers with single-employer pension plans sponsored only defined contribution (DC) plans; (2) this represented a sizable increase over 1984, when 68 percent of private employers reported they had only DC plans; (3) from 1984 to 1993, the percentage of employers that offered only defined benefit (DB) plans decreased from 24 to 9 percent, and those employers offering both DC and DB plans decreased from 8 to 3 percent; (4) the growth in DC plans occurred across all employer sizes and industries; however, the percentage of employers with 2,500 or more employees that sponsored both DC and DB plans increased over the same period, and nearly half of these employers continued to sponsor a DB plan in 1993; (5) the data showed that private employers generally did not require employees to contribute to DB plans and that employers provided a greater proportion of total contributions to DC plans that were the only plan offered to employees compared with DC plans that supplemented a DB plan; and (6) from 1988 to 1993, administrative expenses remained fairly constant for DC and DB plans, and the average reported administrative expense per participant was lowest for employers that offered only DC plans. GAO also found that: (1) its review of retirement literature revealed a variety of possible explanations for why employers might prefer DC over DB plans if they decide to sponsor only one type of plan; (2) these factors included increasingly complex regulations for DB plans, a surge in the number of employers terminating DB plans to acquire capital assets, and employees' growing preference for pension benefits that they can retain when they change jobs; (3) the literature also noted that an employment shift has occurred away from industries in which employers traditionally favored DB plans toward industries in which employers favored DC plans; and (4) because the literature primarily addressed factors that influence private-sector decisionmaking on pension plan design, these factors may or may not be relevant to the federal government and other public employers.
The mission of the MFO is to observe and report on Israeli and Egyptian compliance with the security aspects of the 1979 treaty of peace. The agreement established four security zones—three are in the Sinai in Egypt, and one is in Israel along the international border. The multinational force occupies checkpoints and conducts periodic patrols to observe the treaty parties' adherence to agreed force limitations and patrols the Strait of Tiran between the Gulf of Aqaba and the Red Sea to ensure freedom of navigation. The agreed force limitations for the four zones (see fig. 1) are:
Zone A: One Egyptian mechanized division containing up to 22,000 troops;
Zone B: Four Egyptian border battalions manned by up to 4,000 personnel;
Zone C: MFO-patrolled areas within Egypt, although civil police units with light weapons are also allowed; and
Zone D: Up to four Israeli infantry battalions totaling up to 4,000 troops.
The Department of State oversees U.S. participation in the MFO, nominates a U.S. citizen as the Director General, and helps recruit Americans to serve in the MFO civilian observer unit. MFO headquarters are located in Rome, and the organization also maintains offices in Cairo and Tel Aviv to address policy and administration issues. The Force Commander—who is responsible for command and control of the force—and his multinational staff are located in the North Camp at El Gorah in the Sinai Peninsula. The U.S. infantry battalion and the coastal patrol unit are based in the South Camp near Sharm el Sheikh on the Red Sea (see fig. 2). The MFO's annual operating budget of about $51 million is funded in equal parts by Egypt, Israel, and the United States. All parties pay their contributions in U.S. dollars. Currently, 11 countries deploy troops to the MFO. As of December 2003, the MFO military force consisted of 1,685 multinational troops, of which 687 were from the United States (see fig. 3). Colombia and Fiji also provide infantry battalions, and Italy provides the coastal patrol unit. In addition, there is a civilian observer unit of 15 U.S. citizens that performs reconnaissance and verification missions. The chief observer and about half of the other observers temporarily resign from the State Department to fulfill 1- or 2-year MFO contract commitments; the other civilian observers are usually retired U.S. military personnel with renewable 2-year MFO contracts. (See app. II for details on the MFO work force.) Retired military personnel are often hired for their familiarity with military weapons and organizations, while State personnel are often hired for their diplomatic skills and experience in the region. All members need to become proficient in navigation, map reading, and driving in the Sinai, according to an MFO official. The MFO Director General must be a U.S. citizen who is nominated by the State Department and appointed by the parties for a 4-year renewable term. The Director General appoints the Force Commander for a 3-year term that can also be renewed. The Force Commander cannot be of the same nationality as the Director General. MFO's other civilian employees are generally hired on 2-year contracts that can be renewed at the Director General's discretion. In a 1995 report, we reported that the parties to the treaty and the U.S. government viewed the MFO as effective in helping maintain peace and in reducing certain costs.
However, we found that State needed to provide greater oversight due to a lack of assurance regarding the adequacy of internal controls. The report noted that, unlike other international organizations, the MFO does not have a formal board of directors or independent audit committee to oversee audits. Our recommendations in 1995 included that State take steps to improve its oversight by examining MFO annual financial statements for discrepancies and having MFO’s external auditor periodically perform a separate audit of MFO internal controls that State was to review. State has implemented our recommendations except for examining MFO’s financial statements for discrepancies. State has developed but not completely fulfilled its operational and financial oversight responsibilities described in its guidelines for overseeing the MFO. These oversight responsibilities included evaluating MFO financial practices, conducting oversight visits of MFO operations, and recruiting staff for the civilian observers’ unit. We could not determine the full extent of the department’s compliance with its guidelines because it does not have sufficient documentation to describe the quality and range of its efforts. The Office of Regional Affairs within State’s Bureau of Near Eastern Affairs (NEA) is the single U.S. focal point for all MFO-United States government interaction and oversight. NEA’s guidelines called for State officials to review the external auditors’ reports and evaluate MFO financial practices. While reviews of the auditors’ reports were performed, the staff did not possess the accounting expertise to evaluate MFO financial practices and did not do so. NEA is exploring options for obtaining the necessary expertise; however, it has not finalized its approach for redressing this issue. According to the guidelines, oversight is also informally conducted through the transfer of U.S. government personnel to key MFO positions, including a U.S. civilian observer unit. While State has successfully recruited many civilian observers, it has had difficulty in consistently recruiting candidates with strong leadership capabilities for the chief position. Recruiting for the chief observer post remains a concern because many candidates at State seek higher priority posts, such as the U.S. Embassy in Cairo, to enhance their careers rather than seek an MFO position. In response to a February 2004 recommendation made by the State Office of the Inspector General (OIG), NEA agreed to form an advisory board to oversee MFO operations. In June 2004, the advisory board had its initial meeting and discussed options for making the chief observer position more attractive. The board has not fully developed the range of issues that it will address or established timelines for resolving these issues. NEA developed guidelines for conducting MFO oversight in 1995 and updated the guidelines in 2002; however, the guidelines did not require that NEA document its oversight efforts. The guidelines sought to ensure that (1) U.S. government agreements and foreign policy objectives were being met; (2) MFO personnel practices were appropriate and in accordance with MFO regulations; (3) MFO operations were in compliance with its regulations; and (4) MFO resources were spent appropriately, financial transactions were recorded accurately, and internal controls were adequate. 
We could not determine whether State fully complied with its oversight responsibilities because it did not have sufficient documentation to support the extent and quality of its oversight efforts. We reviewed documentation of State's efforts to provide oversight of MFO from 1996 to 2004. These documents recorded communication between MFO and State officials about daily activities and operations of the MFO. However, we found that this documentation did not fully describe State's oversight efforts, the condition of MFO operations, State's views on MFO policies and practices, or recommendations for improving MFO operations. As a result, we could not determine the extent and quality of NEA oversight activities. The maintenance of accurate and timely records documents the efforts undertaken, and reviews by management help ensure that management directives are carried out. Records are an integral part of an entity's stewardship of government resources. In addition, documentation provides information so that oversight activities can be assessed over time. While State has met some aspects of the guidelines for overseeing MFO, it has not fully complied with its guidelines in other areas, such as evaluating MFO's financial practices. As discussed below, we reviewed the operational and financial oversight guidelines and State's efforts to comply with them. NEA guidelines called for its officials to maintain regular communications with key MFO officials, discuss U.S. foreign policy issues with MFO, and participate in the annual MFO Trilateral Meeting between Egyptian, Israeli, and U.S. officials. We reviewed letters, reports, cables, and e-mails documenting regular communications with MFO officials and State's participation in the MFO Annual Trilateral Conferences of Major Fund Contributors. At the Trilateral, senior MFO officials discussed information of interest to the United States with delegates from Egypt and Israel. The U.S. delegate conveyed the U.S. position to the treaty parties and the MFO and discussed issues that ranged from routine matters relating to management and other administrative issues to major issues concerning MFO finances. NEA guidelines note that the transfer of U.S. government personnel to key MFO positions—including the U.S. civilian observer unit (COU)—is an informal mechanism of U.S. oversight. While NEA has successfully recruited many candidates for the civilian observer positions, it has not consistently recruited candidates with the qualities that senior State officials regard as important for the chief civilian observer post. These qualities include the capacity to exercise strong leadership and management skills in a predominantly male military culture in an isolated environment. Annually, NEA recruits U.S. government employees for about six 1-year observer positions and a chief observer who serves 2 years in the MFO. State reviews the applications for these posts, develops a "short list" from which the MFO Director General selects a candidate, provides input into the final selection, and recommends the candidate for the chief observer position. In recent years, according to MFO and State officials, ineffective leadership in the chief observer position contributed to considerable turnover in the unit. A number of chiefs or interim chiefs were dismissed or transferred due to poor leadership capabilities. These problems resulted in low morale in the unit and the early resignation of several observers.
Recruiting for the leadership post remains a concern because many qualified candidates at State desire and accept higher-priority posts to enhance their careers rather than seek MFO positions. According to a senior State official, the qualities that make a good chief observer—regional experience, including Arabic language skills, and managerial experience—are in demand at regional posts with higher-priority staffing demands. The MFO Director General stated that he would like to broaden the pool of candidates and recruit from other sources for the leadership position if State could not provide a candidate with appropriate credentials. State officials oppose this approach, stating that the position is an important symbol of U.S. commitment and requires an experienced Foreign Service Officer. According to senior State officials, the department is reviewing options, such as elevating the position to a more senior Foreign Service level, to make the position more attractive to Foreign Service Officers. However, a timeline for addressing this issue has not yet been established. NEA has fulfilled some of its financial oversight responsibilities; however, its staff lacked the expertise to perform many required tasks. The guidelines called for NEA to review MFO budgets and financial plans; analyze income, expenditures, and inflation rates; review and analyze annual audit and internal control reports issued by the external auditor; and evaluate MFO financial and auditing regulations. NEA guidelines stated that the OIG would provide assistance in evaluating MFO financial and auditing regulations. While NEA officials reviewed budgets and financial plans, audits, and internal control reports, they did not evaluate the financial and auditing regulations of the MFO, review its accounting notes, or assess the potential financial impact that inflation rates had on the MFO budget request. NEA officials stated that NEA staff did not possess the needed accounting and auditing expertise to fulfill all of the financial oversight responsibilities and that the OIG has not provided accounting and auditing assistance to NEA since 1998. In June 2004, NEA officials stated that they were exploring options for obtaining the necessary accounting expertise to review MFO financial practices; however, they have not yet determined how they will redress this issue or established a timeframe for doing so. Leading practices indicate that personnel need to possess and maintain the skills to accomplish their assigned duties. Staff with the required skills could provide reasonable assurance that U.S. contributions are being used as intended and that financial reporting, including reports on budget execution and financial statements, is reliable. Public Law 97-132 authorized U.S. participation in the MFO and established a requirement that the President submit annual reports to Congress every January 15. The report is to describe, among other things, the activities performed by MFO during the preceding year, the composition of observers, the costs incurred by the U.S. government associated with U.S. troops participating in the MFO, and the results of discussions with Egypt and Israel regarding the future of MFO and its possible reduction or elimination. State has met the annual reporting requirement. NEA officials conducted biannual oversight visits to MFO headquarters and field locations as called for in the guidelines but did not document the results of those visits.
In addition, the OIG reported that it found no trip reports prepared by NEA during that office's 20 years of MFO oversight. The guidelines stated that the purpose of the visits is to observe MFO operations and conditions in the field and compare observed practices with published MFO regulations. Among other things, oversight visits were to include tours of MFO facilities, including offices, warehouses, checkpoints, and facilities for U.S. soldiers; meetings with all key MFO and U.S. military officials; and meetings with members of the U.S. civilian observer unit. According to State officials, briefings were held afterwards to describe the visits, but written reports of these visits were not completed. However, without the maintenance of accurate and timely records, it is difficult to determine whether management directives were appropriately carried out. In November 2003, State's Inspector General conducted an internal review of NEA and made recommendations in February 2004 to improve NEA oversight. The OIG recommended that NEA transfer some of its oversight responsibilities from its Office of Regional Affairs to the Office of the Executive Director of NEA (NEA/EX). The OIG also recommended that NEA establish an advisory board to review MFO management practices and internal controls, including internal audits, and ensure the independence of these audits. NEA plans to give responsibility for the oversight of management and personnel issues to NEA/EX while the Office of Regional Affairs retains responsibility for the oversight of policy issues. NEA also agreed to form an oversight board to oversee MFO operations that is to be chaired by NEA/EX and include representatives from the OIG and the bureaus of Human Resources, International Operations, and Political-Military Affairs. The board, which met for the first time in mid-June 2004, discussed approaches to attracting candidates for the position of chief observer and other COU recruiting issues. The board has yet to determine its full range of responsibilities or scope of work. In addition, it has not yet established timelines for addressing these areas. MFO managers have made improvements to MFO's personnel system but have not systematically updated it since 1985. For example, the Director General recently appointed a longer-serving civilian with personnel management expertise to replace the short-term military personnel officers serving short rotations on the MFO command staff. Moreover, leading personnel practices suggest that other aspects of the MFO personnel system could be reviewed and subsequently modified. For example, MFO regulations and procedures do not clearly provide for outside mediation or external avenues of appeal for MFO employee complaints involving discrimination or sexual harassment. In addition, the MFO has not addressed disparities in the representation of women in its workforce, especially in management positions, nor has it identified where barriers may operate to exclude certain groups and addressed those barriers. MFO's current Director General has taken steps to update personnel policies to retain staff. In 2003, the Director General appointed a longer-serving civilian with personnel management expertise to replace the short-term military personnel officers serving short rotations on the MFO command staff.
According to MFO documents and officials, this new manager for personnel in the Sinai provides continuity over personnel issues, takes a more active role in recruitment, has surveyed employees and acted on their concerns about safety and other quality of life issues, and is responsible for the equitable allocation of housing. Moreover, he has sought additional training to improve his effectiveness in this new role. Finally, MFO leadership has updated grievance procedures pertaining to sexual harassment complaints to boost employee confidence in the system and reemphasized to employees that they have zero tolerance for infractions of this policy. As the current Director General left in June 2004, his successor will have to demonstrate a similar commitment to these changes in personnel policy to ensure that they succeed. MFO management is taking steps to improve workforce planning. MFO managers stated that the personnel system was originally modeled in 1982 on State Department and U.N. systems. In addition, the MFO personnel manager stated that MFO managers have not undertaken a review of the personnel management rules since an outside consultant examined MFO personnel policies in 1985. However, MFO reviewed and updated some sections of its personnel manual in January 2004. MFO has also begun to make increased use of information technology to compare its future work requirements with its current human resources, and is using existing U.S. Army efficiency reviews of the U.S. contingent’s operations to suggest ways to restructure its own military staff (see app. III for excerpts from our model for strategic human capital management planning). To acquire, develop, and retain talent, MFO management has updated its recruitment practices to ensure that its new hires are both qualified and a good “fit” for the demanding work conditions in the Sinai. MFO uses professional recruiters to obtain civilians better suited to the MFO environment. MFO management has also updated its introductory materials, handbooks, and Web site to give prospective recruits a more comprehensive view of work requirements, benefits, and living conditions. MFO managers stated, however, that the “temporary” nature of the MFO mission precluded it from developing a career track for international staff. It does not, for example, provide the benefits that a career service track would offer, such as routine opportunities for promotion and pensions for long-serving employees. Nevertheless, the MFO has introduced incentives to retain long-serving staff, including pay increases normally worth 2 percent of salary for every employee who signs a contract extension and special nonmonetary service awards for 10-year and 20-year employees. The MFO has also introduced improved performance appraisals for new staff on their probationary period and at the end of their contract period. These appraisals include basic assessments of job skills, performance, leadership, communications, cost management, initiative, and adjustment to the work environment and document performance feedback sessions. Staff are allowed to read and comment on their appraisals. MFO, however, does not require its managers or staff to use a detailed formal appraisal to document annual performance reviews and feedback sessions. Instead, managers have the option of declaring that a staff member has performed satisfactorily. 
The new chief of personnel services stated that it was his intention to systematically collect employee feedback to help adjust MFO's human capital approaches and workforce planning, but he had not yet developed any data collection instruments as of December 2003. Despite its efforts to improve its personnel management practices, the MFO has not addressed two challenges that leading practices indicate could adversely affect its ability to manage its human capital resources strategically. These challenges are (1) the degree to which its grievance procedures are subject to outside and neutral arbitration or other alternative dispute resolution mechanisms and (2) the gender imbalance in the international civilian workforce. Although the MFO employee grievance policy encourages early reporting and resolution at the lowest level practicable, it does not clearly provide for an independent avenue of appeal in cases of discrimination or sexual harassment. MFO's policies against discrimination and harassment allow for the possibility of employees using outside mediators to resolve complaints when an internal inquiry or investigation determines that sexual harassment or discrimination has occurred. However, the decision to use mediation rests with the Force Commander or Contingent Commander, not the complainant. Furthermore, MFO procedures do not allow complainants to seek mediation or pursue appeal outside the MFO when an investigation results in a finding that harassment or discrimination has not occurred. In contrast, the Equal Employment Opportunity Commission calls for U.S. agencies to make alternative dispute mechanisms, such as mediators, available to complainants, and U.S. antidiscrimination laws allow complainants to appeal their cases in court if necessary. We noted in past work that no single model for international organizations' grievance procedures exists because criteria such as the degree of independence of a grievance board or committee depend on the legal environments in which these organizations operate. Nevertheless, our analysis of leading practices in the World Bank and other organizations indicates that a lack of clear means for resolving such grievances could be a concern for an organization's management because it could undermine employee confidence in the fairness of the personnel management system. For example, the U.S. government and the private sector employ alternative dispute resolution mechanisms such as arbitration, mediation, or management review boards to resolve discrimination complaints and other grievances in a cost-effective manner. Moreover, we noted that U.S. government agencies and international organizations have determined that access to alternative dispute mechanisms and an avenue for independent appeal can enhance employee confidence in the entire human capital system. MFO's current gender imbalance in management may also merit attention. The imbalance may indicate that there are obstacles to women attaining management positions that need to be addressed. The United Nations, for example, determined that the gender balance of its professional workforce was problematic, particularly in the management of peace operations. To address this imbalance, the United Nations is trying to achieve a professional workforce with a 50 percent gender balance.
We examined MFO prepared documents that showed that women represented 29 percent of the workforce (31 out of 108 international and national civilian positions) and women filled only 8 percent of management positions (1 of 13 as of June 2004). In the United States, the Equal Employment Opportunity Commission would consider that such a gender disparity could be evidence of a differential rate for selection for women that warrants management attention. State and MFO managers have noted that there are mitigating circumstances that may explain the lower representation of women in MFO’s workforce. They stated a number of factors that might make the MFO posts in the Sinai an unattractive workplace for women: It is a predominantly male and military culture, there are few posts that allow for accompaniment by spouses, and it has no facilities for children. Nevertheless, these gender differences also exist at MFO locations in Rome, Tel Aviv, and Cairo, where these factors are not necessarily a concern. Leading practices among public organizations include evaluating the composition of their workforce, identifying differences in representation among groups, identifying where barriers may operate to exclude certain groups, and addressing these barriers. The MFO has taken steps to improve its financial accountability and its related financial internal controls over the past 9 years. It has also taken additional steps to improve its financial reporting to the State Department and to strengthen internal controls in response to recommendations we made to the MFO through the Department of State in 1995. Since then, the external auditors of its financial statements found no material weaknesses. The external auditors who reviewed MFO internal controls determined that the internal controls they tested were effective. However, internal control standards adopted by the MFO suggest that the MFO could do more to enhance the external audit function, particularly through the use of an independent audit committee to review the scope of activities of the internal and external auditors annually. MFO and some State officials stated that this concern has been addressed by the new audit and review mechanisms adopted by the MFO since our last report. Israeli and Egyptian officials stated that their governments are satisfied with the degree of financial oversight and control they exercise over the MFO. Nevertheless, officials from State’s OIG and senior managers within State’s NEA Bureau acknowledged that the bureau’s new MFO management advisory board needs to examine the issue of creating an external oversight board. The MFO has taken steps to improve financial accountability and strengthen internal controls. To keep the budget under $51 million and improve the efficiency of the organization by emulating leading commercial management practices, MFO has (1) adopted a business activity tracking software program to improve management visibility over financial activities and logistics management, and (2) hired a management review officer to identify cost savings through the reviews of management procedures and contracts. Although we have not performed any direct testing of the software, or assessed the role or performance of the management review officer, both initiatives appear to be positive steps for MFO. 
According to MFO staff, its adoption of a commercial business activity tracking software package in 2001 led to greater management oversight over all stages of procurement and other transactions and has strengthened internal controls. MFO officials state that this new system has built-in requirements for managerial approval at each step of the procurement process. Under this system, MFO procurement officers are assigned preset spending authority. Further, all procurement over $50,000 and any sole source contract over $30,000 requires the approval of the Director General. According to MFO officials, the visibility and control provided by this system have also simplified the external auditor’s task in conducting its latest review of internal controls. MFO officials and documents did not attribute any budget savings directly to the implementation of this new system. However, they stated that they were able to reduce the number of staff and centralize four procurement operations. MFO officials did not make available the results of any recent implementation testing, however, and noted that many of the key performance indicators the system will track are under development. In 2001, MFO hired a management review officer to identify cost savings through the reviews of management procedures, logistics contracts, and compliance with MFO requirements and controls. At the request of the Director General, this official performs some inspector general functions by conducting investigations on specific operations and accountability controls, and makes recommendations to improve procedures. According to MFO records and estimates, the MFO has conducted 19 “most efficient organization (i.e., leading practice) reviews” through January 2004. The management review officer’s recommendations contributed to more than $1.6 million in budgetary savings. All MFO special reviews and annual financial audits since 1995 have demonstrated to the satisfaction of the external auditor that MFO maintained sufficient financial accountability. First, in late 1995, its external auditor, Price Waterhouse, reviewed MFO’s internal control structure and made recommendations to strengthen them, which MFO agreed to adopt. Second, in 1996, MFO switched to a new external auditor, Reconta Ernst & Young, to conduct its annual financial audits. It issued unqualified or clean opinions on MFO’s financial statements between 1996 and 2004. Third, MFO commissioned the auditor to perform management compensation and benefits reviews in 1996, 1997, and 1999, which concluded that management received compensation and benefits substantially in compliance with MFO regulations. In 2000, however, the Director General terminated further compensation audits on the external auditor’s recommendation that concluded that these reviews duplicated the annual audit and other reviews. Fourth, MFO commissioned Reconta Ernst & Young to perform separate internal control reviews every 3 years beginning in 1998. Reports issued in 1998 and 2001 stated that its auditors assessed the MFO’s use of internal controls in relation to the criteria established in “Internal Control-Integrated Framework” issued by the Committee of Sponsoring Organizations (COSO) and found that the internal controls it tested were effective. The Treaty Protocol and MFO administrative and financial regulations provide the Director General responsibility for political, operational, and financial control issues pertaining to the organization. 
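As a rough illustration of the procurement controls described above, the following is a minimal sketch of the approval-routing rule; the function name, the escalation step, and the officer limit are hypothetical, and only the $50,000 and $30,000 thresholds come from the text.

```python
def required_approval(amount_usd: float, sole_source: bool, officer_limit_usd: float) -> str:
    """Illustrative approval routing based on the thresholds described in the text:
    procurements over $50,000, and sole-source contracts over $30,000, go to the
    Director General; otherwise the procurement officer's preset authority applies."""
    if amount_usd > 50_000 or (sole_source and amount_usd > 30_000):
        return "Director General"
    if amount_usd <= officer_limit_usd:
        return "Procurement officer (within preset authority)"
    return "Next-level manager"  # hypothetical escalation step, not described in the report

# Example: a $35,000 sole-source contract requires Director General approval.
print(required_approval(35_000, sole_source=True, officer_limit_usd=25_000))
```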
However, leading practices suggest that the MFO could better use independent input and oversight over external audits. The Director General selects and receives the reports of the external auditor. In addition, he can change MFO operations, policies, and procedures without review, consent, or approval from an oversight or senior management board. We previously reported that this level of authority is unique among international organizations, noting that other international organizations have an independent governing body above the chief executive to oversee and approve operations and finances. COSO internal control standards note that an effective internal control environment could depend in part on the attention and direction provided by oversight groups. These groups, such as an active and effective board of directors or audit committee, could enhance the audit function through their various review duties. Our standards for internal controls in the federal government similarly note the importance of independent audit committees or senior management councils as part of effective monitoring and audit quality assurance. MFO and some State officials stated that there is no need for an oversight board to provide this extra degree of assurance. They note that our concern about the Director General’s autonomy—and the potential for abuse of authority raised in our 1995 report—has been mitigated by the external auditor’s reviews of management compensation and internal controls, as well as steps the MFO has taken to improve financial accountability and strengthen internal controls. Moreover, these officials stated that the organization is too small to employ a full-time independent inspector general. Finally, Israeli and Egyptian officials said that their respective governments are satisfied with the degree of oversight that they exercise through formal annual meetings, informal daily contacts, and review of MFO financial reports. However, officials from State’s OIG and NEA acknowledge that a State oversight board could help ensure that the scope of work for audits are set independently from MFO management direction. Neither State nor its OIG has reviewed the scope of these external audits. The Inspector General concluded that, while State can only advise the MFO on these matters, the board is important because U.S. confidence in the integrity of the MFO is crucial to its continued support for the force. The MFO has maintained a flat budget of about $51 million for the past 9 years, but it faces a number of challenges that will make it difficult to continue operating within its current budget. In particular, the MFO must address the issue of replacing its antiquated fleet of helicopters by fiscal year 2006. DOD projects that replacing the fleet could cost about $18 million. As a result of this and other pressures on the budget, the costs of supporting the MFO are likely to increase if the MFO maintains its current level of operations. However, Israeli and Egyptian officials stated that their governments do not support increases in their contributions, and U.S. and MFO efforts to obtain support from other contributors have not succeeded. U.S. officials have yet to make a decision about increasing U.S. support to the MFO or adjusting its current cost-sharing arrangements with the MFO. In addition, the U.S. Army, State, and MFO officials have yet to agree on who should pay the increased costs associated with changes in the composition and pay scales of U.S. troops under current arrangements. 
MFO financial reports show that the organization has kept its budget at about $51 million between fiscal years 1995 and 2003. Contributions to MFO's annual budget are paid by all parties in U.S. dollars. We reviewed MFO's budget from fiscal years 1995 through 2002. We found that, when adjusted using a U.S. dollar inflation rate, MFO's budget declined 12 percent between fiscal years 1995 and 2002 (see fig. 4). We also estimated the MFO's budget in constant international dollars because MFO purchases goods and services in countries such as Egypt, Israel, and the United States, where the U.S. dollar has different purchasing power. Because similar goods are less expensive in dollar terms when purchased in Israel and Egypt than in the United States, the purchasing power of MFO's budget was significantly greater when measured in constant 1995 international dollars. This figure was $72 million in fiscal year 1995 and $69 million in fiscal year 2002. However, the MFO budget was $51 million in fiscal year 1995 and $45 million in fiscal year 2002 when measured in fiscal year 1995 dollars. Moreover, we found that, between fiscal years 1995 and 2002, the MFO budget declined only about 5 percent using constant 1995 international dollars as compared with the 12 percent decline in fiscal year 1995 U.S. dollars. This decline was partially offset because MFO was able to reduce the economic impact of U.S. dollar inflation by shifting more of its purchases to Egypt and Israel during this period. MFO increased its purchases in Egypt and Israel from 43 percent of its budget in 1995 to 54 percent in 2002 as measured in nominal U.S. dollars. When measured in international dollars, however, goods and services purchased from those two countries increased from an estimated 60 percent of the MFO budget in 1995 to almost 70 percent in 2002 (see app. IV for details on calculating the MFO budget in international dollars; a simplified sketch of the adjustment appears below, after the discussion of helicopter replacement). MFO has attained cost savings in recent years through better management oversight and reduction of inventory costs. As mentioned previously, the adoption of a commercial business activity tracking software package and the hiring of a management review officer in 2001 led to greater efficiencies in logistics and facilities management, vehicle maintenance, personnel, finance, and contracting. As a result, according to a senior MFO official, recommendations of the management review officer contributed to almost $1.7 million in savings. Moreover, according to a senior MFO official, more effective tracking of freight costs and services has contributed to a 46 percent reduction in total storage and freight costs between fiscal years 2002 and 2003, or a savings of $265,000. Furthermore, MFO's projects to connect its two camps in the Sinai to the commercial Egyptian power grid are projected to save about $825,000 a year on electricity costs once the North Camp project is completed in 2004. One of the key cost issues for the immediate future is the replacement of aging UH-1H Huey helicopters. The U.S. Army provides the MFO with an aviation company of 10 UH-1H helicopters to perform various mission-related tasks. As of December 2003, the unit had about 97 associated Army personnel. According to DOD officials, U.S. Army plans call for the retirement of the Army's entire UH-1H helicopter fleet by fiscal year 2006.
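To illustrate the budget comparison discussed above, the following is a minimal sketch of how a nominal dollar budget can be restated in constant base-year dollars and in international (purchasing-power-parity) dollars. The budget level and the Egypt/Israel spending shares come from the text; the U.S. price index and PPP factors are illustrative placeholders, not the figures underlying figure 4 or appendix IV, so the output will not exactly match the report's numbers.

```python
# Restating a nominal budget in constant 1995 U.S. dollars and in constant 1995
# international dollars. The price index and PPP factors below are placeholders,
# so the results are illustrative only.

nominal_budget = {1995: 51_000_000, 2002: 51_000_000}   # roughly flat in nominal terms
egypt_israel_share = {1995: 0.43, 2002: 0.54}           # share of purchases made in Egypt and Israel
us_price_index = {1995: 1.00, 2002: 1.15}               # hypothetical cumulative U.S. inflation since 1995
ppp_factor = {"US": 1.0, "Egypt/Israel": 2.0}           # hypothetical purchasing power per dollar spent abroad

for year, budget in nominal_budget.items():
    constant_1995_usd = budget / us_price_index[year]   # deflate to constant 1995 U.S. dollars
    shares = {"US": 1 - egypt_israel_share[year], "Egypt/Israel": egypt_israel_share[year]}
    constant_intl = sum(budget * share * ppp_factor[place] for place, share in shares.items()) / us_price_index[year]
    print(f"FY{year}: {constant_1995_usd / 1e6:.0f}M constant 1995 dollars, "
          f"{constant_intl / 1e6:.0f}M constant 1995 international dollars")
```

Using a single PPP factor and a single U.S. price index is a simplification of the fuller adjustment detailed in appendix IV of the report.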
Army officials stated that DOD has considered various options to replace the MFO helicopter fleet and is waiting for the Secretary of Defense's decision on this matter. First, the Army is considering outsourcing its MFO aviation unit to a private contractor. This option would reduce U.S. military personnel participation in the MFO, but preliminary DOD estimates indicate that it would cost about $18 million in the first year and $13 million annually thereafter. Second, according to U.S. Army officials, the Army is considering replacing the MFO Hueys with eight UH-60 Black Hawk helicopters (see fig. 5). These officials stated that MFO prefers the outsourcing option because there would be no need to upgrade hangar facilities and other infrastructure to support the Black Hawks, thereby limiting its financial obligation. Officials from Israel and Egypt stated that they would leave the decision to the United States. They do not, however, want to incur additional financial obligations. The need to replace aging infrastructure and fund new capital improvement projects will also require additional funding. According to U.S. military officers in the Sinai, the North Camp accommodations for the soldiers will need to be replaced over the next 2 to 5 years. A senior MFO official stated that the MFO has begun to consider replacing some of these accommodations and will be exploring several options in the near term. However, no plan has been finalized, and the official did not have cost estimates to provide as of March 2004. As part of U.S. efforts to reduce troop deployments throughout the world to better meet the demands of the war on terror—and the cost of these deployments—the United States has tried to obtain troop and financial contributions from other nations to reduce its MFO obligation, according to U.S. officials. To date, these efforts have not been successful. In 2003, the Department of State requested military contributions from more than 20 countries that would enable the United States to draw down its forces. Five countries responded favorably, but only an offer by Uruguay to send additional transportation personnel to replace a U.S. Army transport company was considered feasible by the MFO. The increased Uruguayan deployment in July 2003 allowed the Army to draw down its MFO contingent by 74 troops. U.S. officials also requested financial contributions as part of this request, but other countries declined to provide this support. U.S. attempts to obtain increased financial contributions from Israel and Egypt have also not been successful. In addition to the annual U.S. financial contribution to the MFO of about $16 million, the United States incurs an expense for deploying several hundred troops to the MFO that averaged about $45 million annually from fiscal years 1995 through 2003. The cost of supplying U.S. troops to MFO has risen since fiscal year 1999, even though the number of U.S. troops has declined (see figs. 6 and 7). The increase is due to rises in salaries and in the amount of special pay provided to U.S. troops. The MFO agreed to compensate the U.S. Army for special pay categories and other allowances incurred when U.S. troops are deployed to the Sinai. However, in recent years, the U.S. Army has raised the rates for some cost categories and has created additional cost categories that did not exist at the time of the initial or revised cost-sharing arrangement.
Currently, Army disagrees with State and MFO over who should pay these additional costs. The increased expense for supplying the MFO with U.S. troops is due primarily to a rise in troop salaries, which are paid by the Army, and changes in special pay categories such as foreign duty pay and family separation pay, which are partly paid for by the MFO. For example, salaries have increased because beginning in 2002, National Guard troops have been deployed instead of active duty soldiers. National Guard troops tend to have been in grade longer than active duty soldiers and are consequently paid more. The U.S. Army pays for the increases in troop salaries. (See app. V for details on the cost of U.S. participation in MFO between fiscal years 1995 and 2003.) The MFO and the United States agreed to share the costs of providing U.S. troops to the MFO in 1982 and revised these arrangements in 1994 and 1998. Under these agreements, the Army agreed to credit the MFO for the costs these troops would have normally incurred had they remained in the United States, including food and lodging, base support, and operations and maintenance costs. The MFO agreed to pay some of the additional costs incurred by the deployment of U.S. troops to the Sinai, including special pay categories and other allowances. In the revised 1998 arrangement, the U.S. Army and the MFO did not reach specific agreement on how Imminent Danger Pay would be shared. While the agreement increased rates for other special pay categories, these rates were less than those established in U.S. law. Army officials believe that MFO should pay these increased costs for supplying U.S. troops; however, MFO and State officials disagree with this position. According to an Army official, the Army will seek MFO reimbursements for special pay categories totaling $3.3 million for fiscal year 2004; an MFO official stated in June 2004 that the MFO protested this action to the Army and Department of State. In addition, Army officials stated that MFO should pay a greater share of the costs for sustaining National Guard troops while they are on duty at the MFO. Army officials reduced by $1 million the credit it will provide to the MFO for sustaining the U.S. infantry battalion in fiscal year 2004 because the formula it used to calculate the credit was out-of-date since National Guard battalions have been sent to the MFO in place of active duty units in recent years. In May 2004, U.S. Army and State officials met with MFO officials to discuss differences but did not present a unified U.S. government position on how the cost-sharing arrangement should be modified. The two parties to the treaty and the United States are satisfied that the MFO is effectively fulfilling its mission of helping to maintain peace between Egypt and Israel. MFO has maintained its peacekeeping operation with a multinational force in the Middle East, a troubled and unstable part of the world. The organization has modified several of its policies and practices to make them consistent with leading practices in financial management and personnel. There are, however, opportunities for the organization to further improve in these areas. MFO has made several changes to its operations even though its budget has been flat for the past 9 years. The organization has benefited greatly because it has increased the amount of goods and services purchased in Israel and Egypt where the purchasing power of the U.S. dollar had increased during that period. 
Despite these changes, MFO contributors may face increased budgetary challenges due to the possible replacement of MFO’s helicopter fleet. State is the organization charged with overseeing U.S. participation in the MFO and recruits State employees to fill key MFO positions. Nevertheless, State has not provided employees who possess the expertise to carry out many of its financial oversight responsibilities. In addition, MFO raised concerns about the leadership capabilities of some of the staff whom State recruited for the chief civilian observer post. Finally, since the MFO does not have an external oversight board, as do many international organizations, effective State oversight of MFO and agreement between the United States and the MFO on cost-sharing arrangements is essential to ensure that the cost of U.S. troop participation is equitably shared. While NEA has begun to address some of the issues that are stated below, it has not established timelines for their resolution. To promote improved oversight of the MFO and ensure that NEA redresses these issues, we recommend that the Secretary of State take the following four actions: resolve the recurring concern of finding qualified candidates for the chief of the civilian observer unit; ensure that staff with accounting expertise are available to carry out NEA’s financial oversight responsibilities for MFO and, if necessary, review the terms of MFO’s external audits to ensure that they are appropriate; direct the MFO management advisory board to monitor and document NEA’s compliance with its guidelines for overseeing the MFO; and work with Army officials to reconcile differences between Army and State views about the current MFO cost-sharing arrangements. The Department of State and MFO provided technical and written comments on a draft of this report (see apps. VI and VII). The Department of Defense provided oral comments and generally agreed with our findings. It also provided technical comments that we incorporated where appropriate. The Department of State agreed with three of the four recommendations and did not respond to one of the recommendations. State agreed with our conclusion that it had experienced problems in consistently recruiting chief observers with the necessary leadership skills and stated that the new State MFO Management Advisory Board is considering measures to encourage highly qualified State employees to fill the chief observer position. State agreed with our recommendation that staff with accounting expertise carry out NEA’s financial oversight responsibilities for MFO. However, State believes that the current NEA oversight regime provides the assurances necessary and its limited resources do not allow hiring additional accounting personnel to evaluate MFO’s financial practices. As a result, State plans to ask the OIG to periodically evaluate MFO’s accounting and financial practices. We do not agree that the current oversight regime provides the assurances necessary regarding MFO’s finances. We found that NEA did not perform several aspects of MFO financial management oversight—such as evaluating MFO financial practices— because of a lack of expertise among NEA staff. We agree, however, that having the OIG periodically review MFO accounting and financial practices is sufficient. Finally, State also agreed with our recommendation to work with Army to reconcile differences about current MFO cost sharing arrangements. 
State was not responsive to our recommendation to direct the MFO Management Advisory Board to monitor and document NEA's compliance with its guidelines for overseeing MFO. State responded that it plans to supplement the annual report to Congress that describes its MFO oversight activities with quarterly reports to the newly formed advisory board. The OIG recommended that NEA establish an advisory board because it found that while NEA policy oversight was strong, its management and personnel oversight were not as satisfactory. While the board works to define its authority and responsibilities, it should ensure that NEA exercises more concerted oversight of MFO activities by complying with NEA guidelines and documenting its efforts for overseeing MFO. The MFO generally agreed with the report's findings. The MFO welcomed the report's recommendations for State to improve the recruiting process for the chief observer and for the U.S. government to develop a unified position regarding the Army's claims for increased payments by the MFO. MFO also stated that it would consider our report's findings regarding additional outside mediation or review mechanisms for complaints involving discrimination and sexual harassment. It also noted that it will consider our findings of a perceived gender imbalance in the MFO workforce. The MFO took exception to our finding that, with few exceptions, MFO employees tend to stay in the same positions for which they were contracted. They stated that six headquarters employees had been promoted or transferred from other positions. However, the MFO personnel manual states that there is not a career path for employees due to the temporary nature of the organization. Moreover, MFO does not have systems in place that establish standard employment grades for its positions or requirements for competitive promotion opportunities, nor does it advertise opportunities for promotion. We interviewed several long-serving staff in the field who stated that opportunities for advancement were not available and that they have remained in the position for which they were hired. Finally, MFO accepts that there are opportunities to improve its human resource management but noted that the adoption of U.S. government or U.N. human resource practices may entail significant costs and overhead for a small organization. We agree that organizations must be careful to consider their unique characteristics and circumstances when considering the applicability of the human resources practices that we have identified in appendix III. MFO also disagreed with the factual accuracy of one of the numbers in appendix II. We made changes to reflect MFO corrections. We are sending copies of this report to other interested Members of Congress. We are also providing copies of this report to the Secretary of State, the Secretary of Defense, and the Director General of the Multinational Force and Observers. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov, or Phyllis Anderson at (202) 512-7364 or andersonp@gao.gov. In addition to the persons named above, B. Patrick Hickey, Lynn Cothern, Elizabeth Guran, and Bruce Kutnick made key contributions to this report. We (1) assessed the Department of State's oversight responsibilities for U.S. 
participation in the MFO, (2) reviewed MFO’s personnel policies and practices, (3) examined MFO’s financial management and accountability, and (4) reviewed emerging budgetary challenges. We focused our audit work at MFO and State on activities and transactions starting in 1996 through 2004, the period subsequent to the prior GAO report. We visited MFO offices in Rome, Cairo, and Tel Aviv and force installations in the Sinai Peninsula. We also met with the Israeli and Egyptian Ministries of Foreign Affairs and Defense in Jerusalem and Cairo. To assess the Department of State’s oversight responsibilities, we reviewed the oversight guidelines developed by the Office of Regional Affairs of State’s Bureau of Near Eastern Affairs (NEA/RA) and supporting documentation including State’s Annual Reports to Congress, cables, MFO Director General Annual Reports for the Trilateral Conferences of Major Fund Contributors, MFO external auditors’ financial statement audit and internal control reports and turnover statistics of staff working in the civilian observer unit. We could not determine the full extent of State’s efforts because it did not document the nature, quality, and range of its oversight activities. We also met with State/NEA officials responsible for overseeing U.S. participation in the MFO to discuss the frequency, nature, and extent of State contact with MFO. We discussed the views of the Egyptian and Israeli governments on the MFO’s performance with military and foreign affairs officials from both countries. We interviewed NEA and MFO officials and current and former members of the Civilian Observer Unit to obtain an understanding of State’s recruiting efforts and interaction with the COU. We met with officials from State’s Office of Inspector General (OIG) to discuss their inspection of NEA, and we also reviewed relevant OIG reports. In addition, we assessed the status of State’s compliance with prior GAO recommendations. To review MFO’s personnel management system, we examined MFO personnel regulations, internal reports and briefings that described personnel policy changes, personnel statistics, performance appraisal forms, and other documentation on the organization’s personnel practices. We also examined leading human capital management policies and practices of public organizations to determine if MFO personnel regulations and policies relating to employee expectation setting, performance appraisals, employee grievance processes, alternative dispute resolution mechanisms, and sexual harassment policies followed the spirit of leading practices. We interviewed MFO officials, members of the civilian observer unit, MFO staff, and NEA officials to obtain their views on the organization’s personnel system. To examine MFO’s financial management and accountability, we reviewed the external auditors’ financial statement audits and internal control reports, other special reviews performed by the external auditor, and reports completed by State’s OIG. We also reviewed MFO management review officer’s reports, MFO financial regulations, and documentation on MFO’s recently installed financial management system. We discussed the scope and nature of the management review officer’s position and recent work with MFO officials. We interviewed NEA, DOD, MFO, Israeli, and Egyptian officials to determine their views on MFO’s financial management and the degree of accountability of the Director General. 
To report on some of the potential budgetary challenges MFO may face, we examined budget data and supporting documentation for fiscal years 1995 through 2003 provided by MFO, NEA, and the U.S. Army’s Office of Assistant Secretary of the Army for Financial Management and Comptroller. We discussed with DOD, State, and MFO officials trend data on costs and estimates for substituting U.S. Army UH-60 Black Hawk helicopters or a private contractor’s helicopter unit for the current MFO force of UH-1H Huey helicopters. We did not verify the accuracy or completeness of the estimates or verify the accuracy of the budgetary savings MFO officials associated with particular cost saving initiatives. To assess the reliability of the data on the costs associated with U.S. participation in the MFO, we (1) interviewed State, Army, and MFO officials about the sources of their data and the means used to calculate costs, (2) reviewed MFO’s annual financial reports and State’s annual report to Congress on the MFO, (3) traced U.S. Army’s reported costs for its contributions to the MFO back to the source documents, (4) traced the Army’s calculation of the costs associated with providing salaries to the soldiers stationed with the MFO—these salary costs constitute over 80 percent of the total costs of the U.S. Army contribution to the MFO– back to the DOD personnel composite standard pay and reimbursement rates for fiscal years 1999 through 2003, (5) performed tests on the data provided by the U.S. Army regarding the cost of U.S. participation in the MFO between 1999 and 2003 to check for obvious errors or miscalculations, and (6) reviewed the report of the MFO’s independent external auditor on State’s contributions. However, we did not audit the data and are not expressing an opinion on them. We determined that the data were sufficiently reliable for the purpose of reporting the total costs of U.S. participation in the MFO. We conducted our review from September 2003 to May 2004 in accordance with generally accepted government auditing standards. MFO has a small and varied professional civilian workforce of 108 international and local national staff located in Rome, Cairo, and Tel Aviv. Contractors provide an additional 59 expatriate support staff and 454 local workers. Eight of the 13 management-level employees are U.S. citizens. The international staff, including 14 U.S. citizens, support and direct the operations of 1,685 peacekeeping troops from 11 countries with unit or individual tours of duty varying between about 2 months and 1 year. A further 15 U.S. citizens serve in the civilian observer unit (COU): about half the observers, including the chief observer, temporarily resign from the State Department to fulfill 1- to 2-year contract commitments; the other half are civilian contractors, usually recruited from retired U.S. military personnel serving under renewable 2-year contracts. Table 1 provides details on MFO personnel locations, types, and numbers. MFO working conditions present challenges for management and staff. MFO workers have limited prospects for advancement or job mobility because the organization views itself as having a temporary mission. The international workforce has decreased by about 62 percent since 1982. With few exceptions, MFO employees tend to stay in the same positions for which they were contracted and its many long-serving workers in administrative positions lack opportunities to progress to higher positions. 
MFO managers stated that while MFO’s pay scales and other benefits help it successfully compete for staff with oil companies and other commercial international organizations in the region, it is challenging to find civilian employees with the ability to work successfully in the austere military atmosphere and isolated living environment in the Sinai. The main camp at El Gorah in the northern part of the Sinai is in a sparsely populated area with few amenities outside the camp. Only 17 military and civilian positions in the Sinai allow for accompaniment by a spouse, and facilities for children are lacking. Visits by family members are also very limited. The force’s personnel system reflects the “temporary” nature of the MFO’s mission; most international contractors serve under initial 2-year contracts that can be renewed at the discretion of the Director General. According to MFO documents, employment with the MFO is not a career service and initial employment with the MFO does not carry any expectation of contract renewal or extension. Several long-term employees stated that there are limited job progression opportunities with the MFO. Heightened concerns about terrorism since September 11, 2001, and ongoing violence in areas under Israeli control has led to significantly restricted opportunities for travel off the bases. U.S. and international public organizations have found that strategic workforce planning is essential to (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. We have developed a strategic human capital management model based on leading practices to help U.S. and international public organizations assess their efforts to address the key challenges to developing a consistent and strategic approach to human capital management. We caution that agencies applying this model must be careful to recognize the unique characteristics and circumstances that make organizations different from one another and to consider the applicability of practices that have worked elsewhere to their own management practices. Our work has shown that the public organizations face four key human capital challenges that undermine agency efficiency. The model consists of four cornerstones designed to help public organizations address the challenges in the four areas—leadership; strategic human capital planning; acquiring, developing, and retaining talent; and results-oriented organizational culture. Each cornerstone is associated with two critical factors that an agency’s approach to strategic human capital planning must address. Moreover, for each of the eight critical success factors, the model describes three levels of progress in an agency’s approach to strategic human capital planning: Level 1: The approach to human capital is largely compliance-based; the agency has yet to realize the value of managing human capital strategically to achieve results; existing human capital approaches have yet to be assessed in light of current and emerging agency needs. Level 2: The agency recognizes that people are a critical asset that must be managed strategically; new human capital policies, programs, and practices are being designed and implemented to support mission accomplishment. 
Level 3: The agency's human capital approaches contribute to improved agency performance; human capital considerations are fully integrated into strategic planning and day-to-day operations; the agency is continuously seeking ways to further improve its "people management" to achieve results. Figure 8 illustrates the critical success factors an organization in the second level of progress must address as it develops a strategic approach to managing its human capital. Among the factors shown in the figure is individual performance management that is integrated with organizational goals, with management held accountable for achieving these goals through the use of clearly defined, transparent, and consistently communicated performance expectations. The model rests on two principles: (1) people are assets whose value can be enhanced through investment, and, as with any investment, the goal is to maximize value while managing risk; and (2) an organization's human capital approaches should be designed, implemented, and assessed by the standard of how well they help the organization achieve results and pursue its mission. The MFO receives dollar contributions from Egypt, Israel, and the United States and purchases goods and services from Egypt, Israel, the United States, and other countries. The MFO's budget has remained flat at $51 million in nominal dollars between fiscal years 1995 and 2002, although it has declined about 12 percent over the same period when adjusted for U.S. inflation. However, MFO officials stated that the organization increased the purchasing power of its budget by shifting its purchases of goods and services away from the United States and other countries to relatively lower cost Egyptian and Israeli markets. As figure 9 demonstrates, MFO spending in Egypt and Israel rose from 43 to 54 percent of the budget between fiscal years 1995 and 2002. On average, the MFO spent 26 percent of its budget in Egypt and 23 percent in Israel in this period. By converting the MFO's budget into international dollars, we are able to better assess the impact of these shifts to the lower cost Egyptian and Israeli markets on the overall purchasing power of the MFO budget. As table 2 demonstrates, expressing the MFO budget in international dollars reveals that (1) the purchasing power of the budget—ranging between $72.3 million in fiscal year 1995 and $69 million in fiscal year 2002—was significantly higher than its nominal level of $51 million suggests; and (2) the real decline in the budget between fiscal years 1995 and 2002 was about 5 percent rather than 12 percent. Moreover, figure 10 demonstrates that MFO purchased a larger proportion of its goods and services in Egypt and Israel when calculated in international dollars than the nominal-dollar budget expenditures suggest—70 percent versus 54 percent for fiscal year 2002, for example. Also, it purchased a significantly greater percentage of its budget in Egypt than in Israel when calculated in international dollars during this period—46 percent versus 18 percent on average. An international dollar is equivalent to the amount of goods and services that 1 U.S. dollar can purchase in the United States. Two steps are required to convert an amount valued in local currency into international dollars: First, convert the local currency figure into U.S. dollars using the official exchange rate. Second, divide this dollar amount by the ratio of the country-specific purchasing power parity (PPP) conversion factor to the official exchange rate. 
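To make the two-step conversion concrete, the following is a minimal Python sketch of the calculation described above. It uses the Egypt fiscal year 1995 figures cited in the next paragraph (a PPP conversion factor of 1.259 pounds per international dollar, an official exchange rate of 3.392 pounds per U.S. dollar, and nominal MFO spending of $11.7 million); the function name and rounding are illustrative and are not drawn from the report.

```python
# Minimal sketch (not from the report) of the two-step international-dollar
# conversion described above; figures are the Egypt fiscal year 1995 examples.

def to_international_dollars(local_amount, exchange_rate, ppp_factor):
    """Convert an amount in local currency units into international dollars.

    exchange_rate: local currency units per U.S. dollar (official rate)
    ppp_factor:    local currency units per international dollar (PPP)
    """
    # Step 1: convert the local currency amount into nominal U.S. dollars.
    nominal_dollars = local_amount / exchange_rate
    # Step 2: divide by the ratio of the PPP conversion factor to the
    # official exchange rate.
    ratio = ppp_factor / exchange_rate
    return nominal_dollars / ratio


# Egypt, 1995: PPP factor of 1.259 pounds per international dollar and an
# official exchange rate of 3.392 pounds per U.S. dollar.
ratio = 1.259 / 3.392
print(round(ratio, 3))  # about 0.371

# Nominal spending of $11.7 million, expressed first in pounds, then converted.
spending_pounds = 11.7e6 * 3.392
intl_dollars = to_international_dollars(spending_pounds, exchange_rate=3.392, ppp_factor=1.259)
print(round(intl_dollars / 1e6, 1))  # about 31.5 million international dollars
```

Note that the two steps reduce algebraically to dividing the local currency amount by the PPP conversion factor itself. The report divides the nominal dollar figure by a fiscal-year ratio rather than the calendar-year ratio of about 0.371 computed here, which is why it reports roughly 32.2 million international dollars rather than the 31.5 million printed above.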
The PPP conversion factor converts into international dollars the cost of a basket of tradable and nontradable goods and services valued in local currency units (pounds in the case of Egypt and shekels in the case of Israel). The PPP conversion factor is the number of local currency units required to buy the same amount of goods and services in the domestic market that a U.S. dollar would buy in the United States. For example, a basket of goods that could be purchased in the United States for $1, equal by definition to 1 international dollar, could be bought in Egypt for 1.259 Egyptian pounds in 1995. Therefore, the PPP conversion factor is 1.259 Egyptian pounds per international dollar. In calendar year 1995, the official annual average exchange rate (based on monthly averages) was 3.392 Egyptian pounds per U.S. dollar. The ratio of the PPP conversion factor to the official exchange rate is 0.371. The nominal dollar amount the MFO spent in Egypt in fiscal year 1995 ($11.7 million as shown in table 2) is divided by the fiscal year ratio to compute the international dollar amount of 32.2 million. The United States agreed in 1981 to provide one third of the annual MFO budget and to provide a military contingent in support of the force. Annex II of the 1982 Exchange of Letters between the MFO Director General and the U.S. Secretary of State set the financial arrangements for the U.S. military contribution. The memoranda of understanding between the MFO and the Department of the Army established in 1994 and 1998 confirm additional understandings and procedures to supplement Annex II of the 1982 Exchange of Letters. Under the terms of these cost-sharing arrangements, U.S. costs to support the MFO have increased from a low of $55.8 million in fiscal year 1996 to $70.8 million in fiscal year 2003, as depicted in table 3—a 20 percent increase overall. While the State Department's contribution to the annual MFO budget has averaged about $16 million since fiscal year 1995, the number of U.S. military personnel participating in the MFO has declined 11 percent since then, as depicted in table 4 below. However, the cost of U.S. military participation has risen approximately 25 percent between fiscal years 1996 and 2003. As depicted in table 3, a number of factors account for this increase in the total cost of U.S. commitments to the MFO over this period: Total troop salaries increased 27 percent, despite a decrease in the U.S. troop contingent between fiscal years 1995 and 2002. These salaries constitute over 80 percent of the cost of the total Army contribution. In fiscal year 2002, the Army substituted Army National Guard forces for regular Army personnel, contributing to a salary cost increase of 12 percent between fiscal years 2002 and 2003. National Guard troops tend to be older and have been in grade longer than regular Army forces and are consequently paid more. Across-the-board salary increases for all military forces are another factor contributing to rising military costs, according to Army officials. Special pay and allowances paid to U.S. soldiers participating in the MFO mission have increased nearly ten-fold since fiscal year 1995, going from about $300,000 to $3 million in fiscal year 2003. Under the March 1982 Exchange of Letters between the MFO Director General and the Secretary of State, the MFO agreed to pay for certain special allowances to U.S. 
military personnel participating in the MFO mission, including a Family Separation Allowance for married personnel and Foreign Duty Pay for enlisted personnel. The coverage and rates of these existing allowances have been expanded since then to include both enlisted personnel and officers, and the allowances now cost about $250 per soldier per month. Moreover, in fiscal year 1997, DOD began providing imminent danger pay to military personnel serving in Israel and Egypt. The current rate amounts to $225 per soldier per month. In fiscal year 2003, the imminent danger pay allowance constituted 78 percent of total DOD special pays provided to military personnel participating in the MFO. Reimbursement payments from DOD to the MFO increased 13 percent. Currently, the U.S. Army provides the MFO a credit or "offset" for certain costs associated with the support of U.S. forces. These costs are those which would normally have been incurred by the U.S. government for food and lodging, base support, and operations and maintenance for such units when stationed in the United States. MFO purchases of supplies from DOD decreased by about 60 percent. In fiscal year 1995, the MFO reimbursed DOD for the purchase of supplies, equipment, and rations totaling $4.3 million. MFO has sought to replace DOD as a source of supply with lower cost local commercial vendors in recent years, limiting its purchases from DOD to medical supplies and certain helicopter parts. In fiscal year 2003, these purchases totaled about $1.8 million. The following are additional GAO comments on the Multinational Force and Observers letter dated July 9, 2004. 1. In its comments, MFO stated that the report does not mention a number of female employees occupying senior positions. Our analysis is based upon information obtained from an early 2004 report that lists all international and national staff by gender in management positions. There may have been some changes made to the data since that time. 2. In its comments, MFO stated that there were factual inaccuracies regarding the number and classification of civilian employees. GAO made changes based upon MFO technical comments and noted that these changes disagreed with data in the MFO 2004 Annual Report. MFO's annual report notes that there are 636 civilians, while the information provided to us from MFO totaled 650.
Since 1982, the Multinational Force and Observers (MFO) has monitored compliance with the security provisions of the Egyptian-Israeli Treaty of Peace. The United States, while not a party to the treaty, contributes 40 percent of the troops and a third of MFO's annual budget. All personnel in the MFO civilian observer unit (COU) are Americans. GAO (1) assessed State's oversight of the MFO, (2) reviewed MFO's personnel and financial management practices, and (3) reviewed MFO's emerging budget challenges and U.S. MFO cost sharing arrangements. The State Department has fulfilled some but not all of its operational and financial oversight responsibilities for MFO, but lack of documentation prevented us from determining the quality and extent of its efforts. State has not consistently recruited candidates suited for the leadership position of the MFO's civilian observer unit, which monitors and verifies the parties' compliance with the treaty. State also has not evaluated MFO's financial practices as required by State's guidelines because they lacked staff with expertise in this area. However, State recently formed an MFO management advisory board to improve its oversight of MFO operations. MFO has taken actions in recent years to improve its personnel system, financial accountability, and internal controls. For example, it has provided incentives to retain experienced staff and taken steps to standardize its performance appraisal system. It has received clean opinions on its annual financial statements and on special reviews of its internal controls. MFO has also controlled costs, reduced its military and civilian personnel levels, and kept its budget at $51 million since 1995, while meeting mission objectives and Treaty party expectations. MFO faces a number of personnel, management, and budgetary challenges. For example, leading practices suggest its employees' access to alternative dispute resolution mechanisms for discrimination complaints, and the gender imbalance in its workforce, could be issues of concern. Moreover, MFO lacks oversight from an audit committee or senior management review committee to ensure the independence of its external auditors. Finally, MFO's budget is likely to increase because of costs associated with replacing its antiquated helicopter fleet. U.S. and MFO efforts to obtain support from other contributors generally have not succeeded. Army, State, and MFO officials have yet to agree who should pay the increased costs associated with changes in the composition and pay scales of U.S. troops deployed at MFO.
You are an expert at summarizing long articles. Proceed to summarize the following text: Mr. Chairman and Members of the Subcommittee: I am pleased to have this opportunity to discuss the administrative redress system for federal employees. The current redress system grew out of the Civil Service Reform Act of 1978 (CSRA) and related legal and regulatory decisions that have occurred over the past 16 years. The purpose of the redress system is to uphold the merit system principles by ensuring that federal employees are protected against arbitrary agency actions and prohibited personnel practices, such as discrimination or retaliation for whistleblowing. Today, as more voices are heard calling for streamlining or consolidating the redress system, I would like to address the question of how well the redress system is working and whether, in its present form, it contributes to or detracts from the fair and efficient operation of the federal government. My comments reiterate views first expressed in testimony in November 1995, with some updating based on work GAO has done since then. I have three points to make: First, because of the complexity of the system and the variety of redress mechanisms it affords federal employees, it is inefficient, expensive, and time-consuming. Second, because the system is so strongly protective of the redress rights of individual workers, it is vulnerable to employees who would take undue advantage of these protections. Its protracted processes and requirements divert managers from more productive activities and inhibit some of them from taking legitimate actions in response to performance or conduct problems. Further, the demands of the system put pressure on employees and agencies alike to settle cases—regardless of their merits—to avoid potential costs. Third, alternatives to the current redress system do exist—in the private sector and in some parts of the federal government. These alternatives, including a variety of less formal approaches collectively known as alternative dispute resolution, may be worth further study as Congress considers modifying the federal employee redress system. A balance must be struck between individual employee protections and the authority of managers to operate in a responsible fashion. To the extent that the federal government's administrative redress system is tilted toward employee protection at the expense of the effective management of the nation's business, it deserves congressional attention. My observations today are based on a body of work examining how the redress system operates and how agencies deal with workplace disputes. We interviewed officials at the adjudicatory agencies, the Office of Personnel Management (OPM), the now defunct Administrative Conference of the United States, and a number of executive branch and legislative agencies; analyzed data on case processing provided by the adjudicatory and other agencies; and reviewed the redress system's underlying legislation and other pertinent literature. In addition, my observations draw upon a symposium GAO held in April 1995 at the request of Senator William V. Roth, Jr., then Chairman of the Senate Governmental Affairs Committee, with participants from the governments of Canada, New Zealand, and Australia, as well as private sector employers such as Xerox, Federal Express, and IBM. The proceedings added to our awareness and understanding of current employment practices inside and outside the federal government. The Equal Employment Opportunity Commission (EEOC) reviews agencies' final decisions on discrimination complaints. 
The Office of Special Counsel (OSC) investigates employee complaints of prohibited personnel practices—in particular, retaliation for whistleblowing. For employees who belong to collective bargaining units and have their individual grievances arbitrated, the Federal Labor Relations Authority (FLRA) reviews the arbitrators' decisions. While the boundaries of the appellate agencies may appear to be neatly drawn, in practice these agencies form a tangled scheme. One reason is that a given case may be brought before more than one of the agencies—a circumstance that adds time-consuming steps to the redress process and may result in the adjudicatory agencies reviewing each other's decisions. Matters are further complicated by the fact that each of the adjudicatory agencies has its own procedures and its own body of case law. All but OSC offer federal employees the opportunity for hearings, but all vary in the degree to which they can require the participation of witnesses or the production of evidence. They also vary in their authority to order corrective actions and enforce their decisions. What's more, the law provides for further review of these agencies' decisions—or, in the case of discrimination claims, even de novo trials—in the federal courts. Beginning in the employing agency, proceeding through one or more of the adjudicatory bodies, and then carried to conclusion in court, a single case can take years. In mixed cases, which involve both an action appealable to the Merit Systems Protection Board (MSPB) and a claim of discrimination, MSPB and EEOC may review the same appeal; when the two agencies cannot agree, something that has occurred only three times in 16 years, a three-member Special Panel is convened to reach a determination. At this point, the employee who is still unsatisfied with the outcome can file a civil action in U.S. district court, where the case can begin again with a de novo trial. The complexity of mixed cases has attracted a lot of attention. But two facts about mixed cases are particularly worth noting. First, few mixed cases coming before MSPB result in a finding of discrimination. Second, when EEOC reviews MSPB's decisions in mixed cases, it almost always agrees with MSPB. In fiscal year 1994, for example, MSPB decided roughly 2,000 mixed case appeals. It found that discrimination had occurred in just eight. During the same year, EEOC ruled on appellants' appeals of MSPB's findings of nondiscrimination in 200 cases. EEOC disagreed with MSPB's findings in just three. In each instance, MSPB adopted EEOC's determination. One result of this sort of jurisdictional overlap and duplication is simple inefficiency. A mixed case appellant can—at no additional risk—have two agencies review his or her appeal. These agencies rarely differ in their determinations, but an employee has little to lose in asking both agencies to review his or her case. Just how much this multilevel, multiagency redress system costs is hard to ascertain. We know that in fiscal year 1994—the last year for which data on all four agencies are available—the share of the budgets of the four agencies that was devoted to individual federal employees' appeals and complaints totaled $54.2 million (see table 1). We also know that in fiscal year 1994, employing agencies reported spending almost $34 million investigating discrimination complaints. In addition, over $7 million was awarded for complainants' legal fees and costs in discrimination cases alone. 
But many of the other costs cannot be pinned down, such as the direct costs accrued by employing agencies while participating in the appeals process, arbitration costs, the various costs tied to lost productivity in the workplace, employees' unreimbursed legal fees, and court costs. All these costs either go unreported or are impossible to clearly define and measure. Individual cases can take a long time to resolve—especially if they involve claims of discrimination. Among discrimination cases closed during fiscal year 1994 for which there was a hearing before an EEOC administrative judge and an appeal of an agency final decision to the Commission itself, the average time from the filing of the complaint with the employing agency to the Commission's decision on the appeal was over 800 days. One reason it takes so long to adjudicate a discrimination case is that the number of discrimination complaints has been climbing rapidly. As shown in table 2, from fiscal years 1991 to 1994, the number of discrimination complaints filed increased by 39 percent; the number of requests for a hearing before an EEOC administrative judge increased by about 86 percent; and the number of appeals to EEOC of agency final decisions increased by 42 percent. Meanwhile, the backlog of requests for EEOC hearings increased by 65 percent, and the inventory of appeals to EEOC of agency final decisions tripled. One reason Congress placed employee redress responsibilities in several independent agencies was to ensure that each federal employee's appeal, depending on the specifics of the case, would be heard by officials with the broadest experience and expertise in the area. In its emphasis on fairness to all employees, however, the redress system may be allowing some employees to abuse its processes and may be creating an atmosphere in which managing the federal workforce is unnecessarily difficult. As things stand today, federal workers have substantially greater employment protections than do private sector employees. While most large or medium-size companies have multistep administrative procedures through which their employees can appeal adverse actions, these workers cannot, in general, appeal the outcome to an independent agency. Compared with federal employees, their rights to take their employer to court are also limited. And even when private sector workers complain of discrimination to EEOC, they receive less comprehensive treatment than do executive branch federal workers, who, unlike their private sector counterparts, are entitled to evidentiary hearings before an EEOC administrative judge, as well as a trial in U.S. district court. Employees who complain to OSC of retaliation for whistleblowing and are unsatisfied with the outcome of its investigation can then proceed with an appeal to MSPB as if no investigation had ever been made. The OSC investigation, therefore, is not just cost-free to the employee, but risk-free as well. Discrimination is another kind of complaint to which the redress system gives fuller or more extensive protection than other complaints or appeals. Clearly, more administrative redress is available to employees who claim they have been discriminated against than to those who appeal personnel actions to MSPB. For example, workers who claim discrimination before EEOC—unlike those appealing a firing, lengthy suspension, or downgrade to MSPB—can file a claim even though no particular administrative action has been taken against them. 
Further, those who claim discrimination are entitled, at no cost, to an investigation of the matter by their agencies, the results of which are made part of the record. Further still, if they are unsatisfied after EEOC has heard their cases and any subsequent appeals, they can then go to U.S. district court for a de novo trial, which means that the outcome of the entire administrative redress process is set aside, and the case is tried all over again. What are the implications of the extensive opportunities for redress provided to federal workers? Federal employees file workplace discrimination complaints at more than 5 times the per capita rate of private sector workers. And while some 47 percent of discrimination complaints in the private sector involve the most serious adverse action—termination—only 18 percent of discrimination complaints among federal workers are related to firings. Another phenomenon may be worth noting. Officials at EEOC and elsewhere have said that the growth since 1991 in the number of discrimination complaints by federal employees is probably an outgrowth of passage of the Civil Rights Act of 1991, which raised the stakes in discrimination cases by allowing complainants to receive compensatory damages of up to $300,000 and a jury trial in U.S. district court. We were told that some employees file complaints simply to obtain assistance in resolving a workplace dispute. We were also told that some file frivolous complaints to harass supervisors or to game the system. All sorts of matters become the subject of discrimination complaints, and they are accorded due process. Here are two examples, drawn from the newsletter Federal Human Resources Week: A male employee filed a formal complaint when a female co-worker with whom he had formerly had a romantic relationship "harassed him by pointedly ignoring him and moving away from him when they had occasion to come in contact." Another claimed that he was fired in part on the basis of his national origin: "American-Kentuckian." We are not in a position to judge the legitimacy of these complaints. We note, however, that EEOC's rulings on the complainants' appeals affirmed the agency's position that there was no discrimination. We would also make the point that federal officials spent their time—and the taxpayers' money—on these cases. At the employing agency level, the prospect of having to deal with lengthy and complex procedures can affect the willingness of managers to deal with conduct and performance issues. In 1991, we reported that over 40 percent of personnel officials, managers, and supervisors interviewed said that the potential for an employee using the appeal or arbitration process would affect a manager's or supervisor's willingness to pursue a performance action. At the adjudicatory agency level, one effect of complex and time-consuming redress procedures has been to spur the trend toward settlements. About two-thirds of the adverse action and poor performance cases at MSPB were settled in 1994 instead of being decided on their merits. Similarly, during the same period, about one-third of the discrimination complaints brought before EEOC were settled without a hearing. Employing agencies settle many more complaints before they ever get that far. The potential costs of continued litigation add to the inclination to settle. Federal officials, in deciding whether or not to settle, must weigh the cost of settling against the potential loss of more taxpayer dollars and the time and energy that would be diverted from the business of government. 
There is some concern that policies encouraging the contending parties to compromise on the issues may conflict with the mission of the adjudicatory agencies to support the merit principles and may set troublesome precedents or create ethical dilemmas for managers. Further, there is concern that settlements may be fundamentally counterproductive, especially in discrimination complaints, where settlement policies may in fact encourage the filing of frivolous complaints. At a time when Congress and the administration are considering opportunities for civil service reform, looking in particular to the private sector and elsewhere for alternatives to current civil service practices, organizations outside the executive branch of the federal government may be useful sources for ideas on reforming the administrative redress system. In most private sector organizations, final authority for decisions involving disciplinary actions rests with the president or chief executive officer. Some firms give that authority to the personnel or employee relations manager. But others have turned to some form of alternative dispute resolution (ADR), especially in discrimination complaints. Many firms use mediators to resolve these matters. Some firms use outside arbitrators or company ombudsmen. Still others employ committees or boards made up of employee representatives and/or supervisors to review or decide such actions. We have not studied the effectiveness of these private sector practices, but they may provide insight for dealing with redress issues in a fair but less rigidly legalistic fashion than that of the federal redress system. ADR approaches in the federal government include the use of mediation, dispute resolution boards, and ombudsmen. The use of ADR methods was called for under CSRA and underscored by the Administrative Dispute Resolution Act of 1990, the Civil Rights Act of 1991, and regulatory changes made at EEOC. Based not only on the fact that Congress has endorsed ADR in the past, but also that individual agencies have taken ADR initiatives and that MSPB and EEOC have explored their own initiatives, it is clear that the need for finding effective ADR methods is widely recognized in government. Our preliminary study of government ADR efforts, however, indicates that ADR is not widely practiced and that the ADR programs that are in place are, by and large, in their early stages. Most of these involve mediation, particularly to resolve allegations of discrimination before formal complaints are filed. Because ADR programs generally have not been around very long, the results of these efforts are sketchy, but some agencies claim that these programs have saved time and reduced costs. One example is the Walter Reed Army Medical Center's Early Dispute Resolution Program, which provides mediation services. From fiscal year 1993 to fiscal year 1995, the number of discrimination complaints at the medical center dropped from 50 to 27—a decrease that Walter Reed officials attribute to the Early Dispute Resolution Program. Moreover, data from the medical center show that since the program began in October 1994, 63 percent of the cases submitted for mediation have been resolved. Walter Reed officials said that the costs of investigating and adjudicating complaints have been lessened, as has the amount of productive time lost on the part of complainants and others involved in the cases. Other areas that may be worth studying are those segments of the civil service left partially or entirely uncovered by the current redress system. 
For example, while almost all federal employees can bring discrimination complaints to EEOC, employees in their probationary periods, temporary employees, unionized postal workers, Federal Bureau of Investigation (FBI) employees, and certain other employees generally cannot appeal adverse actions to MSPB. In addition, FBI employees, as well as certain other employees, are not covered by federal service labor relations legislation and therefore cannot form bargaining units or engage in collective bargaining. What are the implications of the varying levels of protection on the fairness with which these employees are treated? Are there lessons here that might be applied elsewhere in the civil service? Finally, it should be noted that legislative branch employees are treated differently from those in the executive branch. For example, under the Congressional Accountability Act of 1995, since January 1996 congressional employees with discrimination complaints have been required to choose between two redress alternatives, one administrative and one judicial. The administrative alternative allows employees to appeal to the Office of Compliance, with hearing results appealable to a five-member board. The board's decisions may then be appealed to the U.S. Court of Appeals for the Federal Circuit, which has a limited right of review. The other alternative is to bypass the administrative process and file suit in U.S. district court, with the opportunity to appeal the court's decision to the appropriate U.S. court of appeals. The intent of this arrangement is to avoid the opportunity for the "two bites of the apple"—one administrative, one judicial—currently offered executive branch employees. It is too early to tell if the act will accomplish its purpose in this area, but Congress may find that once in operation, the new system may be instructive for considering how best to provide employee redress. Today, in the face of tight budgets and a rapidly changing work environment, the civil service is undergoing renewed scrutiny by the administration and Congress. In the broadest sense, the goal of such scrutiny is to identify ways of making the civil service more effective and less costly in its service to the American people. With so many facets of the civil service under review—including compensation and benefits, performance management, and the retirement system—no area should be overlooked that offers the opportunity for improving the way the government operates. To the extent that the federal government's administrative redress system is tilted toward employee protections at the expense of the effective management of the nation's business, it deserves congressional attention. 
GAO discussed the administrative redress system for federal employees. GAO noted that: (1) the federal employee redress system is a complex and duplicative system that affords employees redress at three different levels; (2) the system is inefficient, expensive, and time-consuming because of its complexity and the variety of redress mechanisms available; (3) the system contains significant overlap, especially in mixed cases where two or more agencies review an appellant's case; (4) redress system costs are difficult to determine because many direct costs are not reported and indirect costs are not measurable; (5) the most time-consuming cases involve discrimination complaints, which take an average of over 800 days to resolve; (6) the federal redress system provides its employees with far greater opportunities for redress than do private-sector systems; (7) the federal system allows federal workers numerous appeals, evidentiary hearings, and district court trials; (8) federal workers file workplace discrimination complaints five times more often than do private-sector employees; (9) the current system is vulnerable to abuse and diverts managers' attention from more productive activities, inhibits managers from taking legitimate actions against poor performers, and pressures employees and agencies to settle cases to contain costs; and (10) alternative dispute resolution offers some promising approaches to handling workers' complaints, but these methods are underused and in the early stages of development.
You are an expert at summarizing long articles. Proceed to summarize the following text: As federal employees, USCP and other federal police officers are eligible to participate in one of the two federal retirement plans—the Civil Service Retirement System (CSRS) or the Federal Employees Retirement System (FERS). CSRS is available to employees entering federal service before 1984, while FERS is available to employees entering federal service on and after January 1, 1984. CSRS is a defined benefit plan—meaning that the employer promises a specified monthly benefit during retirement that is predetermined by a formula; in the case of CSRS, the benefit amount and eligibility depend on the employee's earnings history, tenure of service, and age. The defined benefit plan is funded by both employee and agency contributions as well as additional contributions from the U.S. Treasury. CSRS-covered employment is generally not considered covered employment for the purposes of Social Security; hence, CSRS-covered employees do not also receive Social Security benefits. FERS is a retirement plan that provides benefits from three different sources: a defined benefit plan, Social Security, and the Thrift Savings Plan (TSP). As with CSRS, the defined benefit portion of FERS is funded by both employee and agency contributions, as well as additional contributions from the U.S. Treasury. Both federal retirement systems provide different levels of benefits depending on certain characteristics of covered employees. For example, under statutory and regulatory retirement provisions, federal employees who meet the retirement-related definitions of a law enforcement officer (LEO) receive more generous retirement benefits under CSRS and FERS than non-LEO employees. Coverage under the CSRS and FERS LEO definitional criteria generally includes personnel whose duties have been determined by the employing agency through an administrative process to be primarily the investigation, apprehension, or detention of individuals suspected or convicted of offenses against the criminal laws of the United States. The FERS definition of a LEO is more restrictive than the CSRS LEO definition in that it expressly includes a rigorous duty standard, which provides that LEO positions must be sufficiently rigorous such that "employment opportunities should be limited to young and physically vigorous individuals." In general, neither the CSRS nor the FERS LEO definition has been interpreted by OPM to cover federal police officers. Implementing OPM regulations for CSRS and FERS provide that the respective LEO regulatory definitions, in general, do not include an employee whose primary duties involve maintaining order, protecting life and property, guarding against or inspecting for violations of law, or investigating persons other than those who are suspected or convicted of offenses against the criminal laws of the United States; these duties are akin to the responsibilities of federal police officers. Federal police officers might also be treated, for retirement purposes, as "law enforcement officers" (that is, granted LEO-like status) under two additional scenarios. First, over the years, certain other federal police forces whose duties have not been determined by their employing agency to meet the LEO definitional criteria under the administrative process have been explicitly added to the CSRS or FERS statutory definitions so that they are considered LEOs for retirement purposes. 
Second, certain other federal police forces whose duties have not been determined by their employing agency to be within the scope of the definitional criteria of a LEO or explicitly added by amending statutory LEO definitions, have been provided retirement benefits similar to that of LEOs directly through legislation. Generally, federal LEOs (and officers with LEO-like status) have a higher benefit accrual rate than most other federal employees, albeit over a shorter period of time due to the mandatory retirement age for LEOs. Officers in these categories also contribute 0.5 percent more for these benefits than most other federal employees contribute—7.5 percent of pay for CSRS and 1.3 percent of pay for FERS. As shown in table 1, under both CSRS and FERS, statutory provisions provide for a faster accruing defined benefit pension for LEO and LEO-like personnel than that provided for most other federal employees. Also under FERS, federal police officers receiving LEO-like defined benefits are typically eligible for the same early and enhanced pension benefits as LEOs. For example, LEOs receive FERS cost-of-living adjustments beginning at retirement, even if retirement is earlier than age 62, instead of at age 62 when most other FERS retirees become eligible for these adjustments. Under FERS, LEOs also qualify for an unreduced early retirement benefit and may retire at age 50 with a minimum of 20 years of qualifying service, or at any age with at least 25 years of qualifying service, which are also more generous than the corresponding provisions for most other FERS participants. LEOs are also subject to mandatory retirement at age 57 with 20 years of service. They are also eligible to receive the special FERS supplement upon retirement that mimics the Social Security retirement benefits earned during federal government service. FERS retirees continue to receive the supplement until they reach age 62 and become eligible to collect Social Security. Police forces statutorily granted LEO-like status also typically receive these same benefits. The standard Social Security benefits apply to all federal LEOs. In addition to varying retirement benefits, federal police forces may also operate under different compensation systems. Some federal police forces are covered by OPM’s General Schedule (GS) basic pay plan (i.e., standard basic pay plan). According to OPM, standard governmentwide basic pay systems, including the GS system, are established under title 5 of the United States Code and most LEOs and other employees with arrest authority are covered by standard basic pay systems. Under a standard basic pay plan, OPM generally sets the basic pay ranges (grades) and pay increases (steps) within each grade for the positions, and federal police forces use these grades and steps to compensate their employees. On the other hand, some federal police forces are covered under non-standard basic pay plans authorized under separate legislation. Generally, under non-standard basic pay plans, federal police forces are authorized to, among other things, provide basic pay rates different from those specified in a standard basic pay plan and thus have the ability to offer higher minimum entry-level salaries than those provided to police officers under a standard pay system. USCP has enhanced retirement benefits and a higher minimum entry- level salary than most other federal police forces GAO reviewed. 
Also, it reported having a wider variety of protective duties, such as routinely protecting members of Congress and buildings, and routinely using a variety of methods to carry out these duties, such as conducting entrance and exit screening and patrolling in vehicles, than most other police forces. However, USCP reported that its officers routinely engage in similar activities, such as intelligence operations, and have similar employment requirements for entry-level officers, such as being in good physical condition, as most other federal police forces. USCP and three other police forces—Park Police, Secret Service Uniformed Division, and Supreme Court Police—have enhanced retirement benefits, similar to those received by federal LEOs, where officers can retire after fewer years of service and their retirement annuities accrue faster than those of officers at the other six federal police forces GAO reviewed. Specifically, police officers within these four police forces are authorized under CSRS and FERS to retire at age 50 with a minimum of 20 years of qualifying service and are subject to a mandatory retirement age of 57, with some exceptions. In 1988, the Park Police and the Secret Service Uniformed Division, both of which had not been determined by OPM and their employing agencies to be covered by the LEO definition, were explicitly added by statute to the FERS definition of a LEO so that they are considered LEOs for retirement purposes. Committee report language accompanying the 1988 legislation noted that "although these individuals are commonly thought to be law enforcement officers, OPM says they do not meet the FERS definition of 'law enforcement officer' under section 8401(17) and thus do not qualify for FERS law enforcement officer benefits." The Committee report then provided that the 1988 legislation would ensure that these individuals would receive FERS law enforcement officer benefits. In comparison, rather than amending the statutory LEO definition, separate legislation in 1990 and 2000 provided the USCP and the Supreme Court Police, respectively, with enhanced retirement benefits similar to those received by LEOs (Pub. L. No. 101-428, 104 Stat. 928 (1990); Pub. L. No. 106-553, 114 Stat. 2762 (2000)). Committee report language accompanying the 2000 Supreme Court legislation, for example, explained that the new provision served to "bring the Supreme Court Police into parity with the retirement benefits provided to the United States Capitol Police and other federal law enforcement agencies." Federal police officers at the remaining six police forces in our review receive standard federal employee retirement benefits. USCP and the three other police forces with enhanced retirement benefits also operate under non-standard basic pay plans, which allow them to set pay rates that differ from those reported by the other federal police forces. With respect to USCP, for example, under its non-standard basic pay authority, the Capitol Police Board and the Chief of the Capitol Police set basic pay rates (both grades and steps) for USCP officers. Three other police forces (BEP Police, Pentagon Police, and Postal Security Force) with standard federal employee retirement benefits also operate non-standard basic pay plans, by statute, while the remaining three police forces—FBI Police, FEMA Police, and NIH Police—operate under the standard basic pay plans to compensate their officers. USCP and the three police forces with enhanced retirement benefits offered among the highest minimum entry-level salaries, ranging from $52,020 to $55,653, as shown in table 2. At $55,653, USCP and the Supreme Court Police offered the highest minimum entry-level salaries to their police officers. 
NIH Police and Postal Security Force offered the lowest minimum entry-level salaries among the 10 police forces, at $38,678 and $38,609, respectively. USCP reported routinely having a wider variety of duties than other federal police forces. These duties ranged from routinely protecting members of Congress to protecting buildings. For example, USCP officials stated that their main focus is protecting life and property, and thus, in addition to routinely protecting members of Congress, they also protect members' families throughout the entire United States, as authorized, as well as congressional buildings, parks, and thoroughfares. Conversely, the Postal Security Force reported having fewer duties, and the protective duties that it does have, including routinely protecting employees and buildings, are ones that all or most of the police forces, including the USCP, also have. Postal Security Force officials stated that their officers' primary duty is routinely protecting the United States Postal Service buildings and mail processing facilities. Figure 1 identifies the reported routine protective duties of USCP and the nine federal police forces we reviewed. In addition to the routine protective duties listed above, some of these federal police forces, including the USCP, have shared jurisdiction with other non-federal police forces. For example, Park Police officials said that they have a shared understanding with the states of Maryland and Virginia to investigate homicides in federal parks within these states. Officials from USCP stated that they have statutory authority for extended jurisdiction, which is shared with the Metropolitan Police Department of the District of Columbia (MPD). Additionally, BEP Police, FBI Police, and Pentagon Police officials stated that they have a memorandum of understanding (MOU) or cooperative agreement with MPD to patrol areas beyond their primary jurisdiction. For example, as shown in figure 2, as a result of the statutory authority for extended jurisdiction, USCP's jurisdiction extends several blocks beyond the grounds of the U.S. Capitol complex. Section 1202 of Pub. L. No. 112-74, 125 Stat. (2011) provided, in general, that to the extent to which the Director of the National Park Service has jurisdiction and control over such specified area, such jurisdiction and control is transferred to the Architect of the Capitol. In turn, under 2 U.S.C. § 1961, Capitol Police jurisdiction over United States Capitol Buildings and Grounds includes, among other things, property acquired in the District of Columbia by the Architect of the Capitol. Several forces, including the Park Police and Supreme Court Police, also reported routinely using other methods to carry out their duties, such as counter-surveillance, horse patrol, and standing post. All 10 of the police forces we reviewed reported routinely patrolling in vehicles and conducting entrance or exit screenings, and all except the NIH Police reported patrolling on foot. The NIH Police officials explained that their mission is protecting the NIH facility of about 347 acres, including biological-safety laboratories, and responding to emergency calls, and thus, officers generally do not stand post but are out at the facility patrolling in vehicles. USCP also reported that it routinely engages in a variety of activities similar to those reported by some of the nine federal police forces in carrying out its protective duties. 
For example, USCP and seven of the other nine police forces reported routinely conducting specialized activities such as Special Weapons and Tactics (SWAT), K-9, or Containment and Emergency Response Team (CERT) activities. Also, USCP and eight of the other nine forces reported routinely conducting traffic control and responding to suspicious activities, in particular, suspicious packages and people. USCP officials indicated that suspicious packages within the Capitol complex have typically been items such as unattended backpacks that have not contained hazardous devices such as bombs. Also, the Pentagon police, in noting that the Pentagon is still a likely target since the terrorist attacks of September 11, 2001, cited incidents such as a March 2010 attack when a gunman tried to shoot his way through the entrance of the building. In addition, USCP and the Pentagon Police reported routinely responding to chemical, biological, radiological, and nuclear (CBRN) or hazardous material (HAZMAT) threats. Figure 4 summarizes the routine activities reported by each police force. USCP and most of the federal police forces in our review generally have similar employment requirements for their entry-level police officers, including eligibility requirements and requirements for the hiring process. All of the police forces require applicants to be a U.S. citizen, in good physical condition, have a valid driver’s license, and have no criminal history in order to be eligible for employment. Also, most federal police forces require some college experience or prior related experience, with the exception of USCP and Secret Service Uniformed Division, which require their officers to have a high school diploma or General Equivalency Diploma (GED) certificate. However, officials at USCP and Secret Service Uniformed Division stated that they also receive applicants with college or law enforcement experience. In addition, Postal Security Force officials stated that their officers are not required to have a high school diploma or GED certificate; however, they are required to have been employed by the Postal Service for at least 1 year, and the position that they held does not have to be law enforcement related. Most of the police forces that offer enhanced retirement benefits also differ from the other police forces in that they have a maximum age for applicants and require applicants to have good character and leadership skills. Figure 5 provides the eligibility requirements for each federal police force in our review. Federal police forces generally have similar requirements for their hiring processes. For example, as shown in figure 6, each police force requires applicants to have an interview, medical examination, background investigation, and training either pre or post hiring, primarily at the Federal Law Enforcement Training Center (FLETC). Furthermore, most of the federal police forces, including USCP, require applicants to complete a drug test and a written examination. USCP differs from the majority of police forces, however, in that it does not require its officers to obtain a security clearance, but it does require a psychological evaluation, polygraph test, and a complete background investigation. In addition to enhanced retirement benefits and a higher minimum entry- level salary, USCP has experienced lower attrition than six of the other nine federal police forces, and USCP reported that attrition was not a problem from fiscal years 2005 through 2010. 
Also, USCP and some of the other nine federal police forces reported that officers who voluntarily separate for reasons other than retirement do so for personal reasons or career advancement; few forces cited the desire for greater retirement benefits or better salary as a reason why officers leave. While USCP and seven of the other nine police forces said that human capital flexibilities were important tools for recruiting and retaining police officers, their use generally depends on need or budget, among other factors. From fiscal years 2005 through 2010, USCP’s average attrition rate was 6.5 percent compared to the other nine federal police forces, which ranged from 3.5 percent to just under 14 percent. Three of the other nine police forces—BEP Police, NIH Police and Park Police—had lower attrition rates than USCP, while the remaining six forces had higher attrition rates during the same period, as shown in figure 7. USCP as well as four other police forces reported that attrition was not a problem, and the most common explanation officials offered was the current economy. For example, USCP and BEP Police officials stated that with fewer jobs available in the economy, officers were remaining employed by their police forces. Specifically, from fiscal years 2005 through 2008, when the national unemployment rate was 4.9 percent, the average attrition rate among the police forces in our review was about 9.2 percent. However, during fiscal years 2009 and 2010, when the national unemployment rate was 9.1 percent—almost twice as high as the preceding 4 years—the combined average attrition rate for the police forces was lower, about 7.5 percent. The FBI Police was the only force with a higher attrition rate (17.9 percent) from fiscal years 2009 through 2010. Two of the 10 police forces that reported that attrition was a great problem—Secret Service Uniformed Division and the FBI Police—also had the highest attrition rates. The Secret Service Uniformed Division cited the high cost of living in the Washington, D.C. metropolitan area and challenging demands of the job as reasons why attrition was a problem. The FBI Police said better pay, positions, and benefits at other forces were reasons why attrition was a problem. Figure 8 illustrates federal police forces’ responses to our survey question on the extent to which attrition was a problem. Furthermore, USCP had no problem filling the vacant positions left by officers leaving the force as they are able to attract qualified applicants. USCP and three of the other nine police forces—Park Police, FEMA Police and FBI Police—reported that they had no difficulty attracting qualified applicants. Our analysis of USCP data indicates that from fiscal year 2006 through 2010, USCP attracted, on average, 27 qualified applicants for each available vacancy and maintained a vacancy rate of 2.6 percent. During that time, the other nine federal police forces had an average vacancy rate of 7.9 percent, ranging from 1.9 percent at BEP Police up to 24.4 percent at FEMA Police. Only two of the other nine forces—BEP Police and Supreme Court Police—had a lower vacancy rate with 1.9 percent and 2.2 percent, respectively. USCP officials cited the slow economy and a competitive salary as reasons why they believe that they have no problem attracting qualified applicants. On the other hand, FBI Police, which has the highest attrition rate among the 10 police forces, stated that it is able to attract a large pool of applicants due to the reputation of the agency. 
FBI officials said that applicants view the police officer position as an entry-level position with hopes of advancing within the agency. In the case of FEMA Police, officials reported that they had no difficulty attracting qualified applicants; however, they had the highest vacancy rate among the 10 police forces. Officials explained that from 2004 through 2010 FEMA Police was building its force and during that time management would periodically place a hold on hiring due to budget constraints. Federal police forces said that their police officers generally leave their forces either because of personal reasons or for better career advancement opportunities, and officers generally stay because of appreciation for the agency's mission. For example, USCP and three other police forces indicated that most of their police officers leave for personal reasons, such as the desire to work closer to home. At the same time, five other police forces—Supreme Court Police, FBI Police, FEMA Police, Pentagon Police, and Postal Security Force—cited career advancement as the reason for officer attrition. Specifically, career advancement, as stated by the agencies, was either the acceptance of a higher-level position at another agency or a transfer to an agency that has greater potential for future promotion. For example, our analysis shows that the majority of FBI's voluntarily separated officers transferred to different positions within the agency, such as an agent or intelligence analyst position. FBI Police officials said that applicants often view the police officer position as a stepping stone to advance to these positions. Furthermore, USCP and three other police forces reported that quality of life was one of the main reasons police officers stay with their forces, citing such underlying factors as the work environment and work-life balance. USCP said that pay and job security were two other main reasons that police officers remain employed by the force. Also, 6 of the 10 forces stated that agency mission was a key reason that officers stay with their agencies. Figure 9 summarizes the primary reasons that federal police force officials offered for why their officers leave or stay. While the USCP Labor Committee asserted that inadequate retirement benefits have contributed to attrition among USCP officers, USCP did not report retirement benefits as a reason why its officers left, as shown in figure 9. On the other hand, there were other police forces that identified inadequate retirement benefits as a reason for officer attrition—BEP Police, FBI Police, and Pentagon Police—which were among the police forces that offer standard, as opposed to enhanced, retirement benefits. However, our analysis suggests that the fact that a police force offers enhanced retirement benefits does not necessarily mean that it will have lower attrition compared to other police forces, and vice versa. For example, the Secret Service Uniformed Division offers enhanced retirement benefits, yet it had the second highest attrition rate among the federal police forces, whereas NIH Police offers standard retirement benefits and has one of the lowest attrition rates. Further, none of the police forces that offered enhanced retirement benefits cited those benefits as a reason why officers stayed at their police force. Although the difference in retirement benefits may not fully indicate why officers leave a police force, it may influence the timing of when officers leave. 
For all of the police forces with enhanced retirement benefits, a greater percentage of the officers who left—73 percent—did so within the first 5 years of service or after 20 years of service, compared to those forces with standard retirement benefits, where 54 percent of separating officers left either within the first 5 years of service or after 20 years of service. The Director of USCP Human Resources stated that if an officer stays with USCP beyond 5 years, that officer is likely to stay at least until the individual reaches early retirement, generally after 20 or 25 years of service. Figure 10 compares the timing of separation of police officers at police forces with enhanced retirement benefits to those with standard retirement benefits from fiscal years 2005 through 2010. As with greater retirement benefits, desire for a better salary was not cited by a majority of police forces as a reason why officers leave from or stay with their forces. As shown in figure 9, only 2 of the 10 forces—FBI Police and Postal Security Force—said that officers leave for better salaries, and 3 of the 10 forces—USCP, Park Police, and Pentagon Police—said that officers stay for better salaries. Also, federal police forces with higher minimum entry-level salaries did not always have lower attrition. For example, USCP and Secret Service Uniformed Division were among the highest paid federal police forces. USCP had among the lowest attrition, and Secret Service Uniformed Division had among the highest. Further, NIH Police, which offered one of the lowest minimum entry-level salaries, maintained the second lowest attrition from fiscal years 2005 through 2010, as displayed in table 3. Most of the police forces in our review stated that the use of human capital flexibilities was of at least some importance for recruiting and retaining officers. Five of the 10 federal police forces in our study, including USCP, reported that human capital flexibilities were important or very important to recruiting and retaining police officers, while two police forces—Postal Security Force and FEMA Police—stated that they were not important. The other forces—FBI Police, Park Police, and Supreme Court Police—reported that human capital flexibilities are somewhat or moderately important. Further, NIH Police was the sole police force that reported a human capital flexibility as one of the primary reasons that officers remained employed by their police force. Figure 11 identifies the federal police forces’ views on the importance of human capital flexibilities. USCP and the other police forces offered a variety of human capital flexibilities related to work-life balance, relocation and position classification, and recruitment and retention, among others. For example, all of the police forces, except the USCP, reported offering cash performance bonuses to their officers. USCP officials noted that they did not offer this particular flexibility because it was not necessary to recruit and retain officers. Conversely, in some cases, flexibilities that were available to police forces to use were not offered to police officers. For example, three police forces—USCP, Park Police and FEMA Police— reported that they did not use all of the recruitment and retention flexibilities available to them because they were not needed since they have a sufficient number of applicants. USCP and Park Police officials further stated that they did not offer these flexibilities due to budget constraints. 
Other human capital flexibilities were not offered because they were not available to police forces. Police forces generally reported not having some flexibilities available to their agencies because they had not requested that such flexibilities be made available, explaining that they did not need them as they were able to attract qualified applicants without offering more flexibilities. For example, the transportation subsidy was not available to Postal Security Force because, according to Postal Security Force officials, they did not need this flexibility to be made available to their force as they did not have difficulty in attracting applicants. Figure 12 provides information on federal police forces' human capital flexibilities. Even though human capital flexibilities are intended to be a tool to recruit and retain employees, and most of the police forces considered them at least somewhat important, the police forces that offered a wider variety of human capital flexibilities did not always have lower attrition rates. For example, NIH Police and Secret Service Uniformed Division were the two forces that offered the widest variety of flexibilities. Yet, NIH Police had the second lowest attrition rate, and Secret Service Uniformed Division had the second highest attrition rate. While retirement benefits, pay, and use of human capital flexibilities could affect attrition, the extent to which they do so can vary for a given agency, and other factors—such as family issues and promotion opportunities, as previously discussed—could influence an employee's decision to leave or remain with his or her employer. Therefore, when an agency is determining its strategy for recruiting and retaining qualified employees, it will be important for the agency to assess the extent to which attrition is a problem and to develop strategies that address that problem. If fully utilized, the benefits available to USCP officers retiring at age 57 under existing FERS provisions would meet retirement income targets generally recommended by some retirement experts. However, the level of benefits depends significantly on the level of employee TSP contributions. In 2010, the USCP Labor Committee presented six proposals that would enhance the current USCP benefit structure. Five of the six would increase existing costs; our review found that the other proposal, which urges the USCP Board to exercise its current authority by allowing officers to voluntarily remain on the job until age 60 rather than retire at 57, as mandated, would have a minimal impact on costs to the federal government and could improve officers' retirement benefits. In June 2011 we reported that there was little consensus among experts about how much income constitutes adequate retirement income. The replacement rate is one measure some economists and financial advisors use as a guide for retirement planning; it is the percentage of pre-retirement income that is received annually in retirement. Our review showed that some economists and financial advisors considered retirement income adequate if the ratio of retirement income to pre-retirement income—the replacement rate—is from 65 to 85 percent. 
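As a minimal sketch of how such a replacement rate could be computed, the example below sums hypothetical annual income from the three FERS sources discussed in this report (the basic annuity, Social Security, and TSP withdrawals) and divides the total by pre-retirement pay. All dollar figures are illustrative assumptions for a hypothetical retiree, not amounts drawn from GAO's analysis.

```python
# Minimal sketch of a replacement-rate calculation for a hypothetical FERS retiree.
# All dollar amounts are illustrative assumptions, not figures from this report.

def replacement_rate(pre_retirement_pay: float, income_sources: dict) -> float:
    """Annual retirement income from all sources as a percentage of pre-retirement pay."""
    return 100.0 * sum(income_sources.values()) / pre_retirement_pay

final_annual_pay = 80_000  # assumed final annual salary
retirement_income = {
    "FERS basic annuity": 34_000,  # assumed defined benefit amount
    "Social Security": 18_000,     # assumed annual benefit
    "TSP withdrawals": 12_000,     # assumed annual drawdown
}

rate = replacement_rate(final_annual_pay, retirement_income)
print(f"Replacement rate: {rate:.0f} percent")  # 80 percent with these assumptions
```

Under these assumed figures, the resulting 80 percent replacement rate would fall within the 65 to 85 percent range cited above.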
To illustrate the effect of current FERS provisions on retirement income, we analyzed retirement benefits for illustrative USCP workers hired at ages 22, 27, and 37, retiring at age 57, and making three different levels of TSP contributions, as described in appendix I. Overall, we found the total replacement rates for retirement at age 57 ranged from a low of about 54 percent for a worker hired at age 37 making no TSP contributions to 91 percent for a worker hired at age 22 making 10 percent TSP contributions, as shown in figure 13. A worker hired at age 27, which is the average age at which individuals are hired by USCP, retiring at age 57, and contributing 5 percent to TSP (and thereby getting the maximum employer match) would have a replacement rate of 75 percent, which would be in the middle of recommended replacement rate targets. Among our illustrative examples, only workers hired at age 37 or those who made no contributions to their TSP accounts would have replacement rates below 75 percent. Workers hired at age 37 may also have retirement income through prior employment. These are examples of individual workers, not households, since we had no basis for simulating the income and retirement benefits of spouses. Any benefits spouses received would add to household retirement income. These examples also assume there is no leakage from the TSP accounts in the form of TSP loans that are not repaid or lump-sum distributions that are not used as retirement income. Our analysis also shows that employee TSP contribution levels over the course of a career can make a significant difference to total retirement income. For workers hired at age 27, for example, increasing the contribution rate from 0 to 5 percent over the entire career would increase replacement rates at age 57 by 11 percentage points, bringing them from just below the recommended target range to the middle of it. In general, the longer a worker's career, the more years they make contributions and earn investment returns, and the greater a difference the contribution rate makes. According to USCP data, in 2010, 12 percent of officers made no contributions to TSP, and another 10 percent contributed less than 5 percent of pay, thereby forgoing some portion of the full employer matching contribution, as shown in figure 14. However, the data suggest that workers typically do increase their contributions over time, and 54 percent of officers contribute more than 5 percent of pay. In 2010, the USCP Labor Committee provided selected members of Congress with six proposed changes to further enhance the current USCP benefit structure. None of the proposals included cost estimates, nor have CBO or OPM estimated the costs of any of the proposed changes. Based on our review, we found that five of the six proposals, if adopted, would increase costs and increase current pay and benefit disparities between USCP and other federal LEO and non-LEO groups. One proposal, which suggests that the USCP Board further exercise its current discretionary authority to allow officers to voluntarily remain on the job until age 60, would have a minimal effect on costs. Table 4 discusses each proposal and its potential effect on costs to the federal government and officers' benefits. The sixth and final proposal suggests that the USCP Board exercise its authority to allow officers to remain employed until age 60. 
The Board currently has the discretionary authority to exempt officers with 20 years of service from the mandatory retirement age of 57 if an officer's continued service is deemed to be in the public interest. According to USCP, the Board has approved 17 such exemptions since Sept. 30, 2006: 16 in 2008 and 1 in 2010. It is unclear how many current officers would be affected by this proposal. According to USCP data, the average age at which officers retired from 2005 through 2010 was 54—3 years before the mandatory retirement age of 57. The actual costs associated with this proposal would be contingent on the number of officers who chose to work longer. However, if the USCP Board deemed it to be in the public interest to allow more officers to voluntarily work past age 57, projections show a slight reduction in pension costs and a slight increase in payroll costs, largely offsetting each other and resulting in a minimal overall long-term cost impact. However, according to OPM, the savings actually realized by USCP directly due to reduced pension costs would be further minimized because the costs and savings would be distributed across the entire LEO population, under the cost allocation methodology used for FERS. In terms of USCP payroll costs, the later retirements would result in a less than 1 percent increase in total payroll throughout the projection period. This increase in payroll costs would largely offset the savings in pension costs, so that the overall net long-term cost effect to USCP of this proposal could be a very small or minimal increase, depending on the amount of pension costs allocated to USCP directly when distributed across the LEO population. In addition, the costs associated with paying agency matching contributions to officers' TSP accounts would also be minimal since the total increase could not exceed 5 percent of the less than 1 percent increase in payroll costs. According to our analysis, retiring at age 60 instead of 57 could significantly increase retirement incomes—more through TSP contributions than through the FERS annuity. The effect of later retirement on the FERS basic annuity is fairly predictable; under the FERS LEO provisions, the benefit formula provides 1 percent of final average pay for each year of additional service after 20 years. The effect on Social Security benefits would be relatively small, but could vary somewhat depending on whether USCP officers continued to work in Social Security covered employment after retiring from USCP. Retiring later has the greatest effect on the TSP component of retirement income for those who contribute to TSP. USCP officers would increase the number of years they make TSP contributions, receive the agency match, and earn investment returns, and reduce the number of years that they would draw down their TSP accounts in retirement. Still, the size of that effect depends on the level of lifetime TSP contributions. As shown in figure 15, taking all three FERS components into account, retiring at age 60 instead of 57 would increase total replacement rates by as little as 4 percentage points for workers making no TSP contributions and by as much as 10 percentage points for workers contributing 10 percent of pay to TSP. Moreover, taking all three FERS components into account, employee TSP contribution levels over the course of a career can make more of a difference to retirement income than 3 additional years of service. 
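To make the annuity portion of this comparison concrete, the sketch below applies the accrual pattern described above (1 percent of final average pay for each year of service beyond 20) together with an assumed 1.7 percent accrual for the first 20 years of LEO service, the standard FERS LEO rate, which this excerpt does not restate. The hire age matches the report's illustrative worker, but the calculation is a simplified sketch rather than GAO's model and ignores the Social Security and TSP components.

```python
# Simplified sketch of the FERS LEO basic annuity as a percentage of final
# average pay. Assumes 1.7 percent per year for the first 20 years of LEO
# service (an assumption here; the standard FERS LEO rate) and 1.0 percent
# per year thereafter, as described in the report. Social Security and TSP
# are not modeled.

def fers_leo_annuity_pct(years_of_service: int) -> float:
    first_20 = 1.7 * min(years_of_service, 20)
    beyond_20 = 1.0 * max(years_of_service - 20, 0)
    return first_20 + beyond_20

hired_at = 22
for retirement_age in (57, 60):
    years = retirement_age - hired_at
    print(f"Retire at {retirement_age} ({years} years of service): "
          f"annuity = {fers_leo_annuity_pct(years):.1f} percent of final average pay")
```

Under these assumptions, the three extra years of service add 3 percentage points of final average pay from the annuity alone; the larger gains described in this report come mainly from additional years of TSP contributions, agency matching, and investment returns.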
In the case of workers hired at age 22 and contributing a constant 5 percent of wages to TSP, retiring at age 60 instead of 57 increases total replacement rates by 8 percentage points from 83 percent to 91 percent. In contrast, increasing the employee contribution rate from 0 to 5 percent over the entire career would increase replacement rates by 14 percentage points if retiring at age 57. We provided a draft of this report for review and comment to USCP and the nine other federal police forces included in this review; the USCP Labor Committee; and OPM. USCP and four other federal police forces— Secret Service Uniformed Division, Pentagon Police, FBI Police, and Postal Security Force—did not provide written comments to be included in this report, but provided technical comments, which we incorporated as appropriate. In emails received January 3 and 4, 2012, HHS and DOI liaisons, respectively, stated that their departments, including NIH Police and Park Police, had no comments on the report. In an email received January 9, 2012, the DHS liaison confirmed that the FEMA Police had no comments on the report. In emails received January 10, 2012, the BEP Police and Supreme Court Police liaisons stated their agencies had no comments on the report. We received comment letters from DHS, OPM, and the USCP Labor Committee, which are reproduced in appendices II, III, and IV, respectively. In commenting on this report, DHS stated that it was pleased with GAO’s recognition of its efforts to develop, implement, and deploy human capital flexibilities. DHS also noted that the report does not contain any recommendations for DHS. In its letter, OPM made several comments regarding one of the proposals that we analyzed in the report—the proposed increase in the mandatory retirement age. OPM stated that the cost savings actually realized by USCP from raising the mandatory retirement age for USCP personnel would be small because the estimated reductions in annual pension costs would be spread across all LEO-employing agencies under the cost allocation methodology used for FERS. We revised our report to clarify this point. OPM also states that increasing the mandatory retirement age is unnecessary since LEO retirement benefits provide a higher annuity rate in order to make early retirement at age 57 economically feasible and inconsistent with other retirement provisions that provide enhanced accrual rates for USCP in comparison to other, non-LEO federal employees. We are not taking a position on whether or not to raise the mandatory retirement age for USCP personnel in this report. Rather, the report provides information on some of the possible effects of doing so, namely that it could increase retirement security at a minimal cost. This report also shows that, generally, the effect of greater employee participation in TSP can provide a larger boost in post-retirement income than the effect of working 3 additional years on the defined benefit portion of retirement income. Finally, we recognize there are many other factors to take under consideration when making such policy decisions, including workforce planning needs; retirement trends across other agencies, industries and occupations; and broader workforce trends in employee health and longevity. OPM also provided technical comments, which we incorporated as appropriate. In its letter, the USCP Labor Committee stated that, even though our report indicates that child care is available to USCP officers, to its knowledge, USCP does not have a child care program. 
It is the case that USCP, itself, does not offer a child care program; however, according to USCP officials, USCP police officers have access to child care through the House and Senate Child Care Centers. We revised our report to clarify this point. The letter also provided commentary on several of their proposals that extended beyond the scope of our review. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to USCP; the nine other federal police forces included in this review; the USCP Labor Committee; and OPM. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Eileen Larence at 202-512-8777 or by e-mail at LarenceE@gao.gov or Charles Jeszeck at 202-512-7215 or by e-mail at JeszeckC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To understand how the United States Capitol Police (USCP) compares to other federal police forces with regard to retirement benefits, compensation, duties, employment requirements, attrition, human capital flexibilities, and costs associated with the proposed benefit enhancements, we addressed the following questions: (1) How does the USCP compare to other federal police forces in the Washington, D.C. metropolitan area with respect to retirement benefits, minimum entry-level salary, duties, and employment requirements? (2) How does attrition at USCP compare to other federal police forces, and how, if at all, have USCP and other federal police forces used human capital tools to recruit and retain qualified officers? (3) What level of retirement income do current USCP benefits provide and what costs are associated with the proposed benefit enhancements? For the first and second objectives, we identified other federal police forces that were potentially comparable to USCP based on (1) prior work on federal uniformed police forces, (2) inclusion in the Office of Personnel Management's (OPM) occupational series for police officers (0083), and (3) the number of officers located in the Washington, D.C. metropolitan area or who receive Washington, D.C. locality pay. Based on this information, we selected nine federal police forces whose officers are part of, or functionally equivalent to, the 0083 occupational series and who have at least 50 officers who are located in the Washington, D.C. metropolitan area or receive Washington, D.C. locality pay, as listed in table 5. OPM's 0083 occupational series—or police series—covers positions for which an individual's primary duties involve the performance or supervision of law enforcement work in the preservation of the peace; the prevention, detection, and investigation of crimes; the arrest or apprehension of violators; and the provision of assistance to citizens in emergency situations, including the protection of civil rights. Also, the primary duty station for the approximately 1,800 USCP officers is Washington, D.C. We excluded military police forces because our review is focused on civilian federal police forces, which have a civilian retirement benefit system as opposed to a military retirement benefit system. We also excluded police forces for intelligence agencies because, unlike other executive branch police forces, they do not report their human capital data to the Central Personnel Data File (CPDF).
In addition to the contacts named above, Kristy Brown, Assistant Director, Kim Granger, Assistant Director, Jonathan McMurray, Analyst-in-Charge, and Su Jin Yon, Analyst-in-Charge, managed this assignment. James Bennett, R.E. Canjar, Geoffrey Hamilton, Lara Miklozek, Christopher Ross, Rebecca Shea, Ken Stockbridge, Roger Thomas, and Frank Todisco made significant contributions to this report. Nicole Harkin, Jeff Jensen, Susanna Kuebler, Sara Margraf, Amanda Miller, and Gregory Wilmoth also provided valuable assistance.
The Washington, D.C. metropolitan (DC metro) area is home to many federal police forces, including the United States Capitol Police (USCP), which maintain the safety of federal property, employees, and the public. Officials are concerned that disparities in pay and retirement benefits have caused federal police forces to experience difficulties in recruiting and retaining officers. In 2010, the USCP Labor Committee proposed six changes to enhance the USCP benefit structure. GAO was asked to review USCP's pay and retirement benefits and compare them to other federal police forces in the DC metro area. GAO (1) compared USCP to other forces with respect to retirement benefits, minimum entry-level salary, duties, and employment requirements; (2) compared attrition at USCP to other forces, and determined how, if at all, USCP and other forces used human capital flexibilities (e.g., retention bonus); and (3) determined what level of retirement income USCP benefits provide and the costs associated with the proposed benefit enhancements. GAO chose nine other federal police forces to review based on prior work, inclusion in the Office of Personnel Management (OPM) police occupational series, and officer presence in the DC metro area. GAO analyzed laws, regulations, OPM data from fiscal years 2005 through 2010, and human capital data from the 10 police forces. GAO also surveyed the 10 forces. USCP and OPM generally agreed with GAO's findings and provided technical comments, which GAO incorporated as appropriate. USCP generally has enhanced retirement benefits, a higher minimum starting salary, and a wider variety of protective duties than other federal police forces in the DC metro area that GAO reviewed, but has similar employment requirements. Even though USCP, Park Police, Supreme Court Police, and Secret Service Uniformed Division are federal police forces, they provide enhanced retirement benefits similar to those offered by federal law enforcement agencies that have additional investigative duties. These enhanced benefits allow their officers to retire early and accrue retirement pensions faster than officers at other federal police forces. USCP and these three forces also offered higher minimum entry-level salaries—ranging from $52,020 to $55,653—than the other six forces GAO reviewed, which had minimum entry-level salaries ranging from $38,609 to $52,018. USCP reported routinely having a wider variety of duties than most other forces. These duties ranged from routinely protecting members of Congress to protecting buildings. USCP and most of the forces generally have similar employment requirements, such as being in good physical condition. USCP's attrition rate is generally lower than that of the majority of the federal police forces in GAO's review, and USCP and seven of the other nine police forces considered human capital flexibilities to be at least of some importance to recruiting and retaining qualified officers, but use of these flexibilities generally depends on recruiting needs, among other factors. From fiscal years 2005 through 2010, USCP had the fourth lowest attrition rate (6.5 percent) among the 10 police forces GAO reviewed; the attrition rates for the nine other forces ranged from 3.5 percent to just under 14 percent. Officials from USCP and four other forces GAO reviewed stated that, currently, attrition is not a problem because of the challenging economy. 
For example, officials from USCP and the Bureau of Engraving and Printing Police stated that their officers want to retain their jobs in the challenging economy. In addition, USCP and other forces said that when their officers do leave the force, they generally do so either because of personal reasons or for better career advancement opportunities, and officers generally stay for reasons such as a good working environment or appreciation for the agency's mission. The extent to which retirement benefits, pay, and use of human capital flexibilities affect attrition can vary among forces given other factors—such as family issues—that could influence an employee's decision to leave or remain with his or her employer. If fully utilized, benefits for USCP officers who retire at the age of 57 under existing provisions generally would be within the range of retirement income targets suggested by some retirement experts. However, the level of benefits depends significantly on the level of employee retirement contributions. In 2010, the USCP Labor Committee presented six proposals that would enhance the current USCP benefit structure. GAO's analysis shows that five of the six would increase existing costs. GAO's review found that the other proposal, which urges the USCP Board to exercise its current authority to allow officers to voluntarily remain employed until age 60 rather than retire at age 57, as mandated, would have only a minimal impact on USCP costs and could increase officers' retirement income.
You are an expert at summarizing long articles. Proceed to summarize the following text: In October 1992, the Congress established SAMHSA to strengthen the nation’s health care delivery system for the prevention and treatment of substance abuse and mental illnesses. SAMHSA has three centers that carry out its programmatic activities: the Center for Mental Health Services, the Center for Substance Abuse Prevention, and the Center for Substance Abuse Treatment. (See table 1 for a description of each center’s purpose.) The centers receive support from SAMHSA’s Office of the Administrator; Office of Program Services; Office of Policy, Planning, and Budget; and Office of Applied Studies. The Office of Program Services oversees the grant review process and provides centralized administrative services for the agency; the Office of Policy, Planning, and Budget develops the agency’s policies, manages the agency’s budget formulation and execution, and manages agencywide strategic and program planning activities; and the Office of Applied Studies gathers, analyzes, and disseminates data on substance abuse practices in the United States, which includes administering the annual National Survey on Drug Use and Health—a primary source of information on the prevalence, patterns, and consequences of drug and alcohol use and abuse in the country. In fiscal year 2003, SAMHSA’s staff totaled 504 full-time-equivalent employees, a decrease from 563 in fiscal year 1999. Thirteen of the employees were in the Senior Executive Service, and the average grade of SAMHSA’s general schedule workforce was 12.5—up from 11.7 in fiscal year 1999. In addition, 25 of the employees were members of the U.S. Public Health Service Commissioned Corps. SAMHSA’s program staff are almost evenly divided among its three centers (see fig. 1), and all are located in the Washington, D.C., metropolitan area. SAMHSA’s budget increased from about $2 billion in fiscal year 1992 to about $3.1 billion in fiscal year 2003. SAMHSA uses most of its budget to fund grant programs that are managed by its three centers. (See fig. 2.) In fiscal year 2003, 68 percent of SAMHSA’s budget funded the Substance Abuse Prevention and Treatment Block Grant ($1.7 billion) and the Community Mental Health Services Block Grant ($437 million). The remaining portion of SAMHSA’s budget primarily funded other grants; $74 million (2.4 percent) of its fiscal year 2003 budget supported program management. SAMHSA’s major activity is to use its grant programs to help states and other public and private organizations provide substance abuse and mental health services. For example, the substance abuse block grant program gives all states a funding source for planning, carrying out, and evaluating substance abuse services. States use their substance abuse block grants to fund more than 10,500 community-based organizations. Similarly, the mental health block grant program supports a broad spectrum of community mental health services for adults with serious mental illness and children with serious emotional disorders. In December 2002, SAMHSA released for public comment its initial proposal for how it will transform the substance abuse and mental health block grants into performance partnership grants. In administering the block grants, the agency currently holds states accountable for complying with administrative and financial requirements, such as spending a specified percentage of funds on particular services or populations. 
According to SAMHSA’s proposal, the new grants will give states more flexibility to meet the needs of their population by removing certain spending requirements. At the same time, the grants will hold states accountable for achieving specific goals related to the availability and effectiveness of mental health and substance abuse services. For example, SAMHSA has proposed that it would waive the current requirement that a state use a certain percentage of its substance abuse block grant funds for HIV services if that state can show a reduction of HIV transmissions among the population with a substance abuse problem. The Children’s Health Act of 2000 required SAMHSA to submit a plan to the Congress by October 2002 describing the flexibility the performance partnership grants would give the states, the performance measures that SAMHSA would use to hold states accountable, the data that SAMHSA would collect from states, definitions of the data elements, obstacles to implementing the grants and ways to resolve them, the resources needed to implement the grants, and any federal legislative changes that would be necessary. In addition to the block grants that SAMHSA awards to all states, the agency awards grants on a competitive basis to a limited number of eligible applicants. These discretionary grants help public and private organizations develop, implement, and evaluate substance abuse and mental health services. In fiscal year 2003, the agency funded 73 discretionary grant programs, the largest of which was the $98.1 million Children’s Mental Health Services Program. This program helps grantees integrate and manage various social and medical services needed by children and adolescents with serious emotional disorders. Discretionary grant applications submitted to SAMHSA go through several stages of review. When SAMHSA initially receives grant applications, it screens them for adherence to specific formatting and other administrative requirements. Applications that are rejected—or screened out—at this stage receive no further review. Applications that move on are reviewed on the basis of their scientific and technical merit by an initial review group and then by one of SAMHSA’s national advisory councils. The councils, which ensure that the applications support the mission and priorities defined by SAMHSA or the specific center, must concur with the scores given to the applications by the initial review group. On the basis of the ranking of these scores given by the peer reviewers and on other criteria posted in the grant announcement, such as geographic location, SAMHSA program staff decide which grant applications receive funding. Center directors and grants management officers must approve award decisions that differ from the ranking of priority scores, and SAMHSA’s administrator approves all final award decisions. SAMHSA’s oversight of its block and discretionary grants consists primarily of reviews of independent audit reports, on-site reviews, and reviews of grant applications. SAMHSA’s Division of Grants Management provides grant oversight, which includes reviewing the results of grantees’ annual financial audits that are required by the Single Audit Act. In general, these audits are designed to determine whether a grantee’s financial statements are fairly presented and grant funds are managed in accordance with applicable laws and program requirements. 
Furthermore, SAMHSA is statutorily required to conduct on-site reviews to monitor block grant expenditures in at least 10 states each fiscal year. The reviews examine states’ fiscal monitoring of service providers and compliance with block grant requirements, such as requirements to maintain a certain level of state expenditures for drug abuse treatment and community mental health services—referred to as maintenance of effort. In addition, SAMHSA project officers—grantees’ main point of contact with SAMHSA—monitor states’ compliance with block grant requirements through their review of annual block grant applications. For example, in the substance abuse block grant application, states report how they spent funds made available during a previous fiscal year and how they intend to obligate funds being made available in the current fiscal year; project officers review this information to determine if states have complied with statutory requirements. For discretionary grants, project officers monitor grantees’ use of funds through several mechanisms, including quarterly reports, site visits, conference calls, and regular meetings. The purpose of monitoring both block and discretionary grants is to ensure that grantees achieve program goals and receive any technical assistance needed to improve their delivery of substance abuse and mental health services. SAMHSA has partnerships with every HHS agency and 12 federal departments and independent agencies that fund substance abuse and mental health programs and activities. For example, within HHS, the Centers for Disease Control and Prevention and the Health Resources and Services Administration have responsibility for improving the accessibility and delivery of mental health and substance abuse services, and the National Institutes of Health funds research on numerous topics related to substance abuse and mental health. The Departments of Education, Housing and Urban Development, Justice, and Veterans Affairs fund substance abuse and mental health initiatives to help specific populations, such as children and homeless people. In addition, the White House Office of National Drug Control Policy is responsible for overseeing and coordinating federal, state, and local drug control activities. Specifically, the office gives federal agencies guidance for preparing their annual budgets for activities related to reducing illicit drug use. It also develops substance abuse profiles of states and large cities, which contain statistics related to drug use and information on federal substance abuse prevention and treatment grants awarded to that state or city. SAMHSA has operated without a strategic plan since October 2002. Although agency officials are in the process of drafting a plan that covers fiscal years 2004 through 2009 and expect to have it ready for public comment in the fall of 2004, they do not know when they will issue a final strategic plan. As part of its strategic planning process, which began in fiscal year 2002, SAMHSA developed three long-term goals for the agency—promoting accountability, enhancing service capacity, and improving the effectiveness of substance abuse and mental health services. SAMHSA’s management has also identified 11 priority issues to guide the agency’s activities and resource allocation and 10 priority principles that agency officials are to consider when they develop policies and programs related to these issues. (See table 2 for a list of SAMHSA’s priority issues and priority principles.) 
For example, when SAMHSA develops grant programs to increase substance abuse treatment capacity—a priority issue—staff are to consider the priority principle of how the programs can be implemented in rural settings. To ensure that the priority issues play a central role in the work of its three centers, SAMHSA established work groups for all the priority issues that include representation from at least two centers. The work groups are to make recommendations to SAMHSA’s leadership about funding for specific programs and to develop cross- center initiatives. Although SAMHSA officials consider the agency’s set of priority issues and priority principles a valuable planning and management tool, it lacks important elements that a strategic plan would provide. For example, SAMHSA’s priorities do not identify the approaches and resources needed to achieve the long-term goals; the results expected from the agency’s grant programs and a timetable for achieving those results; and an assessment of key external factors, such as the actions of other federal agencies, that could affect SAMHSA’s ability to achieve its goals. Without a strategic plan that includes the expected results against which the agency’s efforts can be measured, it is unclear how the agency or the Congress will be able to assess the agency’s progress toward achieving its long-term goals or the adequacy and appropriateness of SAMHSA’s grant programs. Such assessments would help SAMHSA determine whether it needs to eliminate, create, or restructure any grant programs or activities. The priority issue work groups are developing multiyear action plans that could support SAMHSA’s strategic planning efforts, because the plans are expected to include measurable performance goals, action steps to meet those goals, and a description of external factors that could affect program results. SAMHSA officials expect to approve the action plans by June 30, 2004, and include them as a component of the draft strategic plan. SAMHSA’s strategic workforce planning efforts lack key strategies to ensure appropriate staff will be available to manage the agency’s programs. Specifically, SAMHSA has not developed a detailed succession strategy to prepare for the loss of essential expertise and to ensure that the agency can continue to fill key positions. In addition, the agency has not fully developed hiring and training strategies to ensure that its project officers can administer the proposed performance partnership grants. SAMHSA has, however, taken steps to improve project officers’ expertise for managing the current block grants and to increase staff effectiveness by improving the efficiency of its work processes. While SAMHSA recently implemented a performance management system that links staff expectations with the agency’s long-term goals, other aspects of the system do not reinforce individual accountability. SAMHSA’s strategic workforce planning lacks key elements to ensure that the agency has staff with the appropriate expertise to manage its programs. The goal of strategic workforce planning is to develop long-term strategies for acquiring, developing, and retaining staff needed to achieve an organization’s mission and programmatic goals. SAMHSA is implementing a strategic workforce plan—developed for fiscal years 2001 through 2005—that identifies the need to strategically and systematically recruit, hire, develop, and retain a workforce with the capacity and knowledge to achieve the agency’s mission. 
SAMHSA developed the plan to improve organizational effectiveness and make the agency an “employer of choice,” and the plan calls for development of an adequately skilled workforce and efficient work processes. (See app. II for additional information on SAMHSA’s strategic workforce plan.) The plan specifically outlines the need to engage in succession planning to prepare for the loss of essential expertise and to implement strategies to obtain and develop the competencies that the agency needs. SAMHSA did not include a succession strategy in its strategic workforce plan, and the agency has not yet developed such a strategy. As we have previously reported, succession planning is important for strengthening an agency’s workforce by ensuring an ongoing supply of successors for leadership and other key positions. SAMHSA officials told us the agency has begun to engage in succession planning. They also noted that recent retirement and attrition rates have been moderate—about 5 percent and 10 percent, respectively, in fiscal year 2003—and that the agency’s small size allows them to identify those likely to retire and to fill key vacancies as they occur. However, the proportion of SAMHSA’s workforce eligible to retire is expected to rise from 19 percent in fiscal year 2003 to 25 percent in fiscal year 2005, and careful planning could help SAMHSA prepare for the loss of essential expertise. Another shortcoming in SAMHSA’s strategic workforce planning is that the agency has not fully developed hiring and training strategies to ensure that its project officers will have the appropriate expertise to manage the proposed performance partnership grants. The changes in the block grant will alter the relationship between SAMHSA and the states, requiring project officers to negotiate specific performance goals and monitor states’ progress towards these goals. SAMHSA’s block grant reengineering team found that, to carry out these responsibilities, project officers will need training in performance management; elementary statistics; and negotiation, advocacy, and mediation. SAMHSA expected to have a training plan by late May 2004, but has not established a firm date by which the training will be provided. As SAMHSA develops the training plan, it will be important for the agency to consider how it will implement and evaluate the training, including how it will assess the effect of the training on staff’s development of needed skills and competencies. In addition, the reengineering team recommended that the agency use individualized staff development plans for project officers to ensure that they acquire necessary skills. SAMHSA expects to have the individual development plans in place by the end of fiscal year 2004. The team also recommended that the agency develop new job descriptions to recruit new staff. SAMHSA has developed job descriptions that identify the responsibilities all project officers will have to meet and is using those descriptions in its recruitment efforts. SAMHSA has initiated efforts to improve the ability of project officers to assist grantees with the current block grants. For example, SAMHSA officials told us that the agency has made an effort to hire more project officers with experience working in state mental health and substance abuse systems. The agency is also expanding project officers’ training on administrative policies and procedures and is planning to add a discussion of block grant procedures to its on-line policy manual. 
These efforts should help respond to the block grant reengineering team’s finding that project officers require additional training in substance abuse prevention and treatment and block grant program requirements. They should also help address the concerns of state officials who told us that project officers for the block grants have not always had sufficient background in mental health or substance abuse services or have provided confusing or incorrect information on grant requirements. For example, one state received conflicting information from its project officer about the percentage of its substance abuse block grant that it was required to spend for HIV/AIDS services. Similarly, according to another state official, a project officer provided unclear guidance on how to submit a request to waive the mental health block grant’s maintenance of effort requirement, which resulted in the state having to resubmit the request. To meet the goal in its workforce plan of increasing staff effectiveness, SAMHSA is taking steps to improve the agency’s work processes. For example, agency officials expect to reduce the amount of time and effort that staff devote to preparing grant announcements by issuing 4 standard grant announcements for its discretionary grant programs, instead of the 30 to 40 issued annually in previous years. SAMHSA officials estimate that the 4 standard announcements will encompass 75 to 80 percent of the agency’s discretionary grants and believe they will improve the efficiency of the grant award process. In addition, SAMHSA officials told us that while most new award decisions have been made at the end of the fiscal year, they expect that this consolidation will allow the agency to issue some awards earlier in the year. SAMHSA has adopted a new performance management system for its employees that is intended to hold staff accountable for results by aligning individual performance expectations with the agency’s goals—a practice that we have identified as key for effective performance management. SAMHSA is aligning the performance expectations of its administrator and senior executives with the agency’s long-term goals and priority issues and then linking those expectations with expectations for staff at lower levels. As a result, SAMHSA’s senior executives’ performance expectations are linked directly to the administrator’s objectives, and all other employees have at least one performance objective that can be linked to the administrator’s objectives. For example, objectives related to implementing the four new discretionary grant announcements are included in the 2003 performance plans of the appropriate center directors, branch chiefs, and project officers. In contrast, other aspects of SAMHSA’s performance management system do not reinforce individual accountability for results. SAMHSA’s performance management system does not make meaningful distinctions between acceptable and outstanding performance—an important practice in a results-oriented performance management system. Instead, staff ratings are limited to two categories, “meets or exceeds expectations” or “unacceptable.” SAMHSA managers told us that few staff receive an unacceptable rating and that using a pass/fail system can make it difficult to hold staff accountable for their performance. Moreover, this type of system may not give employees useful feedback to help them improve their performance, and it does not recognize employees who are performing at higher levels. 
In addition, SAMHSA’s performance management system does not assess staff performance in relation to specific competencies. Competencies define the skills and supporting behaviors that individuals are expected to exhibit in carrying out their work, and they can provide a fuller picture of an individual’s contributions to achieving the agency’s goals. SAMHSA’s strategic workforce plan includes a description of the competencies that staff need, including technical competencies related to data collection and analysis, co-occurring disorders, and service delivery. However, these competencies have not been incorporated into the agency’s performance management system to help reinforce behaviors and actions that support the agency’s goals. SAMHSA jointly funds grant programs with other federal agencies and departments, often through agreements that enable funds to be transferred between agencies. While these interagency agreements can streamline the grant-making process, SAMHSA’s lengthy procedures for approving them have delayed the awarding of grants. SAMHSA officials told us that they recently implemented policies to expedite the approval process. In addition to jointly funding programs, SAMHSA shares mental health and substance abuse expertise and information with other federal agencies and departments. Grantees with whom we spoke identified opportunities for SAMHSA to better coordinate with its federal partners to disseminate information about effective practices to states and community-based organizations. SAMHSA frequently collaborates with other federal agencies and departments to jointly fund grant programs that support a range of substance abuse and mental health services. (See table 3 for examples of jointly funded programs.) For example, for the $34.4 million Collaborative Initiative to Help End Chronic Homelessness, SAMHSA, the Health Resources and Services Administration, the Department of Housing and Urban Development, and the Department of Veterans Affairs provide funds or other resources related to their own programs and the populations they generally serve. SAMHSA’s funds are directed toward the provision of substance abuse and mental health services for homeless people. Many of SAMHSA’s joint funding arrangements use interagency agreements to transfer funds between agencies, which allow grantees to receive all of their grant funds from a single federal agency or department (see table 4). For example, Safe Schools, Healthy Students grantees receive all of their funds from the Department of Education, even though SAMHSA also supports this program. SAMHSA officials told us that interagency transfers create fewer funding streams and make the process less confusing to grantees. While transferring funds can streamline the grant process, SAMHSA’s system for approving interagency agreements has been inefficient. Before the funds are transferred, the agencies involved must approve an interagency agreement describing the amount of money being transferred and how it will be used. Officials from the Departments of Justice and Education told us that SAMHSA’s approval process was lengthy and resulted in agreements being completed at the last minute. The Department of Education found that it took SAMHSA more than 70 days to approve the 2003 Safe Schools, Healthy Students interagency agreement— a period that SAMHSA estimated was about 40 days longer than in previous years. 
SAMHSA officials told us that the approval process was complicated by the lack of a clear policy identifying the SAMHSA management officials who needed to review and approve the agreements. In March 2004, SAMHSA implemented new policies that clarify the process for reviewing and approving agreements and the responsibilities of specific SAMHSA officials. At that time, SAMHSA also began to track the time it takes for the agency to review and approve interagency agreements. It is too early to know how SAMHSA’s new policies will affect the efficiency of the approval process. SAMHSA provides its expertise and information on substance abuse and mental health to other federal agencies and departments and collaborates with them to share information with states and community-based organizations. For example, officials from the Health Resources and Services Administration told us that in coordinating health care and mental health services for people who are homeless, they use SAMHSA’s knowledge of community-based substance abuse and mental health providers who can work with primary care providers. Also, the Office of National Drug Control Policy uses data from SAMHSA’s National Survey on Drug Use and Health to determine the extent to which it has achieved its goals and objectives. This survey also provides data to support HHS’s Healthy People 2010’s substance abuse focus area. Several grantees told us that SAMHSA and the National Institutes of Health could better collaborate to ensure that providers have information about the most effective ways to deliver substance abuse and mental health services. Recognizing the importance of such a partnership, the two agencies recently initiated the Science to Service initiative, which is designed to better integrate the National Institutes of Health’s research on effective practices with the services funded by SAMHSA. For example, in fiscal year 2003, SAMHSA and the National Institutes of Health funded a grant to help states more readily integrate effective mental health practices into service delivery in their states. In addition, grantees recommended that SAMHSA better coordinate with the Departments of Education and Justice to disseminate information about effective practices to states and community-based organizations. For example, a state official told us that SAMHSA and the Department of Education do not ensure that their processes for evaluating substance abuse prevention programs result in comparable sets of model programs. The two agencies evaluate programs using different criteria and rate some prevention programs differently. SAMHSA reported that it may be appropriate for agencies to have different criteria because each agency must have the ability to tailor its criteria to meet the specific goals of its grant programs. A SAMHSA official acknowledged, however, that SAMHSA and the Departments of Education and Justice are discussing how they can refine their criteria for evaluating prevention programs and better communicate the results to grantees. Officials from state mental health and substance abuse agencies and community-based organizations identified opportunities for SAMHSA to better manage its block and discretionary grant programs. They cited concerns with SAMHSA’s grant application processes, site visits, and the availability of information on technical assistance. SAMHSA plans to transform its block grants into performance partnership grants in fiscal years 2005 and 2006, and the agency, along with the states, is preparing for the change. 
However, state officials are concerned that SAMHSA has not finalized the performance data that states would report under the proposed performance partnership grants. In addition, SAMHSA has not completed the plan it must send to the Congress identifying the data reporting requirements for the states and any legislative changes needed to implement the performance partnership grants. Officials from states and community-based organizations told us that SAMHSA could improve administration of its grant programs, citing concerns related to the agency’s grant application review processes, site visits to review states’ compliance with block grant requirements, and the availability of information on technical assistance opportunities. In some instances, SAMHSA has begun to respond to these issues. Grantees we talked to expressed concern that SAMHSA rejects discretionary grant applications without reviewing them for merit if they do not comply with administrative requirements. SAMHSA told us that of the 2,054 fiscal year 2003 applications it received after January 3, 2003, 393—19 percent—were rejected in this initial screening process. Of the 14 grantees we interviewed, 4 told us that SAMHSA rejected 1 of their 2003 grant applications without review and a fifth had 5 applications rejected. Grantees told us that this practice does not enable applicants to obtain substantive feedback on the content of their applications. They also said that SAMHSA’s practice of waiting to notify applicants of the rejection until it notifies all applicants of funding decisions—near the start of the next fiscal year—impedes their fiscal planning. In response to concerns over the number of grant applications it rejected on administrative grounds in fiscal year 2003, SAMHSA has changed the way it will screen fiscal year 2004 applications. On March 4, 2004, SAMHSA announced revised requirements that are intended to simplify and expedite the initial screening process for discretionary grants. For example, SAMHSA will no longer automatically screen out applicants because their application is missing a section, such as the table of contents. Instead, the agency will consider whether the application contains sufficient information for reviewers to consider the application’s merit. In addition, SAMHSA will allow applicants more flexibility in the format of their application. Instead of focusing exclusively on specific margin sizes or page limits, SAMHSA will consider the total amount of space used by the applicant to complete the narrative portion of the application. SAMHSA expects that under the new procedures it will screen out significantly fewer applications. However, some applications continue to be rejected for administrative reasons and will not receive a merit review. In another change, a SAMHSA official told us that it would begin to notify applicants within 30 days of the decision if their application is rejected. State officials told us that the length and complexity of the mental health and substance abuse block grant applications create difficulties for both states and project officers. They described the block grant applications as confusing, repetitive, and difficult to complete. Furthermore, officials in five states told us that SAMHSA project officers may not be using the information states provide in the block grant application as well as they could, especially the narrative portion. 
For example, one state official received questions from the project officer about the state’s substance abuse activities for women and children that could have been answered by reading the narrative section of the application. State officials suggested that project officers could more easily use the information states provided if the application were streamlined and included only the information most important to SAMHSA. They suggested that SAMHSA make these changes when it converts the block grants to performance partnership grants. SAMHSA officials told us they will not know whether the applications can be streamlined until they finalize the format of the performance partnership grants. To allow center staff to retrieve information more quickly from the current substance abuse block grant application, the Center for Substance Abuse Prevention and the Center for Substance Abuse Treatment began to use a Web-based application in spring 2003. The Web-based application allows the centers to retrieve information collected from the substance abuse block grant applications and more quickly develop reports analyzing data across states, such as the number of states in compliance with specific block grant requirements. State officials told us that SAMHSA’s site visits to review states’ compliance with block grant requirements do not always allow the agency to adequately review their programs. For example, officials in three states told us that the length of these visits—often 3 to 5 days—is too short for SAMHSA to fully understand conditions in the state that affect the provision of services. Officials in two of these states said 3-day site visits did not provide reviewers with enough time to visit mental health care providers in the more remote parts of the state and observe how they respond to local service delivery challenges. A SAMHSA official told us that 3-day site visits are generally adequate for most states, but states are able to request a longer visit. The official acknowledged that SAMHSA could better communicate this flexibility to states. Officials from eight states said the technical assistance they received from SAMHSA and its contractors was helpful; officials from five states told us that the agency could improve its dissemination of information about what assistance is available to grantees. For example, one state official suggested that SAMHSA provide more information on its Web site about what assistance is available or has been requested by other states. He said that making this information available is especially important because there is high staff turnover at the state level, and relatively new staff may have little knowledge about what SAMHSA offers. Several state mental health officials commented that SAMHSA’s substance abuse block grant has a more structured technical assistance program than the mental health block grant and is able to offer more assistance opportunities. SAMHSA officials noted that the substance abuse block grant program has more funds and staff to devote to the provision of technical assistance. SAMHSA’s Center for Substance Abuse Treatment, for example, has a separate program branch to manage technical assistance contracts. This center is in the process of creating a list of documents that grantees developed with the help of technical assistance contractors—such as a state strategic plan for providing substance abuse services—so that other states can use them as models. 
To prepare for the mental health and substance abuse performance partnership grants—which SAMHSA plans to implement in fiscal years 2005 and 2006, respectively—SAMHSA has worked with states to develop performance measures and improve states’ ability to report performance data. Specifically, SAMHSA identified outcomes for which states would be required to report performance data. SAMHSA asked states to voluntarily report on performance measures related to these outcomes in their fiscal year 2004 block grant applications, and the agency provided states with funding to help them make needed changes to their data collection and reporting systems. Over fiscal years 2001 and 2002, SAMHSA awarded 3-year discretionary grants of about $100,000 per year to state mental health and substance abuse agencies to develop systems for collecting and reporting performance data. State officials told us they used the grants in a variety of ways, such as to train service providers to report performance data. Substance abuse and mental health agency officials we talked to told us that their states have made progress in preparing to report on performance measures, but that their states would need to make additional data system changes before they could report all of the data that SAMHSA has proposed for the performance partnership grants. For example, officials from three states told us that they were still unprepared to report data that would come from other state agencies—such as information on school attendance obtained from the state’s education system. In addition, several state officials told us they have been unable to complete their preparations because they are waiting for SAMHSA to finalize the data it will require states to report. For example, a state mental health director told us that the lack of final reporting requirements has contributed to a delay in the implementation of the state’s new information management system. Similarly, officials from a state substance abuse agency told us that without SAMHSA’s final requirements, the state agency is limited in its ability to require substance abuse treatment providers to change the way they report performance data. In addition, the Congress may need to make statutory changes before SAMHSA can implement the performance partnership grants, but SAMHSA has not given the Congress the information it sought on what changes are needed or on how the agency proposes to implement the grants—including the final data reporting requirements for the states. In 2000, the Congress directed SAMHSA to submit a plan containing this information by October 2002. SAMHSA submitted this plan to HHS for internal review on April 12, 2004, after which the plan must receive clearance from the Office of Management and Budget. SAMHSA could not tell us when it expects to submit the plan to the Congress. SAMHSA’s leaders are taking steps to improve the management of the agency, but key planning tools are not fully in place. SAMHSA has been slow to issue a strategic plan, which is essential to guide the agency’s efforts to increase program accountability and direct resources toward accomplishing its goals. Furthermore, while SAMHSA is in the process of implementing its strategic workforce plan, the agency’s workforce planning efforts lack important elements—such as a detailed succession strategy—to help SAMHSA prepare for future workforce needs.
Because future retirements and attrition could leave the agency without the appropriate workforce to effectively carry out its programs, it would be prudent for SAMHSA to have a succession strategy to help it retain institutional knowledge, expertise, and leadership continuity. In addition, SAMHSA has not completed plans to ensure that its workforce has the appropriate expertise to manage the proposed performance partnership grants, which would represent a significant change in the way SAMHSA holds states accountable for achieving results. These grants would require new skills from SAMHSA’s workforce. Therefore, it is important for SAMHSA to complete hiring and training strategies to ensure that its workforce can effectively implement the grants. SAMHSA cannot convert the block grants to performance partnership grants until it gives the Congress its implementation plan, which was due in October 2002. The Congress needs the information in SAMHSA’s plan for its deliberations about legislative changes that may be needed to allow SAMHSA to implement the performance partnership grants. In addition, the plan’s information on the performance measures SAMHSA will use to hold states accountable is needed by the states as they prepare to report required performance data. If SAMHSA does not promptly submit this plan, states may not be ready to submit all needed data by the time SAMHSA has planned to implement the grants—in fiscal years 2005 and 2006—and SAMHSA may not have the legislative authority needed to make the mental health and substance abuse prevention and treatment block grant programs more accountable and flexible. Finally, as SAMHSA makes efforts to increase program accountability, it is in the agency’s interest to fund state and local programs that show the most promise for improving the quality and availability of prevention and treatment services. Although SAMHSA has made changes that should reduce the number of discretionary grant applications rejected solely for administrative reasons—such as exceeding the specified page limitation—some applications are still not reviewed for merit because of administrative errors. Allowing applicants to correct such errors and resubmit their application within an established time frame could help ensure that reviewers are able to assess the merits of the widest possible pool of applications and could increase the likelihood of SAMHSA’s funding the most effective mental health and substance abuse programs. We recommend that, to improve SAMHSA’s management of its programs, promote the effective use of its resources, and increase program accountability, the Administrator of SAMHSA take the following four actions: (1) develop a detailed succession strategy to ensure SAMHSA has the appropriate workforce to carry out the agency’s mission; (2) complete hiring and training strategies, and assess the results, to ensure that the agency’s workforce has the appropriate expertise to implement performance partnership grants; (3) expedite completion of its plan for the Congress providing information on the agency’s proposal for implementing the performance partnership grants and any legislative changes that must precede their implementation; and (4) develop a procedure that gives applicants whose discretionary grant application contains administrative errors an opportunity to revise and resubmit their application within an established time frame. We provided a draft of this report to SAMHSA for comment. Overall, SAMHSA generally agreed with the findings of the report.
(SAMHSA’s comments are reprinted in app. III.) SAMHSA said that it already has efforts under way to address each of the report’s key findings and recommendations, and that it endorses the value the report places on strategic planning, workforce planning, and collaboration with federal, state, and community partners. SAMHSA indicated that it will continue to engage in a strategic planning process and said that its priority issues and principles are central to this process. As we had noted in the draft report, SAMHSA commented that it expects to complete and approve the action plans developed by each of its priority issue work groups by June 30, 2004. SAMHSA also said that it would update its draft strategic plan to include summaries of the action plans, and then disseminate the draft for public comment, submit it to HHS for clearance, and publish the final plan. Our draft report stated that SAMHSA did not want to issue its strategic plan before HHS issued the new departmental strategic plan. In its comments, SAMHSA noted that HHS published its strategic plan in April 2004 and that this was no longer an issue affecting SAMHSA’s schedule for publishing its plan. In its comments, SAMHSA also stated that it places a high priority on the development of a succession plan. SAMHSA said that it is preparing for an anticipated increase in the agency’s attrition rate over the next several years and is reviewing the pool of staff eligible to retire to identify the skills and expertise that could be lost to the organization. While SAMHSA is beginning to engage in succession planning, it has not developed a detailed succession strategy. We have made our recommendation more specific to communicate the need for SAMHSA to develop such a strategy. In response to our recommendation that SAMHSA complete hiring and training strategies to ensure that the agency’s workforce has the appropriate expertise to implement performance partnership grants, SAMHSA said that it is addressing the need for its workforce to have the appropriate expertise. For example, SAMHSA indicated that it has initiated efforts to identify training needed by current staff and to ensure that new staff have needed skills. However, we believe it is important for SAMHSA to fully develop both hiring and training strategies to ensure that it has the appropriate workforce in place when it implements performance partnership grants. In response to our recommendation to develop a procedure to allow applicants to correct administrative errors in discretionary grant applications, SAMHSA commented that its new screening procedures have yielded a substantial increase in the percentage of applications that will be reviewed for merit. As a result, SAMHSA believes our recommendation is premature and said that it plans to evaluate the results of the revised procedures before making any additional changes. While early evidence indicates that the new procedures are reducing the proportion of applications rejected for administrative reasons, these procedures have not eliminated such rejections. Because it is important for reviewers to be able to assess the merits of the widest possible pool of applications, we believe it would be beneficial for SAMHSA to develop the procedure we are recommending without delay. Finally, in response to the report’s discussion of the performance partnership grants, SAMHSA commented that it will continue its efforts to increase accountability in its block grant and discretionary grant programs. 
SAMHSA said that the proposed fiscal year 2005 mental health and substance abuse block grant applications contain outcome measures that the agency expects to use to monitor grant performance. However, these applications have not been finalized, and the draft applications indicate that several of the performance measures are still being developed. It is important for SAMHSA to give the Congress its plan for implementing the performance partnership grants so that the Congress can consider any legislative changes that might be necessary to implement the grants and SAMHSA can more fully hold states accountable for achieving specific results. SAMHSA also provided technical comments. We revised our report to reflect SAMHSA’s comments where appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of SAMHSA, appropriate congressional committees, and other interested parties. We will also make copies available to others who are interested upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (312) 220-7600 or Helene Toiv, Assistant Director, at (202) 512-7162. Janina Austin, William Hadley, and Krister Friday also made major contributions to this report. In performing our work, we obtained documents and interviewed officials from the Substance Abuse and Mental Health Services Administration (SAMHSA). While we reviewed documents related to SAMHSA’s strategic planning and to its performance management system, we did not perform a comprehensive evaluation of SAMHSA’s management practices. We also reviewed the policies and procedures the agency uses to oversee states’ and other grantees’ use of block and discretionary grant funds. We interviewed officials from SAMHSA’s Office of the Administrator; Office of Policy, Planning, and Budget; Office of Program Services; Office of Applied Studies; Center for Mental Health Services; Center for Substance Abuse Prevention; and Center for Substance Abuse Treatment. To determine how SAMHSA collaborates with other federal agencies and departments, we interviewed officials from the Department of Education, the Department of Justice, and the Department of Health and Human Services’ Centers for Disease Control and Prevention, Health Resources and Services Administration, and National Institutes of Health. After reviewing lists of collaborative efforts provided by SAMHSA’s centers, we selected these agencies because each one is involved in a collaborative effort with each of SAMHSA’s three centers. Within these agencies, we identified collaborative initiatives that involve interagency committees, data sharing, interagency agreements, and other joint funding arrangements. We interviewed and obtained documentation related to these initiatives from federal agency officials who were directly involved in them. We also interviewed officials from the Centers for Medicare & Medicaid Services because Medicaid is the largest public payer of mental health services and officials from the Indian Health Service, which provides substance abuse and mental health services to tribal communities. We interviewed officials from the White House Office of National Drug Control Policy, which coordinates federal antidrug efforts. 
To determine how SAMHSA collaborates with state grantees, we interviewed officials from state mental health and substance abuse agencies. We interviewed mental health agency officials in California, Colorado, Connecticut, Mississippi, and South Dakota, and substance abuse agency officials in Iowa, Massachusetts, Montana, Texas, and Virginia. We selected these states on the basis of variation in their geographic location, the size of their fiscal year 2003 mental health or substance abuse block grant award, the number of discretionary grant awards they received in fiscal year 2002, and their involvement in SAMHSA initiatives to improve states’ ability to report mental health and substance abuse data. To gain a better understanding of SAMHSA’s collaborative efforts, we interviewed officials from community-based organizations that received discretionary grants from each of SAMHSA’s centers. We selected the largest discretionary grant programs available to community-based organizations from the Center for Substance Abuse Treatment (the Targeted Capacity Expansion: HIV Program) and the Center for Mental Health Services (the Child Traumatic Stress Initiative). We selected the Center for Substance Abuse Prevention’s Best Practices: Community-Initiated Prevention Intervention Studies—the center’s second largest discretionary grant program available to community-based organizations—to provide a variety of SAMHSA’s priority issues. We also selected one grant that was jointly funded by SAMHSA and the Health Resources and Services Administration (the Collaboration to Link Health Care for the Homeless Programs and Community Mental Health Agencies). (See table 5.) For each of the four grant programs, we selected one community-based organization that received grant funds in fiscal year 2001 or 2002 and that was located in 1 of the 10 states we selected. To obtain additional information about SAMHSA’s collaboration with state agencies and other grantees, we interviewed representatives of the National Association of State Alcohol and Drug Abuse Directors, the National Association of State Mental Health Program Directors, and the Community Anti-Drug Coalitions of America. These organizations represent, respectively, state substance abuse agencies, state mental health agencies, and community-based substance abuse prevention organizations. We also interviewed representatives of the National Alliance for the Mentally Ill and the National Council on Alcoholism and Drug Dependence, because those organizations represent consumers of mental health services and substance abuse services, respectively. We conducted our work from July 2003 through May 2004 in accordance with generally accepted government auditing standards. The goals and strategies below are drawn from SAMHSA’s strategic workforce plan (see app. II). Goals: SAMHSA has a strong leadership and management capacity, a clearly defined role as a national leader in substance abuse and mental health services, and a well-structured organization to support its mission. SAMHSA has effective and efficient processes and methods for accomplishing its mission and optimizing its workforce. SAMHSA strategically invests in its workforce by putting the right people in the right place at the right time. SAMHSA systematically recruits, selects, and hires talented employees and continuously re-recruits them by creating a great place to work and by developing the competencies needed to achieve its mission. Strategies: Ensure that SAMHSA has a cross-functional executive leadership team that works together to guide the organization toward achieving its mission.
Improve the development, review, and management of discretionary grants. Change the size, scope, and distribution of the workforce of SAMHSA. Improve the publication clearance process. Anticipate competency needs and strategically close competency gaps where needed. Develop a clear and compelling multiyear strategy that is dynamic, aligned with the organizational mission, and linked to the performance of each organizational component and employee. Examine the block and formula grants process to create a more efficient and streamlined process. Continue to enhance a systematic approach to recruiting skilled talent in a tight labor market. Establish a new system for responding to external requests. Continue to enhance a systematic approach to retaining existing expertise. Create an organizational structure that maintains the strengths of the current system, focuses on quality, and increases flexibility and capacity. Continue to enhance customer-focused and effective infrastructure at SAMHSA. Enhance the design and implementation of a systematic approach to developing the workforce. Develop a systematic performance management system to align individual effort with strategic imperatives. Implement a technology tool to provide SAMHSA with workforce profile data for managing its workforce.
The Substance Abuse and Mental Health Services Administration (SAMHSA) is the lead federal agency responsible for improving the quality and availability of prevention and treatment services for substance abuse and mental illness. The upcoming reauthorization review of SAMHSA will enable the Congress to examine the agency's management of its grant programs and plans for converting its block grants to performance partnership grants, which will hold states more accountable for results. GAO was asked to provide the Congress with information about SAMHSA's (1) strategic planning efforts, (2) efforts to manage its workforce, and (3) partnerships with state and community-based grantees. SAMHSA has not completed key planning efforts to ensure that it can effectively manage its programs. The agency has operated without a strategic plan since October 2002, and although SAMHSA officials are drafting a plan, they do not know when it will be completed. SAMHSA developed long-term goals and a set of priority issues that provide some guidance for the agency's activities, but they are not a substitute for a strategic plan. In particular, they do not identify the approaches and resources needed to achieve the agency's long-term goals and the desired results against which the agency's programs can be measured. SAMHSA also has not fully developed strategies to ensure it has the appropriate staff to manage the agency's programs. Although the proportion of SAMHSA's staff eligible to retire is increasing, the agency has not developed a detailed succession strategy to prepare for the loss of essential expertise and to ensure that the agency continues to have the ability to fill key positions. In addition, the proposed performance partnership grants will change the way SAMHSA administers its largest grant programs, but the agency has not completed hiring and training strategies to ensure that its workforce will have the skills needed to administer the grants. Finally, SAMHSA's system for evaluating staff performance does not distinguish between acceptable and outstanding performance, and the agency does not assess staff performance in relation to specific competencies--practices that would help reinforce individual accountability for results. SAMHSA has opportunities to improve its partnerships with state and community-based grantees. For example, grantees objected to SAMHSA's practice of rejecting discretionary grant applications that do not comply with administrative requirements--such as those that exceed page limitations--without reviewing them for merit. Rejecting applications solely on administrative grounds potentially prevents SAMHSA from supporting the most effective programs. SAMHSA's recent changes to the review process should reduce such rejections, but have not eliminated them. State officials are also concerned that SAMHSA has not finalized the performance data that states would be required to report under the proposed performance partnership grants. To comply, states will need to change their data systems, but they cannot complete these changes until SAMHSA finalizes the requirements. The Congress directed SAMHSA to submit a plan by October 2002 describing the final data reporting requirements and any legislative changes needed to implement the grants, but SAMHSA has not yet completed the plan. This delay could prevent the agency from meeting its current timetable for implementing the mental health and substance abuse performance partnership grants in fiscal years 2005 and 2006, respectively.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Stryker family of vehicles consists of 10 eight-wheeled armored vehicles mounted on a common chassis that provide transport for troops, weapons, and command and control. Stryker vehicles weigh on average about 19 tons, or 38,000 pounds, substantially less than the M1A1 Abrams tanks (68 tons) and the Bradley Fighting vehicle (33 tons), the primary combat platforms of the Army’s heavier armored units. The C-130 cargo aircraft is capable of tactical, or in-theater, transport of one Stryker vehicle; the Army’s Abrams tank and Bradley Fighting vehicle exceed the C-130 aircraft’s size and weight limits. The Army’s original operational requirements for Stryker vehicles included (1) the capability of entering, being transportable in, and exiting a C-130 aircraft; (2) a combat-capable deployment weight of no more than 38,000 pounds to allow C-130 transport of 1,000 miles; and (3) the capability of immediate combat operations after unloading. The Army’s most current operational requirements for Stryker vehicles required the same vehicle weight and C-130 transport capabilities without reference to C-130 transport of 1,000 miles. The Army has similar operational requirements for its Future Combat Systems’ vehicles. The Army’s April 2003 Operational Requirements document for the Future Combat Systems requires the vehicles’ essential combat configuration to be no greater than 38,000 pounds and have a size suitable for C-130 aircraft transport. A memorandum of agreement between the Air Force and the Army, issued in 2003, set procedures allowing C-130 transport of 38,000-pound Stryker vehicles aboard Air Force aircraft, but required that the combined weight of the vehicles, other cargo, and passengers shall not exceed C-130 operational capabilities, which vary based on mission requirements, weather, and airfield conditions, among other factors. Eight of the 10 vehicle configurations are being acquired production ready—meaning they require little engineering design and development work prior to production. Two of the 10 vehicle configurations, the Mobile Gun System and the NBC Reconnaissance vehicle, are developmental vehicle variants—meaning that a substantial amount of design, development, and testing is needed before they can go into production. Table 1 provides descriptions of the 10 Stryker vehicles. Three of the vehicles are shown in figures 1 to 3. The Army selected one light infantry brigade and one mechanized infantry brigade at Fort Lewis, Washington, to become the first two of six planned Stryker brigades. The first of these brigades, the 3rd Brigade, Second Infantry Division, became operational in October 2003, at which time the brigade was deployed to Iraq. The second of the two Fort Lewis brigades became operational in May 2004, and plans are for it to deploy to Iraq in late 2004. The Army plans to form four more Stryker brigades from 2005 through 2008. The planned locations of the next four brigades are Fort Wainwright/Fort Richardson, Alaska; Fort Polk, Louisiana; Schofield Barracks, Hawaii; and a brigade of the Pennsylvania Army National Guard. Acquisition of the eight Stryker production vehicle configurations is about two-thirds complete, with about 68 percent of the more than 1,800 planned production vehicles ordered, and a low rate of production for the two developmental Strykers is scheduled for September 2004.
Estimated program costs have increased because of, among other reasons, increases in the Army’s estimate for related military construction, such as for the cost of building new Stryker vehicle maintenance facilities. However, the Army does not yet have reliable estimates for the Stryker’s operating costs, such as for vehicle maintenance, because of limited peacetime operational experience with the vehicles. The Army is pursuing three acquisition schedules for the Stryker production and developmental vehicles. Since the November 2000 Stryker vehicle contract award, the Army has ordered 1,231 production vehicles—about 68 percent—of the 1,814 production vehicles the Army plans to buy for the six Stryker brigades. Of the 1,231 vehicles ordered, 800 have been delivered to the brigades, including all of the production vehicles for the first two Stryker brigades. The Army is currently fielding Stryker production vehicles for the third of the six planned brigades. The third brigade is to be fielded in Alaska. Thus far, the Army has bought limited quantities of the developmental vehicle variants—8 Mobile Gun System and 4 NBC Reconnaissance vehicles—as prototypes and for use in testing at various test sites around the country. Of 238 Mobile Gun Systems the Army plans to buy overall, current plans are to buy 72 initially upon approval for low-rate initial production scheduled for September 2004. The Army plans low-rate initial production of 17 NBC Reconnaissance vehicles also in September 2004. The Mobile Gun System is not scheduled to reach a full production decision until September 2006 at the earliest, while the NBC Reconnaissance vehicle is not scheduled to reach its full production decision until 2007. Table 2 below shows the status of Stryker vehicle acquisition as of April 2004. The Stryker vehicle program’s total costs increased, in then-year dollars, from the original November 2000 estimate of $7.1 billion to the December 2003 estimate of $8.7 billion—or about 22 percent. The increases occurred primarily due to revised estimates for the associated cost of military construction, such as that needed to upgrade maintenance and training facilities for a Stryker brigade, but were also due to lesser increases in procurement and research, development, test, and evaluation (RDT&E) costs for the vehicles—which together grew by about 8 percent from the original November 2000 estimate. In then-year dollars, the estimated cost of military construction accounted for the largest increase in the Stryker program’s cost estimate. In December 2003, the Army increased its estimate for military construction by about $1.01 billion over the original November 2000 estimate, from $322 million to $1.3 billion. (See table 3.) As in all major Department of Defense acquisition programs, military construction costs are included in the program’s total costs. According to the Army, the military construction cost estimate increased because the December 2003 estimate reflects (1) the identification of all five sites scheduled to receive Stryker brigades and (2) the total cost of upgrading or building maintenance and training facilities at these installations to accommodate a Stryker brigade. When the original estimate was made, only one site had been identified to receive a Stryker brigade and that estimate identified just the cost of maintenance facility upgrades. The Stryker vehicle’s procurement costs increased by about $390 million. 
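As a brief aside, the growth figures just cited can be reproduced with simple arithmetic. The short Python sketch below is illustrative only and is not an official cost model: it recomputes the roughly 22 percent total-program growth and the average acquisition cost per vehicle from the then-year estimates quoted in this section, under the assumption, made here for illustration, that the per-vehicle average equals the total program estimate divided by the planned vehicle quantity; rounding in the source figures explains the small differences.

```python
# Illustrative arithmetic check only. The dollar figures are the then-year
# estimates cited in the surrounding text; the assumption that the per-vehicle
# average equals total estimate / planned quantity is made here for illustration.

nov_2000_total = 7.1   # total program estimate, billions of then-year dollars
dec_2003_total = 8.7   # December 2003 estimate, billions of then-year dollars

growth_pct = (dec_2003_total - nov_2000_total) / nov_2000_total * 100
print(f"Total program growth: about {growth_pct:.0f} percent")  # about 23 percent; the text rounds to about 22 percent

for label, total_billions, planned_vehicles in [("November 2000", 7.1, 2131), ("December 2003", 8.7, 2096)]:
    per_vehicle_millions = total_billions * 1000 / planned_vehicles
    print(f"{label}: about ${per_vehicle_millions:.2f} million per vehicle")
# Prints roughly $3.33 million and $4.15 million, close to the $3.34 million and
# $4.13 million averages cited in the text.
```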
The largest factor in the increase of procurement costs was the higher than originally estimated costs of procuring add-on reactive armor, including the additional costs to equip six Stryker brigades with add-on armor, instead of four brigades as originally planned. Also, the cost of RDT&E increased about $138 million, from $508 million to $645.6 million. Most of the RDT&E cost increase is attributable to revised estimates for the cost of test and evaluation, development, and system engineering for the developmental vehicles. The average acquisition cost per vehicle increased by about $0.79 million, from $3.34 million to $4.13 million. The program costs and average acquisition cost per vehicle estimates reflect a reduction in the number of Strykers planned from 2,131 to 2,096. (See table 3 above.) The Army does not have reliable estimates of Stryker vehicle operating costs because, with the first Stryker brigade’s deployment to Iraq, it lacks sufficient peacetime operational experience with the vehicles. The Army considers 3 years of actual peacetime operational cost data to be sufficient for reliable estimates. Since none of the production vehicles have 3 years of peacetime operating experience, reliable operating cost estimates will not be available until 2005 at the earliest. With the Mobile Gun System and NBC Reconnaissance vehicles still in development, it will be several years before these vehicles are fully fielded and sufficient data are available for reliable estimates of their operating costs. According to the Army, current Stryker vehicle operating cost estimates, shown in table 4 below, are engineering estimates based in part on operating costs for another vehicle in the Army’s inventory—the M-113 armored personnel carrier. The estimates assume peacetime operations. Vehicle operating costs include the costs of maintenance, repair, and consumable and repairable parts. The Army calculates vehicle cost per mile by tracking vehicle mileage and the actual costs of consumable or replaceable parts used. However, the short time frame between the fielding of the first Stryker brigade’s production vehicles—May 2002 through January 2003—and the brigade’s deployment to Iraq in October 2003 limited the amount of time and miles the vehicles were in peacetime service. Similarly, fielding of Stryker vehicles for the second brigade was completed in January 2004. While the Army collected operational cost and mileage data for both brigades, there were insufficient actual operating costs and miles on the vehicles to make reliable estimates. Consequently, until the Army can collect more actual peacetime operating cost data for the production vehicles, it will not be able to determine actual vehicle operating costs and make reliable operating cost estimates for these vehicles. Similarly, reliable operating cost estimates for the Mobile Gun System and NBC Reconnaissance vehicle will not be available until after 2006, when they are scheduled to begin full production and fielding. According to Army and OSD test reports, the tested Stryker production vehicles met operational requirements with certain limitations and, overall, support the key operational capabilities and force effectiveness of the Stryker Brigade Combat Team. The separate developmental testing schedules of the Mobile Gun System and NBC Reconnaissance vehicles have been delayed, resulting in delays in meeting planned production milestone dates.
Delay in the Mobile Gun System’s development was due in part to shortfalls in meeting performance requirements of the vehicle’s ammunition autoloader system. The NBC Reconnaissance vehicle’s development schedule was delayed pending OSD approval of an updated technology readiness assessment for the vehicle and its nuclear, biological, and chemical sensor systems. Following the Army’s completion of live-fire test and evaluation for seven production vehicles in February 2004 and its ongoing test evaluation of the eighth, the Army stated that the Stryker production vehicles met operational requirements, with limitations, and OSD approved full production. The Army’s System Evaluation Report for the Stryker production decision concluded that, overall, the Stryker family of vehicles is effective, suitable, and survivable, and supports the key operational capabilities and force effectiveness of the Stryker Brigade Combat Team. The report concluded that the Stryker production vehicle configurations met operational requirements with limitations. For example, in the area of lethality, the report noted that four Stryker vehicle configurations have a remote weapons station that provides effective protective and supporting fires for dismounted maneuver. However, limitations of the remote weapons station’s capability to provide accurate and continuous fires at night and while moving reduce its effectiveness and lethality. Similarly, while the Stryker vehicles contribute to force protection and meet survivability requirements, there are inherent and expected survivability limitations as in any armored vehicle system. Table 5 lists some of the operational requirements of the vehicles and excerpts of selected performance capabilities and limitations from the Army’s Stryker system evaluation report. The OSD Director, Operational Test and Evaluation, found that six Stryker production vehicles are operationally effective for employment in small-scale contingency operations and operationally suitable with certain limitations. OSD found that the Engineer Squad vehicle is not operationally suitable because of poor reliability. However, in its March 2004 Stryker acquisition decision, OSD determined that the operational capabilities provided by the Engineer Squad vehicle supported its continued production in light of planned fixes, operational work-arounds, and planned follow-on testing. It also determined that corrective actions are needed to address survivability and ballistic vulnerability limitations of the vehicles, such as ensuring basic armor performance and reducing exposure of Stryker personnel. Although developmental testing is ongoing, the development and testing schedule of the Mobile Gun System has been delayed, resulting in more than a 1-year delay in meeting planned production decision milestone dates, with initial limited production to start in September 2004. The delay in the Mobile Gun System’s development was due in part to shortfalls in meeting performance requirements of the vehicle’s ammunition autoloader system. At the time of our review, the Mobile Gun System was undergoing additional testing to find a fix for the autoloader, in preparation for a low-rate production decision. The Mobile Gun System is scheduled for production qualification testing through July 2004, production verification testing starting in October 2005, and live-fire test and evaluation starting in November 2005 through September 2006.
The Army’s earlier Mobile Gun System acquisition schedule was to complete developmental testing and have a low-rate initial production decision in 2003 and begin full production in 2005. Current Army plans are to buy limited quantities of Mobile Gun System vehicles upon OSD approval of low-rate initial production, planned for September 2004. A full-rate production decision for the Mobile Gun System is currently scheduled for late in 2006. The Mobile Gun System has a 105mm cannon with an autoloader for rapidly loading cannon rounds without outside exposure of its three-person crew. The principal function of the Mobile Gun System is to provide rapid and lethal direct fires to protect assaulting infantry. The Mobile Gun System cannon is designed to defeat bunkers and create openings in reinforced concrete walls through which infantry can pass to accomplish their missions. According to the Army’s Stryker Program Management Office, the autoloader system was responsible for 80 percent of the system aborts during initial Mobile Gun System reliability testing because of cannon rounds jamming in the system. As of February 2004, the Army was planning additional testing and working with the autoloader’s manufacturer to determine a solution. A functioning autoloader is needed if the Mobile Gun System is to meet its operational requirements because manual loading of cannon rounds both reduces the desired rate of fire and requires brief outside exposure of crew. In its March 2004 Stryker acquisition decision, OSD required the Army to provide changes to the Mobile Gun System developmental exit criteria within 90 days, including the ability to meet cost and system reliability criteria. Although its developmental testing is also ongoing, the development schedule of the NBC Reconnaissance vehicle has also been delayed, and its production is now scheduled to occur about 2 years later than planned. The delay was primarily due to additional time needed to develop and test the vehicle’s nuclear, biological, and chemical sensor systems. As a result, low-rate initial production, previously scheduled for December 2003, will not occur until September 2004. A full-rate production decision, which had previously been scheduled for June 2005, will not occur until July 2007. In its March 2004 Stryker acquisition decision, OSD required the Army to provide within 90 days an updated technology readiness assessment for the NBC Reconnaissance vehicle and its nuclear, biological, and chemical sensor systems. At that time, OSD will make a determination as to whether the vehicle is ready for production. Although the Army demonstrated during training events that Stryker vehicles can be transported short distances on C-130 aircraft and unloaded for immediate combat, the average 38,000-pound weight of Stryker vehicles, other cargo weight concerns, and less than ideal environmental conditions present significant challenges in using C-130s for routine Stryker transport. Similar operational limits would exist for C-130 transport of the Army’s Future Combat Systems because they are also being designed to weigh about 38,000 pounds. In addition, much of the mission equipment, ammunition, fuel, personnel, and armor a Stryker brigade would need to conduct a combat operation might need to be moved on separate aircraft, increasing the number of aircraft or sorties needed to deploy a Stryker force, adding to deployment time and the time it would take after arrival to begin operations.
Yet, the Army’s weight requirement and C-130 transport requirements for the vehicles, and information the Army provided to Congress in budget documents and testimony, created expectations that Stryker vehicles could be routinely transported by C-130 aircraft within an operational theater. In a December 2003 report on the first Stryker Brigade’s design evaluation, we reported that the Stryker Brigade demonstrated the ability to conduct tactical deployments by C-130 aircraft. At the National Training Center in April 2003, we observed the brigade conduct a tactical movement by moving a Stryker infantry company with its personnel, supplies, and 21 Stryker vehicles via seven C-130 aircraft flying 35 sorties from Southern California Logistics Airfield to a desert airfield on Fort Irwin about 70 miles away. Figure 4 shows a Stryker vehicle being offloaded from a C-130 at the National Training Center. A team from the Department of Defense’s (DOD) Office of the Director for Operational Test and Evaluation and the Army’s Test and Evaluation Command also observed the Stryker vehicles’ deployment and recorded the weight of the vehicles and the total load weight onboard the aircraft. The average weight for the eight production vehicle configurations was just less than 38,000 pounds, while the total load weight—including a 3-day supply of fuel, food, water, and ammunition—averaged more than 39,100 pounds. Table 6 shows the weights of the eight production vehicles and their total load weights recorded at the time of the April 2003 National Training Center deployment. We noted in our December 2003 report, however, that while the tactical deployment of Stryker vehicles by C-130 aircraft was demonstrated, the Army had yet to demonstrate under various environmental conditions, such as high temperature and airfield altitude, just how far Stryker vehicles can be tactically deployed by C-130 aircraft. The weight of Stryker vehicles presents significant challenges for C-130 aircraft transport because, as a general rule, U.S. Air Force air mobility planning factors specify an allowable C-130 cargo weight of about 34,000 pounds for routine flight. With most Stryker vehicles weighing close to 38,000 pounds, the distance—or range—that a C-130 aircraft could fly with one on board is significantly reduced when taking off in high air temperatures or from airfields located at higher elevations. In standard, or nearly ideal, flight conditions—such as daytime, low headwind, moderate air temperature, and low elevation—an armored C-130H with a cargo payload of 38,000 pounds can generally expect to fly 860 miles from takeoff to landing. Furthermore, according to a Military Traffic Management Command’s Transportation Engineering Agency study of C-130 aircraft transportability of Army vehicles, a C-130’s range is significantly reduced with only minimal additional weight, and ideal conditions rarely exist in combat scenarios. The C-130 aircraft’s range may be further reduced if operational conditions such as high-speed takeoffs and threat-based route deviations exist because more fuel would be consumed under these conditions. Even in ideal flight conditions, adding just 2,000 pounds onboard the aircraft for associated cargo such as mission equipment, personnel, or ammunition reduces the C-130 aircraft’s takeoff-to-landing range to 500 miles. In addition, the more than 41,000-pound weight of the Mobile Gun System would limit the C-130 aircraft’s range to a maximum distance of less than 500 miles.
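The payload and range figures cited above can be combined into a rough, back-of-the-envelope estimator. The sketch below, illustrative only and not an Air Force planning tool, linearly interpolates between the two data points the report gives for an armored C-130H in nearly ideal conditions; the linear assumption and the example payloads are ours, and real range also depends on temperature, field elevation, winds, and routing.

```python
# Rough interpolation over the payload/range figures cited above for an armored
# C-130H in nearly ideal flight conditions. Illustrative only; actual range also
# depends on temperature, airfield elevation, winds, and threat-based routing.
REPORTED_POINTS = [      # (cargo payload in pounds, takeoff-to-landing range in miles)
    (38_000, 860),       # average Stryker production vehicle
    (40_000, 500),       # Stryker plus ~2,000 lb of associated cargo
]

def estimated_range_miles(payload_lb: float) -> float:
    """Linear interpolation between the report's two cited points (a rough assumption)."""
    (p1, r1), (p2, r2) = REPORTED_POINTS
    slope = (r2 - r1) / (p2 - p1)        # about -0.18 miles of range per added pound
    return r1 + slope * (payload_lb - p1)

# Evaluating near the cited span; extrapolating far outside it is not meaningful.
for payload in (38_000, 40_000, 41_000):
    print(f"{payload:>6,} lb -> roughly {estimated_range_miles(payload):,.0f} miles")
# 41,000 lb (about the Mobile Gun System's weight) comes out well under 500 miles,
# consistent with the report's statement.
```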
Figure 5 shows the effects of cargo weight on an armored C-130H aircraft’s flight range in nearly ideal flight conditions. The addition of armor to the Strykers would pose additional challenges. With removable armor added to Strykers, the vehicles will not fit inside a C-130. To provide interim protection against rocket-propelled grenades, the Stryker vehicles of the brigade that deployed to Iraq in October 2003 were fitted with slat armor weighing about 5,000 pounds for each vehicle (see fig. 6). By 2005, the Army expects to complete the development of add-on reactive armor—weighing about 9,000 pounds per vehicle—for protection against rocket-propelled grenades. With either type of armor installed, a Stryker vehicle will not fit inside a C-130 aircraft cargo bay. Even if an armored vehicle could be loaded, the added weight of the armor would make the aircraft too heavy to take off, even in ideal flight conditions. Furthermore, according to the Army Test and Evaluation Command’s Stryker System Evaluation, in less than favorable flight conditions, the Air Force considers routine transport of the 38,000-pound cargo weight of a Stryker vehicle on C-130 aircraft risky, and such flight may not be permitted under the Air Force’s flight operations risk management requirements if other transport means are available. In two theaters where U.S. forces are currently operating—the Middle East and Afghanistan—high temperatures and elevations can reduce a C-130 aircraft’s range when it is carrying a 38,000-pound Stryker vehicle. Table 7 shows the reduced C-130 aircraft transport range due to daytime average summer temperatures of more than 100 degrees Fahrenheit in Iraq and high temperatures and elevations in Afghanistan. From two locations in Afghanistan (Bagram at 4,895 feet elevation and Kabul at 5,871 feet elevation) during daytime in the summer, a C-130 with a Stryker vehicle on board would not be able to take off at all. In winter from these same locations, its flight range would be reduced to 610 miles departing from Bagram and to 310 miles departing from Kabul. These same weight concerns would also apply to the Army’s Future Combat Systems vehicles, which according to the Army’s operational requirements should weigh no more than 38,000 pounds and be transportable by a C-130. Additionally, the Mobile Gun System, expected to weigh over 41,000 pounds, is probably too heavy to transport a significant distance via C-130 aircraft. Furthermore, the C-130 aircraft cannot transport many of a Stryker brigade’s vehicles at all. Stryker vehicles make up a little more than 300 of the over 1,000 vehicles of a Stryker brigade, and many of the brigade’s support vehicles, such as fuel trucks, are too large or heavy for C-130 transport. Because a C-130’s range is limited by weight and a Stryker’s weight exceeds limits for routine C-130 loading, a tactical movement of a Stryker brigade over a significant distance via C-130 aircraft in less than ideal conditions could necessitate moving much of the mission equipment, ammunition, fuel, personnel, and armor on separate aircraft. Such use of separate aircraft for moving Stryker vehicles and associated equipment, personnel, and supplies increases the force closure, or deployment, time and might limit the deployed force’s ability to conduct immediate combat operations upon arrival—one of the Army’s key operational requirements for the Stryker vehicles—because aircraft would arrive at different times and potentially at different locations.
In combination, a 38,000-pound Stryker vehicle and the associated equipment, personnel, or armor that would have to be transported on separate aircraft are likely to increase the number of aircraft or sorties that would be needed to deploy a Stryker force. For example, if a decision were made to use a Stryker’s add-on armor for a tactical mission, at about 9,000 pounds for each vehicle’s armor, it would take at least one additional C-130 aircraft sortie to transport the armor for about four vehicles. Or, because of potential limits on the availability of C-130 lift assets, the size of the Stryker force and the number of Stryker vehicles that could be tactically deployed would have to be reduced. At the National Training Center in April 2003, we observed, upon landing, an infantry company unload the vehicles from the C-130 aircraft, reconfigure them for combat missions, and move onward to a staging area. All Stryker variants except one reconfigured into combat-capable modes within their designated time standard. Once reconfigured, units of the Stryker brigade also demonstrated the ability to conduct immediate combat operations. However, this was a short-range movement with only seven aircraft and did not require fitting armor on the vehicles. In an operational mission, depending on the size of the Stryker force deployed, using separate C-130 aircraft for transporting vehicles and associated people and equipment could significantly increase force deployment time because of the increased number of aircraft sorties needed. Upon arrival, it would also increase the time needed to reconfigure and begin operations because the vehicles, equipment, and personnel on different aircraft might arrive at different times or at different airfield locations. In addition, if a decision were made to use add-on armor for a mission, the armor would need to be installed after arrival, adding an average of about 10 hours per vehicle in reconfiguration time. The capability of transporting Stryker vehicles on C-130 aircraft, despite its challenges and limitations, is a major objective of the Army’s transformation to a lighter, more responsive force. As such, the Army’s weight and C-130 transport requirements for the vehicles, as well as information the Army provided to Congress, created expectations that Stryker vehicles could be routinely transported within an operational theater by C-130 aircraft. For example, in several congressional hearings since 2001, senior Army leadership testified that Stryker vehicles would be capable of transport by C-130 aircraft. In addition, annual budget justifications, which the Army submits to Congress for Stryker vehicle acquisition, highlight the C-130 transport capability of Stryker-vehicle-equipped Brigade Combat Teams. During our review, Army officials acknowledged the significant challenges and limitations of meeting expectations for transporting Stryker vehicles—and, beyond 2010, the Future Combat Systems—on C-130 aircraft in terms of limited flight range, the size of the force that could be deployed, and the challenges of arriving ready for combat. The officials, however, believe that the capability to transport Stryker vehicles or the Future Combat Systems’ vehicles on C-130 aircraft, even over short distances, offers the theater combatant commanders an additional option among other modes of intratheater transportation—such as C-17 aircraft, sealift, or driving over land—for transporting Stryker brigades and vehicles in tactical missions.
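The sortie arithmetic behind the armor example earlier in this section is simple to reproduce. The sketch below uses only figures from the report (about 9,000 pounds of add-on armor per vehicle, a roughly 34,000-pound allowable C-130 cargo load for routine flight, and about 10 hours per vehicle to install the armor after arrival); it is illustrative only, and because it uses the conservative routine planning factor, four vehicles' worth of armor rounds up to two sorties here, whereas the report counts at least one added sortie for about four vehicles.

```python
# Back-of-the-envelope sortie arithmetic for moving add-on reactive armor separately,
# using figures cited in this report. Illustrative only, not an airlift planning model.
import math

ARMOR_LB_PER_VEHICLE = 9_000        # add-on reactive armor weight per Stryker
ROUTINE_CARGO_LIMIT_LB = 34_000     # Air Force planning factor for routine C-130 loads
INSTALL_HOURS_PER_VEHICLE = 10      # average reconfiguration time to install the armor

def extra_armor_sorties(vehicles: int) -> int:
    """C-130 sorties needed just to carry the add-on armor for a given number of vehicles."""
    return math.ceil(vehicles * ARMOR_LB_PER_VEHICLE / ROUTINE_CARGO_LIMIT_LB)

def install_time_hours(vehicles: int, crews: int = 1) -> float:
    """Total install time; crews working in parallel shorten the wait after arrival."""
    return vehicles * INSTALL_HOURS_PER_VEHICLE / crews

for n in (4, 21):                   # armor for ~4 vehicles; a 21-vehicle infantry company
    print(f"{n:>2} vehicles: {extra_armor_sorties(n)} extra armor sorties, "
          f"about {install_time_hours(n):.0f} crew-hours to install")
```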
In addition, the officials believe that the ability to transport elements of a Stryker brigade as small as a platoon with four Stryker vehicles—as a part of an operational mission of forces moving by other means—greatly enhances the combatant commander’s war-fighting capabilities. In less than 4 years from the November 2000 Stryker vehicle contract award, the Army is well under way in fielding the eight production vehicle configurations, and Stryker vehicles are already in use in military operations in Iraq. However, program costs have increased, largely because of the cost of military construction related to Stryker vehicle needs, and delays in developing and testing the two remaining variants will delay their fielding and use. Furthermore, although the Army has successfully demonstrated that Stryker vehicles can be transported on C-130 aircraft during training events, routine use of the C-130 for airlifting Stryker vehicles, for other than short-range missions with limited numbers of vehicles, would be difficult in theaters where U.S. forces are currently operating. Therefore, the intended capability of Stryker brigades to be transportable by C-130 aircraft would be markedly reduced. The Army’s operational requirements and information the Army provided to Congress created expectations that a Stryker vehicle weight of 38,000 pounds—and a similar weight for Future Combat System vehicles—would allow routine C-130 transport in tactical operations. Consequently, congressional decision makers do not have an accurate sense or realistic expectations of the operational capabilities of Stryker vehicles and Future Combat Systems. We recommend that the Secretary of Defense, in consultation with the Secretary of the Army and the Secretary of the Air Force, take the following two actions: 1. Provide to Congress information that clarifies the expected C-130 tactical intratheater deployment capabilities of Stryker brigades and Stryker vehicles and describes probable operational missions and scenarios using C-130 transport of Stryker vehicles that are achievable, including the size of a combat-capable, C-130-deployable Stryker force; describes operational capability limitations of Stryker brigades given the limits of C-130 transport; and identifies options for, and the feasibility of, alternative modes of transportation—such as C-17 aircraft—for transporting Stryker brigades within an operational theater. 2. Provide the Congress similar clarification concerning the operational requirements and expected C-130 tactical airlift capabilities of Future Combat System vehicles, considering the limits of C-130 aircraft transportability. In commenting on a draft of this report, the Department of Defense partially concurred with our recommendations. The department also provided technical comments, which we incorporated in the report where appropriate. DOD concurred that operational requirements for airlift capability for brigade transport need clarification and stated that the ongoing Mobility Capabilities Study, scheduled for completion in the spring of 2005, will include an assessment of the intratheater transport of Army Stryker Brigade Combat Teams and address the recommendations of this report. In responding to our recommendation to provide information to Congress concerning C-130 transport of Stryker-equipped brigades, the department partially concurred and stated that the Army has studied C-130 transportability in depth.
While we agree that the Army has studied C-130 transportability of Stryker vehicles—including the limitations that we point out in this report—the department’s comments provide no assurance that this information will be provided to Congress, and we believe Congress needs this type of information to have an accurate sense of the operational capabilities of Stryker brigades. The department also partially concurred with our recommendation to provide to Congress similar clarification concerning the operational requirements and expected C-130 tactical airlift capabilities of Future Combat System vehicles, considering the limits of C-130 aircraft transportability. The department noted in its response that the Army is currently considering many factors, including C-130 tactical airlift capability limits, as it reviews Future Combat Systems Unit of Action capability requirements. The department also stated that the Mobility Capabilities Study would include intratheater transport of Army units of action—the Army’s Future Combat Systems-equipped future force. Given the ongoing congressional interest in the implications of the Army’s requirements for C-130 transport of Stryker vehicles and Future Combat System ground vehicles, we agree that the information the Congress would need, if addressed in the Mobility Capabilities Study and provided to Congress, would meet the intent of our recommendations. With the Mobility Capabilities Study not scheduled for completion until the spring of 2005, we will assess at that time the adequacy of the study’s assessment of intratheater transport of Army Stryker- and Future Combat System-equipped units. The Senate Armed Services Committee has directed GAO to monitor DOD’s processes used to conduct the Mobility Capabilities Study and to report on the adequacy and completeness of the study to the congressional defense committees no later than 30 days after the completion of the study. The appendix contains the full text of the department’s comments. To determine the current status of Stryker vehicle acquisition and the latest Stryker vehicle program and operating cost estimates, we analyzed documents on Stryker vehicle acquisition plans, contract performance requirements, and costs and interviewed officials from the Army Program Executive Office/Stryker Program Management Office, Warren, Michigan. To determine Stryker program costs, we reviewed the DOD-approved December 2003 Selected Acquisition Report (SAR) and interviewed Stryker Program Management Office officials. For our analysis of Stryker vehicle operating costs, we reviewed the Army’s mileage cost estimates and the Army’s methodology for calculating costs per mile. We did not verify source information the Army used in its calculations. To determine the status and results of Stryker vehicle tests, we reviewed the results of Stryker vehicle developmental and survivability testing from the Army Test and Evaluation Command, Alexandria, Virginia, and the Army Developmental Test Command, Aberdeen Proving Ground, Maryland. We also reviewed the U.S. Army Test and Evaluation Command, Army Evaluation Center’s Stryker System Evaluation Report and the OSD Director, Operational Test and Evaluation’s Operational Test and Evaluation and Live Fire Test and Evaluation Report for the Stryker family of vehicles.
To determine the ability of C-130 aircraft to transport Stryker vehicles within a theater of operations, we reviewed a Military Traffic Management Command’s Transportation Engineering Agency study of the C-130 aircraft’s range and payload capabilities and interviewed U.S. Army, Air Force, and Transportation Command officials. We notified U.S. Central Command of our objective to review plans for C-130 aircraft transport of Stryker vehicles within the command’s area of operations, but Central Command officials determined that this was an Army issue, rather than a combatant command’s issue. Our review was conducted from July 2003 through June 2004 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have jurisdiction and oversight responsibilities for DOD. We are also sending copies to the Secretary of Defense and the Director, Office of Management and Budget. Copies will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8365, or Assistant Director, George Poindexter, at (202) 512-7213. Major contributors to this report were Kevin Handley, Frank Smith, and M. Jane Hunt. Defense Acquisitions: The Army’s Future Combat Systems’ Features, Risks, and Alternatives. GAO-04-635T. Washington, D.C.: April 1, 2004. Military Transformation: The Army and OSD Met Legislative Requirements for First Stryker Brigade Design Evaluation, but Issues Remain for Future Brigades. GAO-04-188. Washington, D.C.: December 12, 2003. Issues Facing the Army’s Future Combat Systems Program. GAO-03-1010R. Washington, D.C.: August 13, 2003. Military Transformation: Realistic Deployment Timelines Needed for Army Stryker Brigades. GAO-03-801. Washington, D.C.: June 30, 2003. Military Transformation: Army’s Evaluation of Stryker and M-113A3 Infantry Carrier Vehicles Provided Sufficient Data for Statutorily Mandated Comparison. GAO-03-671. Washington, D.C.: May 30, 2003. Army Stryker Brigades: Assessment of External Logistic Support Should Be Documented for the Congressionally Mandated Review of the Army’s Operational Evaluation Plan. GAO-03-484R. Washington, D.C.: March 28, 2003. Military Transformation: Army Actions Needed to Enhance Formation of Future Interim Brigade Combat Teams. GAO-02-442. Washington, D.C.: May 17, 2002. Military Transformation: Army Has a Comprehensive Plan for Managing Its Transformation but Faces Major Challenges. GAO-02-96. Washington, D.C.: November 16, 2001. Defense Acquisition: Army Transformation Faces Weapons Systems Challenges. GAO-01-311. Washington, D.C.: May 21, 2001.
In its transformation to a more responsive and mobile force, the Army plans to form 6 Stryker Brigade Combat Teams equipped with a new family of armored vehicles known as Strykers. The Stryker--which provides transport for troops, weapons, and command and control--was required by the Army to weigh no more than 38,000 pounds and to be transportable in theater by C-130 cargo aircraft, arriving ready for immediate combat operations. The Army plans to equip its future force with a new generation of vehicles--Future Combat Systems--to also be transportable by C-130s. GAO was asked to assess (1) the current status of Stryker vehicle acquisition, including the most current Stryker vehicle program and operating cost estimates; (2) the status and results of Stryker vehicle tests; and (3) the ability of C-130 aircraft to transport Stryker vehicles within a theater of operations. This report also addresses the transportability of the Army's Future Combat Systems on C-130 aircraft. The acquisition of the Stryker vehicles is about two-thirds complete, with about 1,200 vehicles of the 8 production configurations ordered and about 800 delivered to units. In addition, limited quantities of two developmental vehicles--the Mobile Gun System and the Nuclear, Biological, and Chemical Reconnaissance vehicle prototypes--have also been ordered for testing. Stryker program costs have increased about 22 percent from the November 2000 estimate of $7.1 billion to the December 2003 estimate of $8.7 billion. Total program costs include acquisition costs--procurement, research, development, and test and evaluation--as well as military construction costs related to Strykers. The Army does not yet have reliable estimates of the Stryker's operating costs because of limited peacetime use to develop data. As of June 2004, testing of the eight production Strykers was mostly complete, with the vehicles meeting Army operational requirements with limitations. However, development and testing schedules of the two developmental Strykers have been delayed, resulting in an over 1-year delay in meeting the vehicles' production milestones and fielding dates. While the Army has demonstrated the required transportability of Strykers by C-130 aircraft in training exercises, in an operational environment, the Stryker's average weight of 38,000 pounds--along with other factors such as added equipment weight and less than ideal flight conditions--significantly limits the C-130's flight range and reduces the size of the force that could be deployed. These factors also limit the ability of Strykers to conduct combat operations immediately upon arrival as required. With a similar maximum weight envisioned for the Future Combat System vehicles intended for the Army's future force, the planned C-130 transport of those vehicles would present similar challenges.
You are an expert at summarizing long articles. Proceed to summarize the following text: From the passage of the Social Security Act in 1935 until the welfare reform law of 1996, the immigration status of those lawfully admitted for permanent U.S. residence did not preclude these individuals from eligibility for welfare benefits. Welfare reform changed this by substantially restricting pre-reform and new immigrants’ access to federal means-tested benefits. Table 1 details the program eligibility changes for immigrants under the major federal welfare programs. As a result of these changes, pre-reform immigrants remain eligible for some benefits. New immigrants are ineligible for federal benefits during their first 5 years of U.S. residency, until they become naturalized citizens, or unless they have an immigration status excepted from the restrictions. The welfare reform law allows states to decide whether pre-reform immigrants retain eligibility for federal TANF and Medicaid and whether new immigrants can apply for these programs after a mandatory 5-year bar. As originally passed, the welfare reform law generally eliminated immigrants’ eligibility for SSI and food stamps. The Balanced Budget Act of 1997 reinstated SSI eligibility for pre-reform immigrants already receiving benefits and allowed pre-reform immigrants who are or become blind or disabled to apply for benefits in the future. New immigrants, however, generally cannot receive SSI and food stamp benefits unless they meet certain exceptions or become citizens. These exceptions appear in table 1, which shows that the exception of allowing benefits to those who can be credited with 40 work quarters only applies to new immigrants with 5 years of U.S. residency. The welfare reform law also specifies federal programs from which an immigrant cannot be barred. The recent legislative change has restored food stamp eligibility, effective November 1, 1998, to pre-reform immigrants receiving benefits or assistance for blindness or disability, younger than 18, or aged 65 and older as of August 22, 1996. The law also restores eligibility to certain Hmong or Highland Laotian tribe entrants lawfully residing in the United States, regardless of their date of entry, and extends the eligibility period for refugees and asylees from 5 to 7 years after entering the country. In addition to restricting immigrants’ eligibility for welfare benefits, the 1996 welfare reform law revised requirements for those sponsoring immigrants’ entry into the United States. Under welfare reform, an immigrant sponsored by a relative must have the sponsor sign an affidavit of support promising to provide financial assistance if needed. In addition, to better ensure that sponsors will be financially able to help the immigrants they have sponsored, the new law requires that sponsors have incomes equal to at least 125 percent of the federal poverty level for the number of people that they will support, including themselves, their dependents, and the sponsored immigrant and accompanying family members. Moreover, to address concerns about the enforceability of affidavits of support executed before welfare reform, the new law specifies that each affidavit must be executed as a legally binding contract enforceable against the sponsor by the immigrant, the U.S. government, or any state or locality that provides any means-tested public benefit. 
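A minimal sketch of the 125-percent sponsor income test described above follows, assuming placeholder poverty-guideline figures (the actual guidelines are published annually and vary by household size and location); only the 125-percent threshold and the way household size is counted come from the law as described in this report.

```python
# Illustrative sketch of the 125-percent sponsor income test. The poverty-guideline
# figures below are placeholders (guidelines are published annually); only the 125%
# rule and the household-size counting come from the report's description of the law.
def poverty_guideline(household_size: int,
                      base: float = 8_000, per_person: float = 2_800) -> float:
    """Placeholder guideline: a base amount plus an increment per household member."""
    return base + per_person * household_size

def sponsor_meets_income_test(sponsor_income: float,
                              sponsor_dependents: int,
                              sponsored_immigrants: int) -> bool:
    # Household size counts the sponsor, the sponsor's dependents, and the
    # sponsored immigrant(s) and accompanying family members.
    household_size = 1 + sponsor_dependents + sponsored_immigrants
    threshold = 1.25 * poverty_guideline(household_size)
    return sponsor_income >= threshold

# Example: a sponsor with 2 dependents sponsoring 1 relative.
print(sponsor_meets_income_test(sponsor_income=32_000,
                                sponsor_dependents=2,
                                sponsored_immigrants=1))
```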
The affidavit is enforceable until the sponsored immigrant naturalizes, is credited with 40 work quarters, permanently leaves the country, or dies. In addition to requiring legally enforceable affidavits, the law extends a sponsor’s responsibility to support immigrants by lengthening the time a sponsor’s income is attributable to a new immigrant if the immigrant applies for welfare benefits. Some federal programs previously mandated this attribution, called deeming; however, the sponsor’s income was generally included for only the first 3 or 5 years of an immigrant’s residency. The law now requires states to deem a sponsor’s income in federal means-tested programs until the immigrant becomes a citizen or can be credited with 40 work quarters. The welfare reform law also gives states the option of adding deeming requirements to state and local means-tested programs. The new support and deeming requirements are intended to ensure that immigrants rely on their sponsors rather than public benefits for aid, that the sponsors have the financial capacity to provide aid, and that sponsors are held accountable for helping immigrants they have agreed to support. This way, unless a sponsor suffers a financial setback, an immigrant should be less likely to need or receive public benefits. In addition, the welfare reform law requires states to implement new procedures to verify an alien’s status when determining eligibility for federal public benefits. The states have 2 years after the Immigration and Naturalization Service (INS) issues final regulations to ensure that their verification procedures comply with the regulations. The procedures include verifying individuals’ status as citizens or aliens, which information the states use in determining individuals’ eligibility for federal public welfare benefits, including grants, contracts, or loans provided by a federal agency or appropriated U.S. funds. INS responds to inquiries by federal, state, and local government agencies seeking to verify or determine citizenship or immigration status. Almost all states decided to continue providing TANF and Medicaid benefits for pre-reform immigrants and to provide these benefits to new immigrants after 5 years of U.S. residency. Fewer states offer assistance comparable with TANF and Medicaid to new immigrants during the mandatory 5-year federal bar. Some of these state programs, however, limit benefits to certain categories of immigrants or impose certain requirements such as living in the state for 12 months before applying for benefits. States have the option of continuing TANF and Medicaid benefits to pre-reform immigrants and providing these benefits to new immigrants after 5 years of U.S. residency. Almost all states and the District of Columbia are continuing TANF for both groups. Forty-nine states and the District of Columbia are continuing federal Medicaid benefits for these immigrants. Wyoming is the only state to discontinue Medicaid for all immigrants. Immigrants no longer eligible for the full scope of Medicaid benefits, however, continue to be eligible for emergency services under Medicaid. About a third of the states provide state-funded temporary assistance to needy families, medical assistance, or both to new immigrants during their 5-year bar from federal programs. Six of the 10 states where most immigrants reside provide assistance to those no longer eligible for TANF and Medicaid. 
California, Maryland, Massachusetts, and Washington provide both state-funded cash and medical assistance, while New Jersey and Virginia provide medical assistance. Some of these state programs impose deeming requirements similar to the federal program rules and state residency requirements. In addition, some states restrict medical assistance to immigrant children, pregnant women, or to those in residential care before a specific date. Maryland, for example, provides medical assistance to pregnant women and children, and Virginia provides benefits to children. In the states we visited, we observed a range of these types of benefits available to immigrants. California, where more than 35 percent of the nation’s immigrants live, provides both state-funded cash and medical assistance to new immigrants during their 5-year bar from federal benefits. New Jersey provides state-funded medical assistance to new immigrants, although it has proposed changes to state legislation to limit the scope of medical assistance benefits to emergency services only. In Washington, new immigrants may obtain state-funded cash or medical assistance after meeting a 12-month residency requirement and the state-imposed federal deeming requirements. Washington state officials noted that the state included the residency requirement to address concerns about attracting immigrants from other states and becoming a welfare magnet state for immigrants. Before welfare reform, SSI provided a monthly cash benefit to needy individuals who were aged, blind, or disabled whether they were immigrants or citizens. Although welfare reform ultimately retained SSI eligibility for most pre-reform immigrants, it barred new immigrants from receiving SSI benefits until they become citizens or are categorized as excepted from the restrictions. Few states are replacing SSI benefits with new state-funded programs; however, many states have cash assistance programs available to those no longer eligible for SSI. The Social Security Administration (SSA) prepared to terminate benefits for almost 580,000 immigrants before the welfare reform law was amended to continue SSI benefits for pre-reform immigrants already on the rolls and to provide benefits in the future for those pre-reform immigrants who are or become blind or disabled. Pre-reform immigrants not already receiving SSI will no longer qualify for benefits solely on the basis of advanced age. Approximately 20,000 pre-reform noncitizens, however, do not meet the law’s definition of “qualified alien” and will therefore lose their SSI benefits in 1998 unless they adjust their immigration status to an eligible class. According to SSA, the noncitizens scheduled to lose their benefits were categorized as Permanently Residing Under the Color of Law (PRUCOL). Although few states are providing state-funded benefits to specifically replace SSI benefits, most states have general assistance programs through which some immigrants who have lost SSI and those who are no longer eligible may obtain aid. General assistance is one of the largest structured state or local programs providing assistance to the needy on an ongoing basis. According to a 1996 Urban Institute report, 41 states or localities within those states and the District of Columbia, including the 10 states where most immigrants reside, provided such programs. Under welfare reform, however, states have the option of limiting the eligibility of immigrants for state-funded public benefits, including general assistance.
(The nine states without state or local general assistance programs are Alabama, Arkansas, Louisiana, Mississippi, Oklahoma, South Carolina, Tennessee, West Virginia, and Wyoming. See State General Assistance Programs - 1996, Urban Institute (Washington, D.C.: Oct. 1996); the information for that report was gathered before the passage of the welfare reform law.) General assistance benefits are generally lower than federal cash assistance and vary by state in the populations served. In California, where counties fund and administer these programs, benefits range from $212 to $345 per month, which is considerably lower than the average monthly SSI benefit of $532 for California’s immigrants. In addition, the groups of individuals who may apply for general assistance range from all financially needy people to needy families with children and the disabled, elderly, unemployable, or a combination of these groups. In Washington, immigrants ineligible for SSI who are 18 or older and incapable of gainful employment for at least 90 days may receive assistance through the state’s General Assistance-Unemployable program; however, new immigrant children with disabilities who might have been eligible for SSI under previous law are ineligible for this program. These benefits, which average $339 per month in Washington, are less than the state’s average SSI benefit of $512 per month. On the basis of our analysis of information compiled by the National Immigration Law Center, few states have programs to specifically replace SSI benefits for new immigrants. Two states, Hawaii and Nebraska, offer state-funded benefits to disabled, blind, and elderly immigrants specifically to replace SSI benefits to which they are no longer entitled. Colorado offers cash assistance to elderly immigrants no longer eligible for SSI. With the continuation of TANF, Medicaid, and SSI benefits to pre-reform immigrants, the largest federal benefit loss for most immigrants is the termination of food stamps. At the time of our review, some states had created state-funded programs that were replacing benefits for about one-quarter of those estimated to no longer be eligible for federal food stamps nationwide. Fewer states offer such benefits to new immigrants. States’ responses to the most recent legislative change restoring eligibility to some of the pre-reform immigrants are not yet known. This group of immigrants consists mostly of children, the disabled, and the elderly—those groups who were most often targeted in the state-funded programs. Besides funding replacement food assistance programs, many states have increased funding for emergency food providers such as food banks. The states and immigrant advocacy groups contacted for our prior study, however, expressed concern that the limited emergency food assistance may be insufficient to meet the needs of immigrants who lost their eligibility for food stamps. The year following welfare reform, an estimated 940,000 of the 1.4 million immigrants receiving food stamps lost their eligibility for benefits, according to the U.S. Department of Agriculture (USDA). Those no longer eligible would have otherwise received about $665 million in federal food stamps during fiscal year 1997. Almost one-fifth of those no longer eligible were immigrant children. USDA determined that most of those who remained eligible did so because they became citizens or met the exception of having 40 or more work quarters. The most recent legislation (P.L.
105-185) restores federal food stamp eligibility, effective November 1, 1998, to 250,000—mostly children, the disabled, and the elderly—of the estimated 820,000 immigrants no longer eligible for food stamps in fiscal year 1999, according to USDA. About 70 percent of the 820,000 immigrants remain ineligible for food stamps. At the time of our review, 14 states representing almost 90 percent of immigrants nationwide receiving food stamps in 1996 were replacing food stamp benefits with state-funded benefits to a portion of immigrants no longer eligible. State appropriations for these programs totaled almost $187 million for 1998. Eight states are purchasing federal food stamps, four states are issuing food stamp benefits through their electronic benefit transfer (EBT) system, and two states developed their own food voucher or cash assistance programs. Most of these programs’ benefit levels and eligibility criteria, with the exception of immigrant status, reflect the federal Food Stamp program and were implemented immediately after federal benefit terminations on September 1, 1997. According to our 1997 survey, the majority of the remaining states are not replacing or are not planning to replace the terminated food stamp benefits for legal immigrants. Table 2 provides more detailed information on these programs. Instead of setting up an entirely new state food assistance program, Washington was the first of eight states to contract with USDA to purchase federal food stamps with state funds. A provision in the Emergency Supplemental Appropriations Act of 1997 (P.L. 105-18) made it possible for the states to purchase federal food stamp coupons to provide nutrition assistance to individuals, including immigrants, made ineligible for federal food stamps. According to Washington state officials, allowing the states to purchase federal coupons saves the states the expense of creating their own voucher programs and makes the program more seamless to recipients and grocery store merchants. States are required to pay USDA the value of the benefits plus the costs of printing, shipping, and redeeming the coupons. The majority of the states replacing lost federal food stamps, however, allow eligibility only to certain immigrant categories. According to state-reported participation rates, about one-quarter of immigrants who no longer qualify for federal food stamps participate in state-funded food assistance programs. Most of these state programs target immigrants generally considered most vulnerable, such as children under age 18, the disabled, and the elderly—those aged 65 and older. California, with the largest population of immigrants, chose to provide state-funded food stamps to pre-reform immigrants younger than 18 or those aged 65 and older—about 56,000 of the estimated 151,700 immigrants whose federal benefits were terminated. The state-funded food stamp programs generally target the same groups whose eligibility for federal food stamp benefits has been restored. States’ responses to the restoring of these benefits, such as changing eligibility for state-funded programs, are unknown at this time. Like most pre-reform immigrants, new immigrants are also restricted from receiving federal food stamps. Currently, 6 of the 14 states with food stamp replacement programs—Connecticut, Florida, Maryland, Massachusetts, Minnesota, and Washington—allow eligibility to some new immigrants. Two of these states, however, limit food assistance to those living there as of 1997. 
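The restoration figures above reduce to simple arithmetic, and the coupon-purchase arrangement has an equally simple cost structure. The sketch below is illustrative only: the ineligibility and restoration counts come from the USDA figures cited in the report, while the administrative dollar amounts in the purchase example are placeholders (the report describes only the structure of the payment: benefit value plus printing, shipping, and redemption costs).

```python
# Quick check of the restoration figures cited above, plus a sketch of the payment
# structure for states that purchase federal food stamp coupons. Administrative
# amounts are placeholders; only the structure comes from the report.
ineligible_fy1999 = 820_000
restored_nov_1998 = 250_000

still_ineligible = ineligible_fy1999 - restored_nov_1998
print(f"{still_ineligible:,} immigrants ({still_ineligible / ineligible_fy1999:.0%}) remain ineligible")
# -> 570,000 immigrants (70%) remain ineligible, matching the USDA estimate cited above.

def state_cost_for_purchased_coupons(benefit_value: float,
                                     printing: float, shipping: float, redemption: float) -> float:
    """States pay USDA the value of the benefits plus issuance-related costs."""
    return benefit_value + printing + shipping + redemption

print(f"${state_cost_for_purchased_coupons(1_000_000, 4_000, 2_500, 6_000):,.2f}")
```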
At the time of our review, officials in these states could not determine how many of the immigrants receiving state-funded benefits were new immigrants. Although most states have no program specifically designed to replace federal food stamps for immigrants, they do provide temporary food assistance through emergency programs and local food banks or pantries. For example, the states match a level of federal funds for emergency food providers through The Emergency Food Assistance Program (TEFAP). Many states, anticipating the increased demand for food assistance by immigrants, increased funding to food banks and emergency food providers. Colorado, for example, appropriated $2 million in 1998 for a new program to provide emergency assistance, including food, to immigrants. In addition to state-funded efforts, one locality we reviewed was providing funds to local food banks. In 1997, San Francisco added $186,000 to the local food bank budget to set up three or four new food distribution sites in highly populated immigrant communities. Immigrants no longer eligible for federal food stamp benefits received notices by mail about these new distribution centers, telling them to present their letters at one of the distribution sites to receive food on a weekly basis. Local officials told us that the food supply would last recipients 3 to 5 days. According to our 1997 study, some localities are working with local organizations to plan for the expected increase in the need for food assistance. Organization officials fear their resources may be insufficient to meet the needs of individuals no longer eligible for food stamps. These officials do not believe their organizations can replace the long-term assistance that federal food stamps provided. Furthermore, in a study conducted by the U.S. Conference of Mayors, most surveyed cities reported that immigrants’ requests for emergency food assistance increased by an average of 11 percent in the first half of 1997. Although concerns exist about the impact of benefit restrictions for immigrants, such as the discontinuance of food stamps, no major monitoring efforts are required or planned in the states we visited or at the federal level. Moreover, a recent study for the U.S. Commission on Immigration Reform found that the states with large immigrant populations had no comprehensive plans for monitoring the impact of welfare reform eligibility changes on immigrants. In addition, many immigrant advocacy groups we interviewed expressed concern about states’ and localities’ ability to meet immigrants’ income, food, and medical needs. Some advocacy groups noted they were conducting studies to measure the impact of federal restrictions on those affected. In addition to the federal and state programs already discussed, at least 12 states help immigrants through statewide naturalization assistance programs, according to information from the National Immigration Law Center. Helping immigrants gain citizenship offers them the ability to keep or obtain eligibility for federal benefits and reduces state spending on immigrants’ benefits. Even with state-provided assistance, the naturalization process takes time and, according to INS, the number of applications continues to increase. Naturalization assistance ranges from providing referrals to community services to offering classes in preparation for naturalization and financial assistance with the $95 application fee.
Anticipating the restrictions for immigrants under welfare reform, New Jersey allocated $4 million for 1997 and 1998, which was matched by private funds, for its naturalization outreach program. New Jersey’s program includes English and civics classes, legal assistance with applications, and help with medical waivers for exemption from citizenship or language testing. Washington, which also began naturalization efforts before welfare reform, boosted funding for its program to $1.5 million per year for state fiscal years 1998 and 1999. Program services include helping immigrants with completing naturalization applications, paying application fees, and providing educational services. Since fiscal year 1998, the state reports an average of 1,200 individuals participating in the program each month. In addition, two of the localities we visited—Seattle and San Francisco—also established naturalization programs to assist immigrants, especially those affected by the loss of federal benefits. Though states and localities have naturalization programs, officials administering these programs expressed concern about the length of time it takes to process citizenship applications. In the three cities we visited, immigrants applying for naturalization had to wait up to 3 years before completing the process. According to INS, the average time for processing naturalization applications is more than 2 years nationwide. In some of the nation’s cities with the largest immigrant populations, the waiting time varies: it takes more than a year and a half in New York City, almost 3 years in Los Angeles, and more than 5 years in Miami. In addition, INS reported significant increases in the number of naturalization applications, from 423,000 in 1989 to more than 1.2 million in fiscal year 1996. INS officials cited the benefits that immigrants would gain from their citizenship among the reasons they expect the number of applications to remain high. The eligibility changes under welfare reform for immigrants expanded states’ administrative responsibilities and added financial responsibilities for those states choosing to provide replacement benefits. Due to these changes, the states will be revising procedures and automated systems to meet the new requirements for verifying an immigrant’s eligibility for welfare benefits. Although some states have concerns about correctly implementing these new requirements, federal agencies neither require nor plan special monitoring efforts for determining if the states are correctly determining eligibility. In addition to the challenges all states face, those providing state-funded programs face challenges obtaining future funding and managing the different eligibility rules and funding streams of both federal and state programs. Implementing the new restrictions required the states and localities to educate welfare workers and immigrant recipients about the eligibility changes and to recertify the eligibility of immigrant recipients. Program officials in the states we visited noted that completing the recertifications was time consuming. States’ more recent and future challenges include implementing the new alien status verification requirements—verifying the citizenship or immigration status of applicants for all federal public benefits, implementing the new sponsor deeming requirements, and enforcing affidavits of support for immigrants sponsored by family members. 
Officials in the states we visited anticipated making changes to their automated systems or encountering additional work to implement the new verification procedures or develop separate eligibility determination processes to reflect new distinctions among programs. With the new restrictions, states need more information on alien status for making eligibility determinations. Until INS issues the final regulations, the states can follow the interim INS verification guidelines. States will have 2 years after final regulations are issued to ensure that their verification systems comply with the regulations. According to INS, either proposed or interim regulations will most likely be issued in July 1998. States will face the challenge of modifying their procedures and automated systems for determining citizenship or alien status before making eligibility determinations for federal programs. According to the American Public Welfare Association, the states must modify their software programs to address the differing eligibility criteria under welfare reform. In addition, several officials in the states we reviewed reported that it takes additional steps and time for caseworkers to verify the alien status of immigrants applying for benefits and to determine or recertify their eligibility for federal programs. Officials often noted the potential for confusion in making accurate eligibility decisions, prompting concerns about providing benefits to those eligible and denying benefits to those who no longer qualify. Although concerns exist about correctly implementing welfare restrictions for immigrants, federal agencies neither require nor plan special monitoring efforts for determining if the states are correctly determining immigrants’ eligibility for benefits. At the time of our review, federal officials for the Medicaid, SSI, and Food Stamp programs told us that errors in providing benefits to ineligible immigrants could be detected in their quality control reviews. HHS officials commented that TANF program rules require no quality control reviews, and the only method they would have for monitoring immigrant restrictions, such as the length of time an individual receives TANF benefits, is through TANF’s annual single state audit. USDA officials reported that several states did not implement the new food stamp restrictions for immigrants by the required time. USDA billed one state for the amount of federal food stamp benefits provided to immigrants after the restrictions were to have been implemented. By January 1998, USDA officials indicated that as far as they knew all states had fully implemented the food stamp restrictions for immigrants. Issues that the states will face in the future include implementing the new deeming requirements and enforcing the affidavits of support. At the time of our review, the states we visited were waiting for federal or state guidance on implementing these requirements and were uncertain about how they would enforce the new affidavits of support. Welfare reform allows federal, state, and local agencies to seek reimbursement for benefits provided to sponsored immigrants; however, some officials expressed concern about the possible difficulties of locating sponsors who may have moved without reporting a change of address to the INS. 
The new affidavits of support have been in use since December 19, 1997, for new immigrants and for those whose alien status is changing on or after that date as, for example, from temporary residency to lawfully admitted for permanent residence. As a result of the welfare reform law, states faced major decisions on whether to provide assistance to immigrants no longer entitled to federal benefits. States that chose to provide state-funded assistance to immigrants face some long-term challenges funding and implementing these programs. Officials in the states we reviewed cautioned us that future funding for new state programs is uncertain. Although currently approved, funding for programs was appropriated for only a limited time—ranging from 1 to 2 years in the states we reviewed—and passed during favorable economic times. In New Jersey, for example, the state-funded food stamp program was funded through June 30, 1998, and the state needs to pass legislation to continue the program. California officials reported that although funding for state-provided medical assistance, food stamps, and TANF is not a pressing issue now, future funding is somewhat uncertain. They said the continuation of these state-funded programs depends on the state’s economy and on legislative decisions. The states we reviewed reported determining and tracking the fiscal claims for state and federal funds in parallel programs as an implementation challenge. Implementing state-funded food stamp programs, for example, requires states to track and report to USDA the separate federal and state food stamp issuances. In addition, some state officials reported that determining eligibility and calculating separate federal and state benefit amounts for “mixed” households—those with members who are citizens and immigrants—is challenging. A mixed household could have a new immigrant mother and a citizen child who are receiving food stamps and cash and medical assistance funded separately by federal and state dollars. Washington state officials noted that to some extent they can calculate separate benefit amounts and funding sources because their new computer system is designed to track this information. California officials reported they would have to reprogram their automated systems to identify and track costs of benefits provided to immigrants through federal and state programs. California counties manually tracked immigrants receiving benefits under certain programs until the programming changes were completed. The welfare reform law represents a significant shift of responsibility for decisions about aiding needy immigrants from the federal government to the states. Federal policy now gives the states much latitude in restricting immigrants’ eligibility for welfare programs. States’ welfare policies vary in their treatment of both pre-reform and new immigrants, according to our review. For many immigrants, the extent of assistance provided will depend on state policies and other assistance available at the local level. For those federal benefits that the states could choose to continue, almost all states did so. For those federal benefits that were terminated, many states chose to provide state-financed benefits for at least some part of this population. Few states, however, completely replaced lost federal benefits for either pre-reform or new immigrants. Some local programs, including food banks, already report an increased need for food assistance due to the welfare reform restrictions for immigrants. 
Our work reviewed the significant changes prompted by welfare reform in its early stages—changes affecting immigrants, including both those immigrating before and after the passage of the law and those considering future immigration. The states are focusing their welfare assistance efforts on immigrants living in the United States before welfare reform and have not yet focused much attention on the possible needs of new immigrants. In addition, the states’ choices about providing additional benefits to immigrants, whether pre-reform or new, were made during favorable economic times and could change during less prosperous times. Furthermore, how federal, state, and local agencies will enforce the new affidavits of support is unknown. In general, it is too soon to measure the long-term impact of welfare reform on immigrants and immigration. In commenting on a draft of this report, HHS took no exception with the report findings, and USDA generally agreed with the findings and observations. Their comments are included in appendixes II and III, respectively. USDA also noted the recent enactment of legislation that restores eligibility for federal food stamp benefits to approximately 250,000 legal immigrants beginning in November 1998, which the report discusses. In addition, USDA stated that it is too early to know the extent to which states operating state-funded food assistance programs will continue their programs for those noncitizens in need of food assistance who remain ineligible for federal benefits. We agree that it is too early to know how the states will respond to this new legislation. HHS and USDA also provided technical comments, which we incorporated as appropriate. We also provided copies of a draft to SSA, INS of the Department of Justice, and the states of California, New Jersey, and Washington. They provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of USDA and HHS and the Commissioners of SSA and INS. We will also make copies available upon request. If you or your staff have any questions about this report, please contact Gale C. Harris, Assistant Director, at (202) 512-7235, or Suzanne Sterling, Senior Evaluator, at (202) 512-3081. Other major contributors to this report are Elizabeth Jones, Deborah Moberly, and Julian Klazkin. This appendix summarizes information on the benefits available to needy immigrants in the locations we visited: San Francisco County, California; Essex County, New Jersey; and Seattle, Washington. The information reflects the states’ actions before enactment of P.L. 105-185 (signed into law in June 1998) that will restore, effective November 1, 1998, federal food stamp eligibility for some pre-reform immigrants. According to INS, as of April 1996 California had about 3.7 million or 35 percent of immigrants living in the United States and ranked as the state with the largest immigrant population. Besides continuing to provide Temporary Assistance for Needy Families (TANF) and Medicaid benefits to immigrants, California is funding a food stamp program for some of those who lost federal benefits. In addition to the state programs available, San Francisco County provides immigrants with food assistance through local food banks, cash benefits through general assistance, and naturalization assistance through community-based organizations. California chose to provide TANF—through the state’s CalWORKS program—to immigrants regardless of their date of entry into the country. 
In May 1997, the immigrant caseload of 199,381 accounted for almost 22.5 percent of California’s total TANF caseload, according to state estimates. At an average grant of $192 a month, it would cost the state over $178,000 a month to provide state-funded cash assistance to the 931 eligible new immigrant families it estimated would enter California between August 22, 1996, and December 31, 1997. In addition to TANF-comparable benefits, California is providing Medicaid or comparable medical assistance—through its Medi-Cal program—to immigrants regardless of their date of entry, which has increased state spending and prompted changes to state and county data systems to track costs. California officials estimate that about 2,797 new immigrants, or 20 percent of new immigrants, will apply for Medi-Cal benefits each month. On the basis of this estimate, by the year 2001, an additional 168,000 immigrants would apply for the state-funded Medi-Cal benefits. California does not fund statewide assistance specifically to replace SSI benefits; however, counties must have general assistance programs. These benefits may be available for nondisabled pre-reform immigrants who are not already receiving SSI and for new immigrants who are no longer eligible for SSI. San Francisco County, for example, provides general assistance of up to $345 per month to immigrants no longer eligible for SSI, an amount lower than the average SSI benefit for immigrants of $532 per month. According to a study conducted in San Francisco County, for each elderly and disabled immigrant no longer eligible for federal assistance on the basis of immigration status who receives general assistance or some form of local cash assistance, the city and county will incur an additional annual cost of between $4,140 and $7,800 per person. If SSI benefits had not been restored, San Francisco estimated that it would have cost the city and county as much as $31 million to provide general assistance to an estimated 7,500 immigrants during the first fiscal year after the termination of SSI benefits. The state created the California Food Assistance Program for Legal Immigrants to provide food stamps to certain categories of pre-reform immigrants. The state-funded food stamps provide these immigrants with the same amount of benefits as those previously received under the federal program and are available to those pre-reform immigrants younger than 18 and aged 65 and over. The program, which is authorized to operate through July 1, 2000, received appropriations of $35.6 million for fiscal year 1998. Begun on September 1, 1997, the program replaces lost federal food stamps for about 56,000 of the 151,700 pre-reform immigrants who lost their federal benefits, according to state estimates. New immigrants are not eligible for state-funded food assistance; however, some local food assistance is available, officials said. Although San Francisco County explored the possibility of providing a food stamp program for those no longer eligible for federal or state food stamps, such as adults under age 65, it has not established such a program. The county, however, provided additional funding of $186,000 to a local food bank to increase purchases and add three or four new distribution centers targeted to reach immigrants no longer eligible for food stamps. These immigrants received notice by mail of the new centers and were told to present their letters at the distribution centers to receive food, which they may claim on a weekly basis.
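The cost and caseload estimates above can be reproduced from the monthly figures cited. The following is a rough arithmetic check, assuming the Medi-Cal projection spans roughly the 60 months between the law's August 1996 enactment and 2001 (a time frame the report does not state explicitly):

\[
\begin{aligned}
931 \text{ families} \times \$192 \text{ per family per month} &= \$178{,}752 \quad (\text{reported as ``over \$178,000 a month''}) \\
2{,}797 \text{ applicants per month} \times 60 \text{ months} &\approx 167{,}800 \quad (\text{reported as an additional 168,000 Medi-Cal applicants by 2001})
\end{aligned}
\]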
To increase immigrants’ use of the distribution centers, the county and the local food bank are also planning to provide more culturally appropriate foods. California has no statewide naturalization assistance program; however, selected counties and localities in the state provide some assistance. Thirty-five of the state’s 58 counties provide some naturalization assistance. San Francisco County formed the Naturalization Project to provide assistance targeted to the most vulnerable of the immigrant population—those expected to lose SSI before its retention and those scheduled to lose federal food stamp benefits. The goals of the project—comprised of a coalition of city and county government departments, community-based organizations, senior services providers, schools, colleges, private businesses, foundations, and concerned citizens—are to substantially expand service capacity; guarantee responsive, individualized high-quality services; and create a structured network of community services by leveraging all available public, private, and community resources. Funding for this project includes a grant of over $1 million from a private foundation for 1997. According to INS, as of April 1996 New Jersey had approximately 462,000 or over 4 percent of immigrants living in the United States, making it the state with the fifth largest immigrant population. Along with choosing to continue TANF and Medicaid benefits for pre-reform immigrants and to provide these benefits to new immigrants after the federal 5-year bar, New Jersey devised a new state-funded food stamp program to replace lost federal benefits and a statewide naturalization assistance program. In addition to these state-level programs, Essex County provides some food assistance to its immigrants through local food pantries and soup kitchens. New Jersey chose to continue TANF benefits for pre-reform immigrants and to provide these benefits to new immigrants following the federal 5-year bar. The Work First New Jersey program, which is administered at the county level, provides these benefits. New Jersey combined its TANF and general assistance programs in January 1997 to form the Work First New Jersey program. The state provides no state-funded cash assistance to new immigrants during the 5-year federal bar. New Jersey provides Medicaid to pre-reform and new immigrants following the 5-year bar. In addition, the state provides funding for Medicaid-comparable assistance to new immigrants during the 5-year federal bar. The state, however, plans to reduce the medical benefits available to new immigrants to emergency services only. According to New Jersey officials, the state must pass legislation to change the current state law, which requires full medical benefits for all individuals, including immigrants. New Jersey officials also noted that an estimated 2,000 noncitizens no longer eligible for federal Medicaid assistance because they did not meet the new qualifications in the welfare reform law, such as Permanently Residing Under the Color of Law (PRUCOL), were receiving state-funded medical assistance. When the state passes legislation, 1,900 of these individuals’ medical assistance benefits will be reduced to cover only emergency services. In addition to providing Medicaid and state-funded medical assistance, the state funds several hospitals to treat indigent individuals, including immigrants, through New Jersey’s Charity Care program. 
Along with the TANF portion of the Work First New Jersey program discussed, the general assistance portion of the program provides benefits to single adults or childless couples. Certain noncitizens who remain in the country legally, such as PRUCOLs, but no longer meet the eligibility criteria for federal programs may be eligible for general assistance. They may receive benefits until they can apply for naturalization, as well as for an additional 6 months after they apply, which was the time estimated for completing the naturalization process. State officials were unsure, however, whether the 6-month restriction would be enforced because the average naturalization processing time in New Jersey now is much longer than the 6-month estimate. New immigrants are barred from receiving Work First New Jersey benefits during the first 5 years of residency in the country. The benefit level of general assistance provided through Work First New Jersey averages $140 per month for employable individuals and $210 per month for unemployable individuals; both rates are lower than the average monthly SSI benefit of $515.25. New Jersey created the State Food Stamp program in August 1997 to provide benefits for certain categories of pre-reform immigrants who lost their federal food stamps—those younger than 18, aged 65 and over, or who are disabled. This program, which was created by an executive order of the state’s governor, provided $15 million for contracting with USDA to purchase federal food stamp benefits for this population through June 1998. However, as of June 12, 1998, legislation was pending to continue the state-funded food stamp benefits beyond this time. The legislation would also expand eligibility to include those between the ages of 18 and 65 who have at least one child under 18. The program’s eligibility criteria and benefit levels mirror the federal program’s, with the exception of not requiring citizenship. In addition, the program requires participants to apply for citizenship within 60 days of their eligibility to do so. New Jersey officials originally estimated that 17,000 immigrants lost their federal food stamp benefits due to welfare reform changes; however, as of February 1998, officials reported that the program was providing state-funded benefits to about 5,700 immigrants. Although new immigrants are ineligible for state-funded food assistance, all immigrants are eligible to receive food assistance through local food pantries and soup kitchens statewide. New Jersey provided funding for a statewide naturalization assistance program run through a coalition of 31 service providers in the Immigration Policy Network. The program began providing assistance in January 1997 with $2 million in state funds and $2 million in private funds. The project initially targeted those immigrants expected to lose SSI benefits before they were reinstated. Later in the year, the project was expanded with an additional $2 million in public funds and $2 million in private funds to provide assistance to those immigrants scheduled to lose federal food stamps. Services provided through the program include English language and civics classes, legal assistance with applications, and assistance with medical waivers for exemption from citizenship or language testing. As of February 1998, about 4,200 individuals participating in the program had completed naturalization applications. The program is scheduled to continue through December 1998. 
According to INS, as of April 1996 approximately 174,000 or about 2 percent of immigrants in the United States lived in the state of Washington, making it the state with the 10th largest immigrant population. Anticipating the federal restrictions under welfare reform, the governor proposed programs that would treat immigrants in need the same as citizens. Besides continuing to provide TANF and Medicaid benefits for pre-reform immigrants and providing these benefits to new immigrants following the 5-year bar, Washington devised several new state-funded programs to replace lost federal benefits and provides naturalization assistance as well. In addition to these state programs, Seattle created its own naturalization assistance program for immigrants and refugees losing federal and state benefits. Washington chose to continue TANF benefits for pre-reform immigrants and to provide these benefits to new immigrants after the 5-year bar. In November 1997, the state began providing state-funded cash assistance for new immigrants during the federal 5-year bar. Immigrants are eligible to apply for these benefits after living in the state for 12 months. With the exception of not requiring citizenship, the state-funded program applies the same eligibility and deeming rules as the TANF program and offers the same level of benefits. As of February 1998, approximately 230 immigrant families were receiving state-funded cash assistance at a monthly cost to the state of about $74,000. Washington provides Medicaid benefits to pre-reform and new immigrants following the 5-year bar. In August 1997, the state began providing state-funded medical assistance to new immigrants during the federal 5-year bar if they met the requirements to be considered categorically needy. Like the state-funded cash assistance program, the state medical assistance program requires a residency period of 12 months. With the exception of not requiring citizenship, the program applies the same eligibility criteria and deeming rules as the federal program and offers the same level of benefits. As of December 1997, a total of 389 immigrants participated in the program at a cost to the state of approximately $5,200 for that month. In addition to this state-funded medical assistance, some new immigrants may receive additional state or local medical assistance during their 5-year bar. The types of assistance available include medical care services for incapacitated, aged, blind, or disabled people determined eligible for general assistance; emergency medical services; and services for pregnant women and children not eligible for the state medical assistance program. Washington provides general assistance benefits for some new immigrants who are no longer eligible for SSI. Immigrants who are 18 and older and incapable of gainful employment for at least 90 days can apply for the state’s General Assistance-Unemployable program. This program provides an average monthly benefit of $339, which is less than the average monthly SSI benefit of $512. In 1997, Washington created the Food Assistance program to provide state-funded food stamp benefits for pre-reform and new immigrants no longer eligible for federal food stamps. At the state’s initiative, Washington was the first of eight states to contract with USDA to purchase federal food stamps. The eligibility criteria and benefit levels mirror the federal program’s, with the exception of not requiring citizenship. The state program began with a budget of $65 million for fiscal years 1998 and 1999.
The state estimated that the program would serve approximately 38,363 immigrants in 1998; however, state officials mentioned that this estimate did not account for those immigrants who became citizens or qualified for federal benefits due to an exception such as being credited with 40 work quarters. As of January 1998, the program was serving about 14,800 immigrants at a cost to the state of approximately $1.7 million for that month. Washington’s naturalization assistance program, which began before welfare reform, targets its assistance to those immigrants expected to lose federal benefits. For fiscal years 1998 and 1999, funding for the program totaled approximately $1.5 million per year. According to state officials, an average of 1,200 immigrants participated in the program each month since July 1997. Washington officials estimate that over 70 percent of the participants complete their classes and file a citizenship application. Services provided through the program include help with completing applications, payment of citizenship application and photograph fees, and training courses to help them pass citizenship exams. Seattle also provides several services for immigrants through its naturalization program—the New Citizen Initiative. Begun in 1996, the program is administered by the city’s Department of Housing and Human Services in partnership with the Seattle Public Library and a consortium of community-based organizations. The program provides a variety of services for immigrants, including a naturalization information clearinghouse, and prioritizes its services for immigrants who are elderly, disabled, or have inadequate language and literacy skills. The city has funded this initiative with $500,000 for fiscal years 1998 and 1999, and private organizations are providing an additional $200,000 in funding. Program officials estimate that assistance will be provided to between 500 and 800 immigrants during 1998.
Pursuant to a congressional request, GAO reviewed Title IV of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 and the impact its restrictions would have on immigrant children and their families, focusing on: (1) the options states chose regarding Temporary Assistance for Needy Families (TANF) and Medicaid benefits for immigrants and state-funded assistance available to new immigrants during the 5-year bar; (2) for restricted federal programs, Supplemental Security Income (SSI), and food stamps, the number of immigrants, including children, whose federal benefits have been terminated, and the state-funded assistance available to them; and (3) the major implementation issues and challenges state agencies face in administering the provisions changing welfare assistance to immigrants. GAO noted that: (1) although the states could have dropped immigrants from their welfare rolls, most states have chosen to provide some welfare benefit to part of this population; (2) nearly all states have chosen to continue providing federal TANF and Medicaid benefits to pre-reform immigrants and to provide these benefits to new immigrants after 5 years of U.S. residency; (3) about a third of the states use state funds to provide similar benefits to some new immigrants during the 5-year bar; (4) among these states are 6 of the 10 where most immigrants live--2 states provide state-funded medical assistance and 4 states provide both state-funded cash and medical assistance; (5) with the states' continuation of TANF and Medicaid benefits to pre-reform immigrants and the retention of these immigrants' SSI benefits, the greatest economic impact of welfare reform for most of these immigrants is the loss of federally funded food stamp benefits; (6) after the implementation of the food stamp restrictions, an estimated 940,000 immigrants receiving food stamps in 1997 lost eligibility for receiving them; (7) almost one-fifth of this group consisted of immigrant children; (8) at the time of GAO's review, 14 states had created state-funded food stamp programs serving about a quarter of this immigrant group nationwide--primarily children, the disabled, and the elderly; (9) fewer states, however, offer state-funded food stamps to new immigrants; (10) the most recent legislation will restore food stamp eligibility to an estimated 250,000 immigrants, mostly children, the disabled, and the elderly, the same groups targeted by state-funded food stamp programs; (11) states' responses to the restoring of these benefits, such as changing eligibility for state-funded programs, are unknown at this time; (12) with the implementation of the welfare reform restrictions for immigrants, states and local governments face added responsibilities; (13) states' future challenges include verifying the citizenship or immigration status of applicants for all federal public benefits and enforcing affidavits of support for new immigrants sponsored by relatives; (14) the states GAO visited anticipated major systems changes and other additional work to implement the new verification procedures; (15) furthermore, states choosing to provide assistance to immigrants no longer eligible for federal benefits are uncertain about future funding for these programs; and (16) these states also face additional challenges managing funding streams and determining eligibility for federal and state programs.
You are an expert at summarizing long articles. Proceed to summarize the following text: In light of the prominent hazing incidents previously noted, Congress, in the National Defense Authorization Act for Fiscal Year 2013, directed that each Secretary of a military department (and the Secretary of Homeland Security in the case of the Coast Guard) submit a report on hazing in each Armed Force under the jurisdiction of the Secretary. Specifically, Congress specified that each Armed Force report include, among other things, an evaluation of the hazing definition contained in an August 1997 Secretary of Defense policy memorandum on hazing, a discussion of its policies for preventing and responding to incidents of hazing, and a description of the methods implemented to track and report incidents of hazing in the Armed Forces, including methods for reporting anonymously. In response, each service provided reports to Congress in May and July 2013 addressing the requirements of the Act. For example, the Navy, the Marine Corps, and the Coast Guard concurred with DOD’s 1997 definition of hazing. To address all behaviors that involve mistreatment in a single policy, the Army recommended revising the hazing definition to include bullying. The Air Force recommended the hazing definition be revised to better align with the hazing definitions used by the states because DOD’s broader definition risked creating the perception that hazing is a larger problem in the military than it actually is according to the civilian understanding of hazing. The Coast Guard also noted in its report to Congress that it developed its policy to reflect the provisions contained in DOD’s hazing policy. With respect to the feasibility of establishing a database to track, respond to, and resolve incidents of hazing, the Army report stated that existing databases and legal tracking systems are sufficient for tracking hazing incidents. The Navy reported that although it has a tracking database in use, a comprehensive database for all services may be beneficial in combating hazing. The Marine Corps report stated that the Marine Corps currently uses a service-wide database for tracking and managing all allegations of hazing. The Air Force report stated that it will examine the costs and benefits of establishing a database to track, respond to, and resolve hazing incidents once a common definition and data elements are developed. The Coast Guard stated that existing systems provide adequate management of hazing incidents. Lastly, in response to the requirement to provide any recommended changes to the Uniform Code of Military Justice (UCMJ) or the Manual for Courts-Martial, the Army, Navy, Marine Corps, and Air Force reports stated that they supported inserting a provision in the Manual for Courts-Martial discussion section of Article 92 of the UCMJ that would enable incidents of hazing to be charged as violations of Article 92 (violation of or failure to obey a lawful general order or regulation). All of the armed services agreed that a separate enumerated offense of the UCMJ for hazing would be duplicative. In addition, in May 2012, the House Appropriations Committee Report accompanying the DOD Appropriations Bill, 2013, expressing concern about reports of hazing in the armed services, directed the Secretary of Defense to provide a report to the Committee on the incidence of hazing, harassment, and mistreatment of servicemembers, as well as a review of the policies to prevent and respond to alleged hazing incidents.
In response to this requirement, and in addition to the service reports, in September 2013, the Under Secretary of Defense for Personnel and Readiness provided a report to Congress that summarized the armed services’ reports. In addition, the report noted that DOD commissioned the RAND Corporation to conduct a study that would include an assessment of the 1997 definition of hazing and a subsequent recommendation on a DOD definition of hazing, as well as an evaluation of the feasibility of establishing a DOD-wide database to track hazing incidents, common data elements, and requirements to include in the revision of the 1997 policy memorandum for uniformity across the services. There is no specific article under the UCMJ that defines and prohibits hazing. However, since at least 1950, hazing has been punishable under various punitive articles included in the UCMJ such as Article 93, Cruelty and Maltreatment. To constitute an offense under Article 93, the accused must be cruel toward, or oppress, or maltreat a victim who is subject to his or her orders. Depending on the individual facts and circumstances of the case, hazing could also be charged under other punitive articles, such as Article 128, Assault. Commanders have multiple options to respond to allegations of hazing in their units. After receiving a hazing complaint, commanders or other authorities must promptly and thoroughly investigate the allegation, according to the DOD policy. If the allegation is unsubstantiated, the case is typically dropped. If the investigation substantiates the allegations, the commander must take effective and appropriate action, which may include adverse administrative action, non-judicial punishment, court-martial, or no action, among others. An allegation that is initially deemed substantiated does not necessarily result in punishment for the offender because a servicemember could be found not guilty at non-judicial punishment or court-martial, among other reasons. While we have not reported on hazing in the military since 1992, we have issued multiple reports and made numerous recommendations related to DOD’s and the Coast Guard’s efforts to prevent and respond to the sometimes correlated issue of sexual assault. In particular, our March 2015 report on male servicemember victims of sexual assault reported that hazing incidents may cross the line into sexual assault. We noted that service officials and male servicemembers at several military installations gave us examples of recent incidents involving both hazing and sexual assault. We found that a series of hazing incidents may escalate into a sexual assault and that service officials stated that training on hazing-type activities and their relationship to sexual assault would be particularly beneficial to males in that it might lead to increased reporting and fewer inappropriate incidents. Among other things, we recommended that DOD revise its sexual assault prevention and response training to more comprehensively and directly address how certain behavior and activities, such as hazing, can constitute sexual assault. DOD concurred with this recommendation, but did not state what actions it planned to take in response. The National Defense Authorization Act for Fiscal Year 2016 subsequently included a provision requiring the Secretary of Defense, in collaboration with the Secretaries of the Military Departments, to develop a plan for prevention and response to sexual assaults in which the victim is a male servicemember.
This plan is required to include sexual assault prevention and response training to address the incidence of male servicemembers who are sexually assaulted and how certain behaviors and activities, such as hazing, can constitute a sexual assault. Each of the military services has issued policies to address hazing incidents among servicemembers consistent with DOD’s 1997 hazing policy. However, DOD does not know the extent to which these policies have been implemented because the military services, with the exception of the Marine Corps, have not conducted oversight by regularly monitoring policy implementation. The Coast Guard has issued a policy to address hazing incidents, but it likewise has not conducted oversight by regularly monitoring policy implementation. In addition, the military services’ hazing policies are broad and servicemembers may not have enough information to determine whether instances of training or discipline may be considered hazing. In August 1997, the Secretary of Defense issued a memorandum on DOD’s policy that defined and provided examples of what did and did not constitute prohibited hazing conduct. DOD’s policy also specified that commanders and senior noncommissioned officers would promptly and thoroughly investigate all reports of hazing and that they would take appropriate and effective action on substantiated allegations. Further, it required the Secretaries of the Military Departments to ensure that DOD’s hazing policy was incorporated into entry-level enlisted and officer military training, as well as professional military education. Coast Guard officials told us that the Department of Homeland Security (DHS) has not issued any hazing-related policy applicable to the Coast Guard, and DHS officials confirmed that no such policy had been issued, though as we discuss further in this report, the Coast Guard issued policies that reflect DOD’s 1997 hazing policy. From 1997 through 2014, each of the military services issued or updated applicable policies to reflect DOD’s position on hazing and its requirements for addressing such incidents. The military services updated their policies for various reasons, such as implementing tracking requirements or defining and prohibiting bullying along with hazing. The Coast Guard also issued a policy during this timeframe that, as noted in its 2013 report to Congress on hazing, mirrors the policy developed by DOD. Each of the services made their policies punitive so that a violation of the military service regulation could also be charged under the UCMJ as a violation of Article 92, Failure to obey an order or regulation. More recently, in December 2015 DOD issued an updated hazing and bullying memorandum and policy, which among other things included an updated definition of hazing, defined bullying, and directed the secretaries of the military departments to develop instructions to comply with the memorandum. Figure 1 provides additional details on the timeline of DOD, military service, and Coast Guard hazing policies and relevant congressional actions since 1997. The Coast Guard issued a policy in 1991 that required hazing awareness training. Each of the military services’ policies (1) include the same or a similar definition of hazing as the one developed by DOD, (2) require that commanders investigate reported hazing incidents, and (3) direct that all servicemembers receive training on the hazing policy. 
Though not required, the Army, the Navy, and the Marine Corps hazing policies contain guidance and requirements that supplement several key provisions in DOD’s policy. For example, in addition to the examples of hazing included in DOD’s policy, the Army’s 2014 regulation update explicitly prohibits hazing via social media or other electronic communications, and makes a distinction between hazing and bullying, which it also prohibits. Further, the Army’s, the Navy’s, and the Marine Corps’ hazing policies and guidance include requirements for commanders and senior noncommissioned officers beyond the general investigative and disciplinary responsibilities specified by DOD. Specifically, the Army’s regulation requires its commanders to seek the counsel of their legal advisor when taking actions pursuant to the hazing policy. Navy policy on reporting hazing incidents directs all commands to submit reports of substantiated hazing incidents for tracking by the Navy’s Office of Hazing Prevention. The Marine Corps’ order requires commanding officers to report both substantiated and unsubstantiated hazing incidents to Marine Corps headquarters. In October 1997, the Air Force reissued the Secretary of Defense’s memorandum and DOD’s hazing policy with a cover letter from the Chief of Staff of the Air Force that underscored that hazing is contrary to good order and discipline, that it would not be tolerated, and that commanders and supervisors must stay engaged to ensure that hazing does not occur within the Air Force. Regarding training, the Army’s, the Navy’s, and the Marine Corps’ policies supplement DOD’s requirement that the topic of hazing be incorporated into entry-level enlisted and officer training and Professional Military Education. Specifically, the Army’s hazing regulation requires that commanders at a minimum conduct hazing awareness training on at least an annual basis as part of the Army’s Equal Opportunity training requirements. The Department of the Navy’s instruction requires that hazing awareness training be incorporated into leadership training and commander’s courses, and the Marine Corps’ order includes similar requirements, adding that hazing awareness training also be included in troop information programs and in unit orientation. By including the DOD hazing policy, the Air Force memorandum includes the training requirements specified by DOD, and an Air Education and Training Command policy requires annual hazing awareness training within Air Force training units. In September 2011, the Coast Guard updated its Discipline and Conduct Instruction to include its policy prohibiting hazing. As previously noted, the Coast Guard’s instruction mirrors guidance set forth in a 1997 Secretary of Defense Policy Memorandum, including DOD’s definition of hazing and examples of what does and does not constitute prohibited hazing conduct. Like DOD’s policy, the Coast Guard’s instruction also specifies that commanders who receive complaints or information about hazing must investigate and take prompt, effective action and are to incorporate hazing awareness training into the annual unit training. While similar in some respects, the Coast Guard’s hazing instruction contains guidance and requirements that go beyond the policy issued by DOD. For example, in addition to a requirement to investigate alleged incidents, the Coast Guard’s policy identifies penalties that may result from hazing that, depending on the circumstances, range from counseling to administrative discharge procedures. 
Further, the Coast Guard’s instruction also requires that a discussion about hazing be incorporated into existing recruit, officer, and leadership training curricula. The Army, the Navy, and the Marine Corps hazing policies state that servicemembers should report hazing complaints within the chain of command, such as to their commander. The Army’s regulation also states that servicemembers may report hazing complaints to law enforcement or the inspector general. The Coast Guard’s hazing instruction states that every military member—to include victims of or witnesses to actual or attempted hazing—must report such incidents to the appropriate level within the chain of command. Headquarters officials from each military service and the Coast Guard told us that servicemembers may report hazing complaints through existing channels, such as the commander, law enforcement, inspector general, or the equal opportunity office, among others. In some cases these channels may be independent of or above the level of their commands, such as an inspector general at a higher level than their own command’s inspector general. In other cases, such as an equal opportunity advisor in their own command, the reporting channel would not be independent of the command. These officials said that in most cases, there are means to report hazing complaints anonymously to many of these channels, such as anonymous inspector general hotlines. In addition, because hazing can be associated with rites of passage and traditions, the Army, the Navy, and the Marine Corps—either in their policies or through supplemental guidance—permit command-authorized rituals, customs, and rites of passage that are not cruel or abusive, and require commanders to ensure that these events do not include hazing. The Army’s policy states that the chain of command will ensure that traditional events are carried out in accordance with Army values, and that the dignity and respect of all participants is maintained. A quick reference legal handbook issued by the Department of the Navy provides guidance to Navy and Marine Corps commanders for conducting ceremonies and traditional events as part of its section on hazing prevention. Although the Air Force instruction on standards does not specifically address traditions and customs, according to officials in the Air Force Personnel Directorate office, commanders are responsible for ensuring the appropriateness of such observances. During a site visit to Naval Base Coronado, we met with the commander of the USS Carl Vinson, who issued local guidance that was more specifically tailored to a particular event or ceremony under his command. Prior to a recent ‘crossing the line’ ceremony—marking the first time a sailor crosses the equator or the international dateline—the commander of the USS Carl Vinson issued formal guidelines for conducting the ceremony that designated oversight and safety responsibilities, listed permissible and non-permissible activities, and noted that participation was voluntary. Specifically, among other things the guidance stated that servicemembers may perform a talent show, provided that it does not include sexually suggestive props, costumes, skits, or gags. The guidance also stated that servicemembers that do not wish to participate in the events may opt out and that non-participants are not permitted to observe the ceremony or any related activities. 
The Coast Guard’s hazing instruction permits command-authorized rituals, customs, and rites of passage that are not cruel or abusive, and requires commanders to ensure that these events do not include hazing. Specifically, the Coast Guard’s hazing instruction states that traditional ceremonies, including Chief’s Initiations and equator, international dateline, and Arctic and Antarctic Circle crossings, are authorized, provided that commands comply with governing directives when conducting such ceremonies. The instruction further states that commanding officers shall ensure these events do not include harassment of any kind that contains character degradation, sexual overtones, bodily harm, or otherwise uncivilized behavior. In its 2013 report to Congress, DOD said that it would develop an update to the 1997 policy memorandum on hazing, to be followed by an instruction outlining its hazing policy. The Office of the Under Secretary of Defense for Personnel and Readiness in 2013 formed a hazing working group, led by the Office of Diversity Management and Equal Opportunity (ODMEO), to update DOD’s hazing policy. The updated policy was issued as a memorandum in December 2015. The updated policy distinguishes between hazing and bullying and includes a hazing and bullying training requirement, among other things. With the issuance of the memorandum, the officials said they will begin working, through the hazing working group, on a DOD instruction on hazing that will replace the updated memorandum. DOD and the Coast Guard do not know the extent to which hazing policies have been implemented because—with the exception of policy compliance inspections conducted by the Marine Corps—DOD, the military services, and the Coast Guard do not conduct oversight by regularly monitoring the implementation of their hazing policies. Standards for Internal Control in the Federal Government states that management designs control activities that include the policies, procedures, techniques, and mechanisms that enforce management’s directives to achieve an entity’s objectives. Although most service policies designated implementation responsibilities, DOD, the military services, and the Coast Guard generally do not know the extent or consistency with which their policies have been implemented because—with the exception of the inspections conducted by the Marine Corps—they have not instituted headquarters-level mechanisms to regularly monitor policy implementation, such as by collecting local command data on hazing policy implementation or conducting site inspections to determine the extent to which the policies have been implemented, among other things. DOD’s 2013 report to Congress on hazing stated that prevention of hazing is under the purview of the Under Secretary of Defense for Personnel and Readiness. However, DOD has not conducted oversight by regularly monitoring the implementation of its hazing policy by the military services, and it has not required that the military services regularly monitor the implementation of their hazing policies. Likewise, the Coast Guard has not required regular headquarters-level monitoring of the implementation of its hazing policy. We reviewed each of the military services’ hazing policies and found that the Army, the Navy, and the Marine Corps policies specify some implementation responsibilities. Specifically, the Army’s hazing regulation states that commanders and supervisors at all levels are responsible for its enforcement.
However, according to an official in the Army office that developed the Army’s hazing policy, there is no service-wide effort to oversee the implementation of the hazing regulation. The Navy’s instruction designates commanders and supervisors as responsible for ensuring that all ceremonies and initiations in their organizations comply with the policy. The Navy’s instruction also identifies the Chief of Naval Operations as being responsible for ensuring that the hazing policy is implemented. However, officials in the Navy’s office that develops hazing policy said there is no service-wide effort to specifically oversee implementation of the hazing policy. The Marine Corps’ order designates the Deputy Commandant for Manpower and Reserve Affairs, the Commanding General, and the Marine Corps Combat Development Command, as well as commanding officers, and officers-in-charge as being responsible for policy implementation. In addition, the Marine Corps reported conducting regular inspections of command implementation of the Marine Corps hazing policy as a means of overseeing service-wide implementation of its hazing policy. The Air Force’s hazing policy does not contain specific designations of responsibility. However, the Air Force policy memorandum states that commanders and supervisors must stay engaged to make sure that hazing doesn’t occur in the Air Force and the Air Force instruction on standards states that each airman in the chain of command is obligated to prevent hazing. As with the Army and Navy, the Air Force hazing policy memorandum does not include requirements to regularly monitor policy implementation across the service. The Coast Guard’s hazing instruction generally identifies training centers, commanders, and Coast Guard personnel as being responsible for its implementation. Specifically, the instruction specifies that training centers are responsible for incorporating hazing awareness training into curricula administered to different levels of personnel. In addition to their investigative responsibilities, the instruction also states that commanding officers and supervisors are responsible for ensuring that they administer their units in an environment of professionalism and mutual respect that does not tolerate hazing of individuals or groups. Lastly, the instruction charges all Coast Guard personnel with the responsibility to help ensure that hazing does not occur in any form at any level and that the appropriate authorities are informed of any suspected policy violation. However, the Coast Guard reported that it has not regularly monitored hazing policy implementation. An official in the Army’s Equal Opportunity office stated that although its office has responsibility for hazing policy, the office has not been tasked with, and thus has not developed, a mechanism to monitor implementation of its policy. However, the official acknowledged that it could be helpful to have more information on the extent to which elements of such policies are being incorporated by its commands and at its installations. The official added that ways to do this could include collecting and reviewing data from commands on policy implementation, or conducting inspections, though the official noted that inspections would require additional resources. 
Officials in the Navy’s Office of Behavioral Standards stated that the responsibility for compliance with the hazing policy is delegated to the command level, with oversight by the immediate superior in command, but our review found that the Navy did not have a mechanism to facilitate headquarters-level monitoring of hazing policy implementation. In contrast, the Marine Corps Inspector General, in coordination with the Marine Corps Office of Manpower and Reserve Affairs, conducts service-wide inspections to determine, among other things, whether the provisions of the Marine Corps’ hazing policy are being implemented consistently and to ensure that commands are in compliance with the requirements of the hazing policy. Marine Corps Inspector General officials told us that the Marine Corps Inspector General has inspected command programs to address hazing since June 1997, with the initial issuance of the Marine Corps’ hazing order. Specifically, the Inspector General checks command programs against a series of hazing-related items, such as whether the command includes hazing policies and procedures in its orientation and annual troop information program and whether the command has complied with hazing incident reporting requirements. These inspections do not necessarily cover all aspects of hazing policy implementation. For example, Marine Corps Inspector General officials told us they do not consistently review the content of training materials, although they do review training rosters to verify that servicemembers have received hazing training. However, the inspections provide additional information to Marine Corps headquarters officials on the implementation of hazing policy by commands. Marine Corps Manpower and Reserve Affairs officials also told us that they will begin consistently reviewing training content after they standardize the training. Marine Corps Inspector General officials stated that at the local level, command inspectors general complete compliance inspections every two years, and the Marine Corps headquarters inspector general assesses local command inspectors general every three years to ensure they are effectively inspecting subordinate units. The Marine Corps headquarters inspector general also inspects, every two years, those commands that do not have their own inspectors general. According to the Office of the Marine Corps Inspector General, commanders are required to provide the Inspector General—within 30 days of its report—a plan for addressing any findings of non-compliance with the hazing policy. Further, a Marine Corps Manpower and Reserve Affairs official said that when commands are found to be out of compliance with the policy, officials conducting the inspections will assist them in taking steps to improve their hazing prevention program. Marine Corps officials told us that in the past 24 months, 3 of 33 commands inspected by the Marine Corps Inspector General were found to have non-mission-capable hazing prevention programs. They added that not having a mission-capable program does not necessarily indicate the existence of a hazing problem in the command. A Marine Corps Inspector General official said that local inspectors general may re-inspect commands within 60 days, and no longer than the next inspection cycle, to ensure they have made changes to comply with the hazing policy.
An official from the Air Force Personnel Directorate stated that oversight is inherent in the requirement to comply with policy and that any violations would be captured through the regular investigative, inspector general, and equal opportunity processes, and potentially the military justice process. The official also added that it is ultimately a commander’s responsibility to ensure policy compliance. However, the Air Force has not established a mechanism that monitors implementation to help ensure commanders are consistently applying the policy. Similarly, officials from the Coast Guard’s Office of Military Personnel, Policy and Standards Division stated that they have not instituted a mechanism to monitor implementation of the Coast Guard’s hazing policy. During site visits to Naval Base Coronado and Marine Corps Base Camp Pendleton, we conducted nine focus groups with enlisted servicemembers and found that they were generally aware of some of the requirements specified in DOD’s and their respective service’s policies on hazing. For example, enlisted personnel in all nine focus groups demonstrated an understanding that hazing is prohibited and generally stated that they had received hazing awareness training. In addition, during our site visit to Naval Base Coronado, servicemembers in one focus group said that prior to a recent ceremony aboard the USS Carl Vinson, the ship’s commander provided all personnel aboard with command-specific guidance and training to raise their awareness of hazing. At Marine Corps Base Camp Pendleton, we identified multiple postings of hazing policy statements throughout various commands. We are encouraged by the actions taken at these two installations and we understand that there is a general expectation for commanders and other leaders in the military services and in the Coast Guard to help ensure compliance with policy. In addition, we note that the Marine Corps has implemented a means of monitoring hazing policy implementation throughout the service. However, without regular monitoring by DOD of the implementation of its hazing policy by the services, and without regular monitoring by all of the services of the implementation of their hazing policies, DOD and the military services will be unable to effectively identify issues and, when necessary, adjust their respective approaches to addressing hazing. Likewise, without regular monitoring by the Coast Guard of the implementation of its hazing policy, the Coast Guard will be unable to effectively identify issues and make adjustments to its approach to addressing hazing when necessary. As previously noted, DOD and military service policies generally define hazing and provide examples of prohibited conduct. However, based on our review of these policies, meetings with officials, and focus groups with servicemembers, we found that the military services may not have provided servicemembers with sufficient information to determine whether specific conduct or activities constitute hazing. According to the Standards for Internal Control in the Federal Government, management establishes standards of conduct, which guide the directives, attitudes, and behaviors of the organization in achieving the entity’s objectives. 
Each of the military services has defined hazing and provided training on the definition to servicemembers, but may not have provided sufficient clarification to servicemembers to help them make distinctions between hazing and generally accepted activities in the military, such as training and extra military instruction. To help servicemembers recognize an incident of hazing, DOD and military service policies provide a definition of hazing and include examples of rituals for servicemembers to illustrate various types of prohibited conduct. As noted previously, from 1997 to December 2015 DOD defined hazing as any conduct whereby a servicemember, without proper authority, causes another servicemember to suffer, or be exposed to any activity which is, among other things, humiliating or demeaning. According to this definition, hazing includes soliciting another to perpetrate any such activity, and can be verbal or psychological in nature. In addition, consent does not eliminate the culpability of the perpetrator. DOD’s 1997 hazing policy also listed examples such as playing abusive tricks; threatening violence or bodily harm; striking; branding; shaving; painting; or forcing or requiring the consumption of food, alcohol, drugs, or any other substance. The policy also noted that this was not an inclusive list of examples. Likewise, DOD’s revised December 2015 hazing definition includes both physical and psychological acts, prohibits soliciting others to perpetrate acts of hazing, states that consent does not eliminate culpability, and gives a non-inclusive list of examples of hazing. Headquarters-level officials from each military service stated that under the hazing definition a great variety of behaviors could be perceived as hazing. For example, Army officials said the definition encompasses a wide range of possible behaviors. Likewise, Marine Corps officials said that based on the definition included in its order, any activity can be construed as hazing. At our site visits, servicemembers in each focus group, as well as groups of non-commissioned officers, noted that perception plays a significant role in deciding whether something is hazing or not—that servicemembers may believe they have been hazed because they feel demeaned, for example. To distinguish hazing from other types of activities, DOD (in its 1997 hazing memorandum) and military service policies also provide examples of things that are not considered to be hazing, including command-authorized mission or operational activities, the requisite training to prepare for such missions or operations, administrative corrective measures, extra military instruction, command-authorized physical training, and other similar activities that are authorized by the chain of command. However, as DOD noted in its 2013 report to Congress on hazing, corrective military instruction has the potential to be perceived as hazing. DOD noted that military training can be arduous, and stated that hazing prevention education should distinguish between extra military instruction and unlawful behavior. DOD also stated that the services should deliberately incorporate discussion of extra military instruction, including proper administration and oversight, in contrast with hazing as part of prevention education. Conversely, a superior may haze a subordinate, and servicemembers therefore need to be able to recognize when conduct by a superior crosses the line into hazing.
To raise awareness of hazing, each service has developed training that provides a general overview of prohibited conduct and the potential consequences. However, the training materials we reviewed did not provide servicemembers with information to enable them to identify less obvious incidents of potential hazing, such as the inappropriate or demeaning use of otherwise generally accepted corrective measures such as extra military instruction. Nor did the training materials we reviewed include the information necessary to help servicemembers recognize an appropriate use of corrective measures. Specifically, the training materials generally focused on clear examples of hazing behaviors and did not illustrate where accepted activities such as training and discipline can cross the line into hazing. For example, the Army administers hazing awareness training for use at all levels that provides servicemembers with the definition of hazing and information about the circumstances under which hazing may occur, as well as a list of activities that are not considered hazing. However, our review found that the Army’s training materials do not provide information to servicemembers about how to make consistent determinations about whether an activity should be considered hazing, such as in cases that may resemble permitted activities. Likewise, the Navy’s training is designed to empower sailors to recognize, intervene, and stop various behaviors such as hazing that are not aligned with the Navy’s ethos and core values. However, our review found that the training focuses on intervening when an incident of hazing has occurred and does not include information to help servicemembers discern, for example, when a permissible activity is being used in an impermissible manner. The Marine Corps’ hazing awareness training is locally developed, and the examples of training materials we reviewed provide an overview of the definition of hazing, examples of acts that could be considered hazing similar to those delineated in the Marine Corps order governing hazing, and a list of potential disciplinary actions that could arise from a violation of the hazing order, among other things. However, our review found that the training materials do not provide servicemembers with information on activities that are not considered hazing, such as extra military instruction, or the necessary information to differentiate between permissible and non-permissible activities. In its 2013 report to Congress on Hazing in the Armed Forces, DOD similarly acknowledged that it can be difficult to distinguish between corrective measures and hazing and noted that the services should incorporate a discussion of extra military instruction, to include proper administration and oversight, in contrast with hazing as part of prevention education. During our site visits to Naval Base Coronado and Marine Corps Base Camp Pendleton, three groups of non-commissioned officers reinforced the suggestion that hazing definitions are not sufficiently clear to facilitate a determination of which activities and conduct constitute hazing. The non-commissioned officers we met with generally agreed that the broad definition of hazing deters them from effectively doing their jobs, such as disciplining servicemembers, taking corrective action, or administering extra military instruction, for fear of an allegation of hazing.
For example, non-commissioned officers during one site visit said that a servicemember need only say “hazing” to prompt an investigation. During another site visit, a non-commissioned officer described one hazing complaint in which the complainant alleged hazing because the complainant’s supervisor had required that the complainant work late to catch up on administrative responsibilities. Although this complaint was later found to be unsubstantiated, the allegation of hazing required that resources be devoted to investigate the complaint. In addition, some noncommissioned officers we met with stated that they were concerned that the use of extra military instruction may result in an allegation of hazing. In our focus groups, enlisted servicemembers—over the course of both site visits—provided a range of possible definitions for hazing that further demonstrated the different interpretations of what constitutes prohibited conduct. For example, some defined hazing only in physical terms, whereas others recognized that hazing can be purely verbal or psychological as well. Some servicemembers believed that an incident would not be hazing if the servicemembers consented to involvement in the activity, although DOD and service policies state that actual or implied consent to acts of hazing does not eliminate the culpability of the perpetrator. In addition, consistent with the concerns expressed by some of the non-commissioned officers that we interviewed, servicemembers in two focus groups stated that they may perceive extra military instruction as hazing. By contrast, unit commanders and legal officials at one site visit stated that they believe that the existing definition of hazing provides supervisors with sufficient latitude to address misconduct. Standards for Internal Control in the Federal Government states that management establishes expectations of competence for key roles, and other roles at management’s discretion. Competence is the qualification to carry out assigned responsibilities, and requires relevant knowledge, skills, and abilities. It also states that management should internally communicate the necessary quality information to achieve the entity’s objectives. Without a more comprehensive understanding among servicemembers of the conduct and activities that warrant an allegation of hazing, servicemembers may not be able to effectively distinguish, and thus effectively identify and address, prohibited conduct. The Army, the Navy, and the Marine Corps track data on reported incidents of hazing. However, the data collected and the methods used to track them vary, and the data are therefore not complete and consistent. The Air Force does not have a method of specifically tracking hazing incidents, and the data it has generated on hazing incidents is also therefore not necessarily complete, or consistent with the other military services’ data. Likewise, the Coast Guard does not have a method of specifically tracking hazing incidents, and the data it has generated on hazing incidents is therefore not necessarily complete. Although it is difficult to determine the total number of actual hazing incidents, the military services’ data may not effectively characterize reported incidents of hazing because, for the time period of data we reviewed, DOD had not articulated a consistent methodology for tracking hazing incidents, such as specifying and defining common data collection requirements. 
As a result, there is an inconsistent and incomplete accounting of hazing incidents both within and across these services. Standards for Internal Control in the Federal Government state that information should be recorded and communicated to management and others who need it in a form and within a time frame that allows them to carry out their internal control and other responsibilities. In the absence of DOD-level guidance on how to track and report hazing incidents, the Army, the Navy, and the Marine Corps developed differing policies on hazing data collection and collected data on hazing incidents differently. For example, until October 2015 the Army only collected data on cases investigated by criminal investigators and military police, whereas the Navy collected data on all substantiated hazing incidents reported to commanders, and the Marine Corps collected data on both substantiated and unsubstantiated incidents. The Air Force and the Coast Guard hazing policies do not include a similar requirement to collect and track data on hazing incidents. In the absence of DOD guidance, the Air Force has taken an ad hoc approach to compiling relevant information to respond to requests for data on hazing incidents, and in the absence of Coast Guard guidance on tracking hazing incidents, the Coast Guard has also taken an ad hoc approach to compiling hazing data. For example, the Air Force queried its legal database for cases using variants of the word “hazing” to provide information on hazing incidents to Congress in 2013. Table 1 illustrates some of the differences in the services’ collection of data on hazing incidents and the total number of incidents for each service as reflected in the data for the time period we reviewed. However, due to the differences noted, data on reported incidents of hazing are not comparable across the services. Until September 2015, the Army’s primary tracking method for alleged hazing incidents was a spreadsheet maintained by an official within the Army’s Criminal Investigation Command, which included data on alleged hazing incidents that were recorded in a database of cases investigated by either military police or Criminal Investigation Command investigators, according to officials in the Army’s Equal Opportunity office. However, use of this database as the primary means of tracking hazing incidents limited the Army’s visibility over reported hazing incidents because it did not capture allegations handled by other Army offices, such as cases that are investigated by the chain of command or by the office of the inspector general. Data on hazing incidents through September 2015 are therefore not complete or consistent with the data from the other military services. Beginning in October 2015, the Army began to track hazing and bullying incidents in its Equal Opportunity Office’s Equal Opportunity Reporting System, but Army Equal Opportunity officials told us that they continue to have difficulties obtaining all needed information on hazing cases due to limitations in their ability to obtain information on hazing cases from commanders. The Navy requires that commands report all substantiated hazing incidents by sending a report to the headquarters-level Office of Behavioral Standards, where the information is entered into a spreadsheet that contains service-wide data received on reported hazing incidents. 
Officials in the Navy’s Office of Behavioral Standards told us that they encourage commanders to also report unsubstantiated incidents, but this is at the commanders’ discretion. The data on unsubstantiated incidents are therefore not necessarily comparable with those of services that require the collection and tracking of data on unsubstantiated incidents. Furthermore, as a result of the different types of data that are collected, reported numbers of hazing incidents may not be consistently represented across the services. Since May 2013, the Marine Corps has required that commanders coordinate with their local Equal Opportunity Advisor to record substantiated and unsubstantiated allegations of hazing in the Marine Corps’ Discrimination and Sexual Harassment database. While the Marine Corps’ tracking method is designed to capture all hazing allegations of which a unit commander is aware, we found that the methods used by the service to count cases, offenders, and victims have not been consistent. For example, our analyses of these data identified inconsistencies over time in the method of recording hazing cases. Specifically, we found that in some instances, a reported hazing incident involving multiple offenders or victims was counted as a separate case for each offender-victim pair. In other instances, the incident was counted as a single case even when it involved multiple offenders or victims. So, for example, an incident involving 2 alleged offenders and 4 alleged victims was counted as 8 cases, and another with 3 alleged offenders and 3 alleged victims was counted as 9 cases. On the other hand, we found an example of a case with 4 alleged offenders and 1 alleged victim being counted as a single case, and another with 2 alleged offenders and 2 alleged victims counted as a single case. The Marine Corps’ recording of incidents is therefore neither internally consistent nor consistent with that of the other military services. As previously noted, the Air Force does not require that data be collected or tracked on reported incidents of hazing, which has complicated its ability to efficiently provide data on hazing incidents when they are requested. To produce the congressionally mandated report on hazing incidents reported in fiscal year 2013, the Air Force performed a keyword search of its legal database for variants of the word “hazing.” However, given that the database is used and maintained by legal personnel, query results only captured cases that came to the attention of a judge advocate. Further, while the keyword search of its database identified some incidents, the Air Force does not require that the term “hazing” or any of its variants be included in the case narrative, even if the case involved hazing. An official of the Air Force Legal Operations Agency told us that judge advocates focus on the articles of the UCMJ, and depending on the circumstances, they may or may not consider the context of hazing to be relevant information to record in the file. Given that “hazing” is not specifically delineated as an offense in the UCMJ, documented incidents of hazing in the Air Force fall under various UCMJ articles, such as Article 92 on Failure to Obey an Order or Regulation and Article 128 on Assault, and the case records may not identify the incident as hazing. Consequently, Air Force officials stated that queries of the legal database would not necessarily capture all reported hazing cases across the Air Force.
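The effect of the counting rule on reported totals, described above for the Marine Corps data, can be made concrete with a brief sketch. The records and field names below are hypothetical rather than any service’s actual database structure; the point is only that the same two reports yield 2 cases under a per-incident rule and 17 under a per-offender-victim-pair rule, which is why a single, defined counting rule matters for comparability within and across the services.

```python
# Illustrative sketch only: hypothetical hazing report records, not any
# service's actual database schema. It shows how the counting rule alone
# changes the reported totals for the same underlying reports.

# Each dictionary is one reported incident; names are placeholders.
reported_incidents = [
    {"report_id": "A", "alleged_offenders": ["O1", "O2"],
     "alleged_victims": ["V1", "V2", "V3", "V4"]},
    {"report_id": "B", "alleged_offenders": ["O3", "O4", "O5"],
     "alleged_victims": ["V5", "V6", "V7"]},
]

# Rule 1: one case per reported incident, regardless of how many
# servicemembers were allegedly involved.
cases_per_incident = len(reported_incidents)

# Rule 2: one case per offender-victim pair within each reported incident.
cases_per_pair = sum(
    len(r["alleged_offenders"]) * len(r["alleged_victims"])
    for r in reported_incidents
)

print(cases_per_incident)  # 2 cases under the per-incident rule
print(cases_per_pair)      # (2 x 4) + (3 x 3) = 17 cases under the per-pair rule
```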
The Air Force’s data on hazing incidents are also therefore not necessarily complete or consistent with the other military services’ data. The Coast Guard also has not established a requirement to collect and track data on reported incidents of hazing. As with the Air Force, the Coast Guard’s current process of compiling data on hazing cases has complicated its ability to efficiently provide such data when they are requested, according to Coast Guard officials. For example, to produce the congressionally mandated report on hazing incidents reported in fiscal year 2013, the Coast Guard queried its database of criminal investigations as well as its database of courts-martial for variants of the term “hazing.” According to Coast Guard officials, the Coast Guard’s queries only captured cases that explicitly used a variant of the term “hazing” in the case narrative and that were investigated by the Coast Guard Investigative Service or had resulted in a court-martial. As such, the Coast Guard’s data did not capture, for example, any cases that may have been investigated by the chain of command and deemed unsubstantiated or resolved through administrative action or non-judicial punishment. The military services’ and the Coast Guard’s available information on hazing cases includes some information on the dispositions of those cases, which have been adjudicated in a variety of ways. Our review of the data showed that this information was not always available or updated, and the sources of the information were not always reliable. We therefore found that data on hazing case dispositions were not sufficiently reliable to report in aggregate. Dispositions ranged widely, from cases found unsubstantiated to courts-martial. For example, in one case, multiple servicemembers pled guilty at court-martial to hazing and assault consummated by battery after being accused of attempted penetrative sexual assault. In another hazing case involving taping to a chair, the offender was punished through non-judicial punishment with restriction, extra duty, and forfeiture of pay, and the victim was given a similar but lesser punishment for consenting to the hazing. In a third case, a complainant alleged hazing after being told to work late, but an investigation determined that the allegation was unsubstantiated. ODMEO officials acknowledged that it is difficult to gauge the scope and impact of hazing given the limited information that is currently available and the inconsistent nature of the services’ data collection efforts. DOD’s updated hazing policy includes requirements that are intended to promote greater consistency in the services’ collection of data on reported hazing incidents. Specifically, the revised policy includes a requirement for the services to collect data on the number of substantiated and unsubstantiated incidents of hazing and bullying, as well as the demographics of the complainant and alleged offender in each case, a description of the incident, and if applicable, the disposition of the case. ODMEO officials said they plan to provide a data collection template that will establish a standard list of data elements and additional details on the data to be collected and reported to ODMEO. DOD’s updated hazing policy will help to improve the consistency of hazing incident data collected by the services.
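As a rough illustration of what a standard list of defined data elements could look like once the data collection template is finalized, the following sketch defines a minimal incident record reflecting the elements named in the updated policy: substantiation status, demographics of the complainant and alleged offender, a description of the incident, and the disposition. The field names, categories, and structure are assumptions made for illustration, not DOD’s actual template.

```python
# A minimal sketch of a standardized hazing incident record; field names and
# categories are assumptions for illustration, not DOD's data collection
# template, which may define these elements differently.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ServicememberInfo:
    pay_grade: str        # e.g., "E-3"
    gender: str
    race_ethnicity: str

@dataclass
class HazingIncidentRecord:
    service: str                        # e.g., "Army", "Navy"
    date_reported: date
    substantiated: Optional[bool]       # None while the investigation is open
    complainants: List[ServicememberInfo] = field(default_factory=list)
    alleged_offenders: List[ServicememberInfo] = field(default_factory=list)
    description: str = ""
    disposition: Optional[str] = None   # e.g., "non-judicial punishment"

# One reported incident stays one record; the number of people involved is
# captured by the list lengths rather than by duplicating the incident.
example = HazingIncidentRecord(
    service="Navy",
    date_reported=date(2014, 6, 1),
    substantiated=True,
    complainants=[ServicememberInfo("E-3", "Male", "White, non-Hispanic")],
    alleged_offenders=[
        ServicememberInfo("E-5", "Male", "White, non-Hispanic"),
        ServicememberInfo("E-4", "Male", "Unknown"),
    ],
    description="Placeholder description of the reported conduct.",
    disposition="Non-judicial punishment",
)
```

Recording one incident as one record, with the people involved captured in lists, would also avoid the duplicate counting discussed earlier.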
However, it does not appear that the policy will serve to make the services’ disparate data collection efforts fully consistent, because the policy does not clearly define the scope of the information to be collected or the specific data elements. For example, the policy requires the military services to track hazing incidents, but does not identify how to count an incident relative to the number of alleged offenders and alleged victims, and the services have counted incidents differently for tracking purposes. ODMEO officials said they are continuing to revise the data collection template, which could provide further specificity to the data collection. As a result of inconsistent and incomplete data, DOD and the Coast Guard cannot provide an accurate picture of reported hazing incidents either for the purposes of internal management or for external reporting. Further, without a common basis to guide the collection of data, including a standard list of data elements, decision makers in DOD, the Coast Guard, and Congress will not be able to use these data to determine the number of reported hazing incidents in DOD or the Coast Guard, or to draw conclusions from the data. To date, DOD and the Coast Guard do not know the extent of hazing in their organizations because they have not conducted an evaluation of the prevalence of hazing. In contrast to the limited data on reports of hazing incidents, information on the prevalence of hazing would help DOD and the Coast Guard to understand the extent of hazing beyond those incidents that are reported. The prevalence of hazing could be estimated based on survey responses, as DOD does in the case of sexual assault. We believe such an evaluation could form the baseline against which to measure the effectiveness of their efforts to address hazing and would enhance visibility over the prevalence of such misconduct. Standards for Internal Control in the Federal Government states that it is important to establish a baseline that can be used as criteria against which to assess progress and to help identify any issues or deficiencies that may exist. ODMEO officials said that their efforts to address hazing are in the early stages and that following the issuance of the updated hazing policy, DOD may begin to establish a baseline against which to evaluate appropriate responses to hazing. However, to date DOD and the military services have not evaluated the prevalence of hazing across their organizations in order to determine the appropriate responses. The Coast Guard also has not evaluated the prevalence of hazing within its service. Officials in each of the military services and the Coast Guard told us that reports of hazing incidents are currently the primary indicator used to gauge the incidence of hazing. However, as previously noted, the data that are currently collected on hazing incidents are neither complete nor consistent, and data obtained through other sources, such as surveys, suggest that hazing may be more widespread in the military services and the Coast Guard than the current numbers of reports indicate. In particular, the RAND Corporation conducted a survey on sexual assault and sexual harassment in the military for DOD in 2014, the results of which indicate that the actual number of hazing incidents may exceed the number of reported incidents tracked by the services. Based on our analysis of RAND’s survey results, we estimate that in 2014, about 11,000 male servicemembers in the Army, the Navy, the Marine Corps, and the Air Force were sexually assaulted.
Of these, RAND estimated that between 24 percent and 46 percent would describe their sexual assaults as hazing (“things done to humiliate or ‘toughen up’ people prior to accepting them in a group”). Officials from DOD and the Coast Guard told us that hazing and sexual assault can occur as part of the same incident, but it will be documented and addressed based on the more egregious offense—in this case, sexual assault. We recognize that the classification of an offense is key in that it directly corresponds to the punitive actions that can be taken, but note that this further reinforces that there may be a broader incidence of hazing than the data currently collected by the military services and the Coast Guard indicate. In addition to the results of RAND’s survey, we also obtained and analyzed the results of organizational climate surveys for each of the military services and the Coast Guard for calendar year 2014 and determined that some servicemembers perceive that hazing occurs in their units despite the policies in place prohibiting hazing. Commanders throughout the military services and the Coast Guard are required—at designated intervals—to administer organizational climate surveys to members of their respective units. These surveys are designed to evaluate various aspects of their unit’s climate, including, among other things, sexual assault and sexual harassment, and were recently revised to include questions that solicit servicemember perspectives on the incidence of hazing. Specifically, in 2014, the Defense Equal Opportunity Management Institute—the organization responsible for administering the surveys—began including questions related to hazing and demeaning behaviors in the organizational climate surveys it administers for commands throughout the military services and the Coast Guard. Each question asked whether respondents strongly disagreed, disagreed, agreed, or strongly agreed with a statement intended to measure either hazing or demeaning behaviors. Table 2 shows the statements in the organizational climate surveys about hazing and demeaning behaviors. These surveys do not measure the prevalence of hazing. Instead, they measure the extent to which servicemembers perceive that hazing (and demeaning behaviors) occurs in their units. In addition, the organizational climate surveys were designed to be a tool for commanders to evaluate their individual units as opposed to aggregate-level analyses; thus, the data have limitations when used for aggregate-level analysis. The results of these surveys are also not generalizable, in part because the Army requires that command climate surveys be conducted more frequently than is required by the other services. As such, Army responses are overrepresented relative to the other military services when results are aggregated. Finally, survey data may reflect other errors, such as differences in how questions are interpreted. Since demographic information is gathered through self-selection, breaking down the results into specific subgroups may introduce additional error. Despite these limitations, analysis of these data yields insight into perceptions of hazing within and across the services. Table 3 shows the results of our analysis of data from these organizational climate surveys administered by the Defense Equal Opportunity Management Institute for servicemembers in active-duty units in the Army, Navy, Marine Corps, Air Force, and Coast Guard for 2014 on hazing and demeaning behaviors. 
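The kind of roll-up summarized in table 3 can be sketched briefly. The example below uses hypothetical response records and field names rather than the Defense Equal Opportunity Management Institute’s actual data format, and it assumes that agreement covers both the “agree” and “strongly agree” answers; it computes the share of respondents who agreed with all three hazing statements, grouped by pay grade band.

```python
# Illustrative aggregation only: made-up responses; field names, values, and
# the treatment of "strongly agree" as agreement are assumptions, not the
# actual climate survey data format or GAO's analysis code.
from collections import defaultdict

AGREE = {"Agree", "Strongly agree"}

# Each record holds a respondent's pay grade band and answers to the three
# hazing statements on the four-point scale.
responses = [
    {"grade_band": "E1-E3", "hazing_items": ["Agree", "Agree", "Strongly agree"]},
    {"grade_band": "E1-E3", "hazing_items": ["Disagree", "Agree", "Agree"]},
    {"grade_band": "E4-E6", "hazing_items": ["Agree", "Agree", "Agree"]},
    {"grade_band": "O4-O6", "hazing_items": ["Disagree", "Disagree", "Disagree"]},
]

totals = defaultdict(int)      # respondents per grade band
agreed_all = defaultdict(int)  # respondents agreeing with all three statements

for r in responses:
    totals[r["grade_band"]] += 1
    if all(answer in AGREE for answer in r["hazing_items"]):
        agreed_all[r["grade_band"]] += 1

for band in sorted(totals):
    share = 100 * agreed_all[band] / totals[band]
    print(f"{band}: {share:.0f}% agreed with all three hazing statements")
```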
As shown in table 3, about 12 percent of responses by enlisted servicemembers in active-duty units at the E1-E3 pay grades agreed with all three statements about hazing (noted in table 3, above) and about 18 percent of responses at these pay grades agreed with all three statements about demeaning behaviors. These percentages dropped to about 8 percent and 14 percent, respectively, at the E4-E6 levels, and continued to drop, reaching about 1 percent for hazing and 2 percent for demeaning behaviors for officers at the O4-O6 level. These responses indicate that perceptions of the extent of hazing and demeaning behaviors in the military services and in the Coast Guard may be different between those at the lower and middle enlisted ranks and those with responsibility for developing or enforcing policy. The data also show that perceptions of hazing may differ by service. For hazing, about 9 percent of Army responses agreed with all three statements; about 5 percent of Navy responses agreed with all three statements; about 11 percent of Marine Corps responses agreed with all three statements; and about 2 percent of responses in the Air Force and Coast Guard agreed with all three statements. Likewise, for demeaning behaviors, about 14 percent of Army responses agreed with all three statements; about 9 percent of Navy responses agreed with all three statements; about 15 percent of Marine Corps responses agreed with all three statements; and responses from the Air Force and Coast Guard came in at about 5 percent in agreement with all three statements for each service. The results of such analyses indicate that sufficient numbers of servicemembers perceive hazing to be occurring to warrant evaluation of the prevalence of hazing. In addition, such survey data can provide valuable insights that can be used by military leaders to help form a baseline of information. For example, the services could use the results to evaluate service-wide as well as command-specific perceptions of hazing, compare how perceptions change over time, make comparisons with incident rates, and perform other analyses to identify trends and areas needing improvement. Standards for Internal Control in the Federal Government states that management analyzes identified risks to estimate their significance, which provides a basis for responding to the risks. Management estimates the significance of a risk by considering the magnitude of impact, likelihood of occurrence, and the nature of the risk. In addition, according to leading practices for program evaluations, evaluations can play a key role in planning and program management by providing feedback on both program design and execution. However, DOD and the military services have not evaluated the extent of hazing in their organizations or the magnitude of its impact or likelihood of occurrence, in order to effectively target their responses to hazing. Likewise, the Coast Guard has not evaluated the extent of hazing in the Coast Guard. Without doing so, the services may be limited in their ability to further develop and target their efforts in such a way as to have the maximum positive effect for the most efficient use of resources. Incidents of hazing in DOD and the Coast Guard can have effects that extend beyond their victims and perpetrators, undermining unit cohesion and potentially reducing operational effectiveness as a consequence. 
At the service-wide level, high-profile hazing incidents can shape public perceptions, potentially making recruitment and retention more challenging. Both DOD and the Coast Guard have issued policies that prohibit hazing. However, DOD issued its earlier hazing policy in 1997, and despite several hazing incidents coming to public attention in recent years, DOD and the Coast Guard do not regularly monitor implementation of their hazing policies and do not know the extent of hazing in their organizations. Without effective monitoring by DOD, the Coast Guard, and each of the services, the offices with responsibility for addressing hazing will not know whether hazing prevention policies and training are being consistently implemented. In addition, servicemembers may not sufficiently understand how to recognize and respond to hazing incidents. As our discussions with groups of servicemembers and officials suggest, confusion may persist. Unless the services provide additional clarification to servicemembers, perhaps through revised and tailored training or additional communication, servicemembers may be limited in their ability to carry out their responsibilities, such as recognizing hazing and enforcing discipline. At the same time, if they do not fully understand the hazing policies, hazing victims may not be able to recognize hazing when it occurs, including hazing by those in positions of authority. DOD’s and the Coast Guard’s efforts to reduce hazing would also benefit from a better understanding of the extent of hazing incidents. Available data do not provide a complete picture of the extent of reported hazing incidents. Without consistent and complete tracking of hazing incidents within and across the services, decision makers will not be able to identify areas of concern and target resources appropriately. Achieving such visibility over hazing incidents depends on better data, which will not be available without guidance specifying that the services should track all reported hazing incidents, with standardized and defined data elements that facilitate accurate tracking. Concurrent with better data, DOD and the Coast Guard need to evaluate the prevalence of hazing in their organizations, since the data on reported incidents alone will not provide a picture of the full extent of hazing in the armed forces. Without such an evaluation, decision makers will not be positioned to appropriately tailor their response or to judge progress in their efforts. We recommend that the Secretary of Defense take the following seven actions: To enhance and to promote more consistent oversight of efforts within the department to address the incidence of hazing, direct the Under Secretary of Defense for Personnel and Readiness to: regularly monitor the implementation of DOD’s hazing policy by the military services; and require that the Secretaries of the military departments regularly monitor implementation of the hazing policies within each military service. To improve the ability of servicemembers to implement DOD and service hazing policies, direct the Under Secretary of Defense for Personnel and Readiness to establish a requirement for the Secretaries of the military departments to provide additional clarification to servicemembers to better inform them as to how to determine what is or is not hazing. This could take the form of revised training or additional communications to provide further guidance on hazing policies.
To promote greater consistency in and visibility over the military services’ collection of data on reported hazing incidents and the methods used to track them, direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the Secretaries of the military departments, to issue DOD-level guidance on the prevention of hazing that specifies data collection and tracking requirements, including the scope of data to be collected and maintained by the military services on reported incidents of hazing; a standard list of data elements that each service should collect on reported hazing incidents; and definitions of the data elements to be collected to help ensure that incidents are tracked consistently within and across the services. To promote greater visibility over the extent of hazing in DOD to better inform DOD and military service actions to address hazing, direct the Under Secretary of Defense for Personnel and Readiness, in collaboration with the Secretaries of the Military Departments, to evaluate the prevalence of hazing in the military services. We recommend that the Commandant of the Coast Guard take the following five actions: To enhance and to promote more consistent oversight of the Coast Guard’s efforts to address the incidence of hazing, regularly monitor hazing policy implementation. To promote greater consistency in and visibility over the Coast Guard’s collection of data on reported hazing incidents and the methods used to track them, issue guidance on the prevention of hazing that specifies data collection and tracking requirements, including the scope of the data to be collected and maintained on reported incidents of hazing; a standard list of data elements to be collected on reported hazing incidents; and definitions of the data elements to be collected to help ensure that incidents are tracked consistently within the Coast Guard. To promote greater visibility over the extent of hazing in the Coast Guard to better inform actions to address hazing, evaluate the prevalence of hazing in the Coast Guard. We provided a draft of this report to DOD and DHS for review and comment. Written comments from DOD and DHS are reprinted in their entirety in appendixes IV and V. DOD and DHS concurred with each of our recommendations and also provided technical comments, which we incorporated in the report as appropriate. In its written comments, DOD concurred with the seven recommendations we directed to it, and made additional comments about ways in which its newly issued December 2015 hazing policy memorandum takes actions toward our recommendations. Among other things, the new hazing policy assigns authority to the Under Secretary for Personnel and Readiness to amend or supplement DOD hazing and bullying policy, requires training on hazing and bullying for servicemembers, and requires tracking of hazing incidents, but in itself does not fully address our recommendations. Regarding our recommendation for the Under Secretary of Defense for Personnel and Readiness to regularly monitor the implementation of DOD’s hazing policy by the military services, DOD stated that its December 23, 2015, updated hazing policy memorandum provides comprehensive definitions of hazing and bullying, enterprise-wide guidance on prevention training and education, as well as reporting and tracking requirements. We agree that these are important steps to address hazing in the armed services.
However, the policy does not specifically require the Under Secretary of Defense for Personnel and Readiness to regularly monitor the implementation of DOD’s hazing policy, and we continue to believe that the Under Secretary of Defense for Personnel and Readiness should monitor the implementation of DOD’s hazing policy to ensure its requirements are implemented throughout the military services. With respect to our recommendation to establish a requirement for the secretaries of the military departments to provide additional clarification to servicemembers to better inform them as to how to determine what is or is not hazing, DOD stated that its December 2015 updated hazing policy memorandum directs the military departments to develop training that includes descriptions of the military departments' hazing and bullying policies and differentiates between what is or is not hazing and bullying. We are encouraged by DOD’s efforts to integrate the recommendation into its policy requirements and believe the services will benefit by incorporating these requirements into their hazing prevention activities. Regarding our recommendations to issue DOD-level guidance that specifies data collection and tracking requirements for hazing incidents, including the scope of data to be collected and maintained by the military services on reported incidents of hazing and a standard list of data elements that each service should collect on reported hazing incidents, DOD stated that its December 2015 updated hazing policy memorandum provides guidance and requirements for tracking and reporting incidents of hazing and bullying. We believe that the incident data tracking requirements in this policy are an important step for DOD to improve its data collection on hazing incidents. As noted in our report, the updated policy memorandum will not fully address disparities in service-specific data collection efforts until DOD and the services clearly define the scope of information or define the data to be collected. For example, the hazing policy requires the services to track hazing incidents, but does not identify how to count an incident relative to the number of alleged offenders and alleged victims, and the services have counted incidents differently for tracking purposes. As we note in the report, DOD plans to provide a data collection template to the services, and this could provide a vehicle for fully addressing these recommendations. In its written comments, DHS concurred with the five recommendations we directed to the Coast Guard, and made additional comments about steps the Coast Guard will take to address our recommendations. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the Army, the Navy, and the Air Force, and the Commandants of the Marine Corps and the Coast Guard. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
To determine the extent to which the Department of Defense (DOD) and the Coast Guard have developed and implemented policies to address hazing incidents, we reviewed DOD’s 1997 hazing memorandum, its December 2015 updated hazing and bullying policy memorandum, and the hazing policies of each military service and the Coast Guard. We compared the policies, definitions of hazing, and oversight and training requirements to determine similarities and differences. To better understand the hazing policies and guidance from each service, including the Coast Guard, we interviewed knowledgeable officials from the Office of Diversity Management and Equal Opportunity in the Office of the Under Secretary of Defense for Personnel and Readiness, the Army Equal Opportunity Office, the Navy Office of Behavioral Standards, the Marine Corps Office of Manpower and Reserve Affairs, the Air Force Personnel Directorate, and the Coast Guard Office of Military Personnel, Policy and Standards Division, as well as officials in other offices listed in table 4, below. In addition, we reviewed the services’ hazing awareness training requirements included in their respective policies and analyzed the services’ training materials to determine how servicemembers are trained on hazing awareness, prevention, and response. We also interviewed or requested information from officials responsible for developing training from the Army Training and Doctrine Command, Naval Education and Training Command, Marine Corps Training and Education Command, Air Force Personnel Directorate, and the Coast Guard Fleet Forces Command and Leadership Development Center. To better understand the reporting and response mechanisms employed by DOD and the Coast Guard, as well as the approaches in each service for responding to allegations of hazing, including applications of the Uniform Code of Military Justice (UCMJ), court-martial, non-judicial punishment, and administrative action, we reviewed relevant policies and interviewed cognizant officials from the Army Office of the Provost Marshal General and Criminal Investigation Command, Naval Criminal Investigative Service, Marine Corps Judge Advocate Division and Inspector General, Air Force Office of Special Investigations, Security Forces Directorate, Legal Operations Agency, and Inspector General, and the Coast Guard Office of the Judge Advocate General and the Coast Guard Investigative Service. To better understand how policy and training are implemented at installations, and to obtain servicemember perspectives on hazing and hazing awareness training, we conducted site visits to Naval Base Coronado, California, and Marine Corps Base Camp Pendleton, California. We selected these sites based upon reported hazing data, media reports of hazing, data on male victims of sexual assault, and geographic proximity to each other. During these site visits, we conducted nine focus groups with enlisted servicemembers in grades E-3 through E-5 that included a self-administered pen and paper survey of all participants. We selected these grades because available data on reported hazing incidents indicated that these grades were most likely to be victims or perpetrators of a hazing incident. In addition, we met with groups of noncommissioned officers (grades E-6 through E-9), commanding officers, inspectors general, equal opportunity advisors, staff judge advocates, and chaplains to obtain perspectives of servicemembers and other officials who may be involved in addressing hazing.
For further information about the focus group and survey methodology, see appendix III. We compared the extent to which DOD and each armed service has oversight mechanisms in place to monitor the implementation of hazing policies to the Standards for Internal Control in the Federal Government criteria on control activities, which include the policies, procedures, techniques, and mechanisms that enforce management’s directives to achieve an entity’s objectives. We also compared the extent to which guidance to servicemembers provides enough clarity to determine when hazing has occurred to the Standards for Internal Control in the Federal Government criteria that state that management establishes standards of conduct that guide the directives, attitudes, and behaviors of the organization in achieving the entity’s objectives, as well as Standards for Internal Control in the Federal Government criteria that state that management establishes expectations of competence for key roles, and other roles at management’s discretion and that management should internally communicate the necessary quality information to achieve the entity’s objectives. To determine the extent to which DOD and the Coast Guard have visibility over hazing incidents involving servicemembers, we reviewed the DOD and Coast Guard hazing policies noted above to identify any tracking requirements. To determine the number of reported hazing incidents and the nature of these incidents, we reviewed available data on reported hazing allegations from each service covering a two-year time period. The Army, Navy, Air Force, and Coast Guard data covered the period from December 2012 through December 2014. The Marine Corps database for tracking hazing incidents began tracking in May 2013, so we analyzed Marine Corps data from May 2013 through December 2014. We reviewed the methods each service used to track hazing incident data by interviewing officials from the Army Equal Opportunity Office and the Army Criminal Investigation Command; the Navy Office of Behavioral Standards; the Marine Corps Office of Manpower and Reserve Affairs; the Air Force Personnel Directorate and Air Force Legal Operations Agency; and the Coast Guard Office of Military Personnel, Policy and Standards Division and the Coast Guard Investigative Service. We found that the Army and Navy data were sufficiently reliable to report the number of hazing cases, offenders, and victims, as well as demographic and rank data on offenders and victims. However, due to limitations in the methods of collection, the data reported do not necessarily represent the full universe of reported hazing incidents in the Army and Navy. We found that the Marine Corps data was not sufficiently reliable to report accurate information on the total number of cases, offenders, and victims, or demographic and rank data. The Marine Corps did not record the number of hazing cases in an internally consistent manner, resulting in duplicate records for cases, offenders, and victims, and no consistent means for correcting for the duplication. We found that the Air Force data were sufficiently reliable to report the number of cases and offenders, but not to report demographic information for the offenders or to report any information on the victims because it did not consistently track and report demographic and rank information. 
We also found that the Coast Guard data were sufficiently reliable to report the number of cases, offenders, and victims, but not to report demographic and rank information because it did not consistently track and report demographic and rank information. In addition, due to limitations of the collection methods, the data reported do not necessarily represent the full universe of reported hazing incidents in the Air Force and Coast Guard. We found that hazing data in all services were not sufficiently reliable to report information on the disposition of hazing cases because they did not consistently track and report this information, and because the source data for these dispositions was not reliable. We also compared the services’ methods of data collection with Standards for Internal Control in the Federal Government criteria stating that information should be recorded and communicated to management and others who need it in a form and within a time frame that allows them to carry out their internal control and other responsibilities. We also reviewed the 2014 RAND Corporation military workplace study commissioned by the Office of the Secretary of Defense and analyzed data reported on that study on sexual assault and hazing. We also interviewed officials of the Defense Equal Opportunity Management Institute about command climate surveys and analyzed data obtained from responses to command climate survey questions relating to hazing and demeaning behaviors. We obtained survey data based on three hazing questions and three demeaning behavior questions that were asked of all survey respondents during calendar year 2014; in addition, we obtained survey data for demographic and administrative variables that we used to analyze the data across all of the command climate surveys we obtained. The data we analyzed included responses by active-duty servicemembers in all five armed services—Army, Navy, Marine Corps, Air Force, and Coast Guard—during calendar year 2014. We summarized the results for active-duty servicemembers by rank, gender, race/ethnicity, and by service across all of the command climate survey responses that were collected for the time period. Because of the nature of the process used to administer and to collect the results of the command climate surveys, the analysis cannot be generalized to the entire population of active servicemembers across the armed forces or for each service. For example, it is not possible to discern whether every unit administered the command climate survey, nor whether any particular unit administered the survey multiple times within the time period from which we obtained data. Therefore, the analyses we present using the command climate survey data are not intended to reflect precise information about the prevalence of perceptions related to hazing, but rather to demonstrate how the survey data might be used if the methods allowed the ability to generalize to all servicemembers. We compared the extent to which DOD and the Coast Guard have evaluated the prevalence of hazing with Standards for Internal Control in the Federal Government criteria on evaluating risks, and with leading practices for program evaluations. In addition to these organizations, we also contacted the RAND Corporation. We conducted this performance audit from April 2015 to February 2016 in accordance with generally accepted government auditing standards. 
These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Not all of the military services track data on reported hazing, and the Coast Guard does not track such data. Further, the data that are collected and the methods used to track them vary by service because neither the Department of Defense (DOD) nor the Coast Guard has articulated a consistent methodology. As a result of inconsistent and incomplete data, any data tracked and reported by the armed services currently cannot be used to provide a complete and accurate picture of hazing in the armed services, and the data from one service cannot be compared to those of another service. To the extent possible based on the availability of data, we obtained and reviewed data on reported hazing cases from each military service covering the period December 2012 to December 2014. For the Air Force and Coast Guard, neither of which specifically tracked hazing cases, we obtained information derived from legal and criminal investigative databases, which were the methods these services used to report hazing information to congressional committees in 2013. The following information is derived from our analyses of these data. The Army specifies the use of its Equal Opportunity Reporting System database to track hazing cases. However, the Army only began using its equal opportunity database to track hazing cases in October 2015. Previously, hazing cases were tracked by Army Criminal Investigation Command. Criminal Investigation Command tracked cases using its database of cases investigated by Criminal Investigation Command and by military police, so these data necessarily exclude cases that were not investigated by Criminal Investigation Command or military police. Figure 2 shows our analysis of the Army’s hazing cases from December 2012 through December 2014. NOTE: Data are from December 2012 through December 2014. These data only include allegations investigated by military police or criminal investigators. We excluded from the above data one case with one alleged offender and an unknown number of alleged victims due to the absence of a precise number of victims. Enlisted grades begin at E1 (lowest grade), and officer grades begin at O1. As shown in Figure 2, during this time period the Army identified a total of 17 alleged cases involving 93 alleged offenders and 47 alleged victims. The majority of alleged offenders and alleged victims were either in grades E4-E6 or E1-E3, and more alleged offenders were E4-E6 than E1-E3, while more alleged victims were E1-E3 than E4-E6. A majority of alleged offenders and alleged victims were male. Most alleged victims and alleged offenders were white, non-Hispanic, but the race and ethnicity information for some alleged offenders and alleged victims was unknown. The Navy requires commanders to report substantiated hazing cases to the Office of Behavioral Standards, which then tracks the cases in a spreadsheet. Although Navy policy only requires substantiated cases to be reported, officials in the Navy’s Office of Behavioral Standards told us they encourage commanders to report both unsubstantiated and substantiated cases, and the data include both, to the extent reported. Figure 3 shows our analysis of these data from December 2012 through December 2014.
NOTE: Data are from December 2012 through December 2014. These data include some unsubstantiated cases; however, Navy policy only requires substantiated cases to be reported, so the data may not include all unsubstantiated cases. Ten cases are excluded from the above data due to the inclusion of an unknown number of alleged offenders or alleged victims. These cases included 5 known alleged offenders and 7 known alleged victims. From FY13 to FY14, the Navy switched its method of recording race and ethnicity. In FY13, the Navy included “Hispanic” as one category among other racial/ethnic categories; beginning in FY14, it began tracking race and ethnicity separately. Beginning in FY14 the Navy data record some cases where it was unknown whether the alleged victim or offender was Hispanic—82 alleged offenders and 65 alleged victims of unknown ethnicity in total. Therefore, all racial/ethnic categories not specifically marked as Hispanic could include Hispanics in the data above. Enlisted grades begin at E1 (lowest grade), and officer grades begin at O1. As shown in Figure 3, during this time period the Navy identified 63 alleged hazing cases, involving 127 alleged offenders and 97 alleged victims. The majority of alleged offenders were in grades E4-E6, while the majority of alleged victims were either E1-E3 or E4-E6. Alleged offenders were overwhelmingly male, while alleged victims included a significant minority of women. In terms of race and ethnicity, the greatest single group of both alleged offenders and alleged victims was white, non- Hispanic. The Marine Corps uses its Discrimination and Sexual Harassment database to track alleged hazing incidents, both substantiated and unsubstantiated. We obtained and analyzed data from May 2013, when the Marine Corps began using this tracking method, through December 2014. We found internal inconsistencies in the Marine Corps’ tracking data, and for that reason found that the data were not reliable enough to report detailed information about these alleged hazing cases. Specifically, from May 2013 through December 2014, the Marine Corps recorded 303 alleged hazing cases for which there were 390 alleged victims and 437 alleged offenders. However, our analyses of these data identified inconsistencies in the methods used to aggregate categories of information collected on reported incidents of hazing. For example, we found that in some instances, a reported hazing case involving two alleged offenders and one alleged victim was counted as a single case, whereas other instances that involved the same number of individuals were classified as two cases—one for each alleged offender. Similarly, we identified single reports of hazing that involved multiple alleged victims and were classified as one case that, at other times, were documented as separate cases relative to the number of alleged victims involved. We determined that the Marine Corps’ data, for the time period requested, were overstated by at least 100 reported hazing cases, at least 50 alleged offenders, and at least 90 alleged victims. The Air Force has not established a system specifically to track hazing cases. 
In its July 2013 report to congressional committees, Hazing in the Armed Forces, the Air Force stated that hazing incidents in the service are best tracked using its legal database by querying the text of the cases for variants of the word “hazing.” Accordingly, we obtained information on hazing cases from December 2012 through December 2014 from a search performed in this database for variants of the word “hazing,” the results of which were provided to us by the Air Force Legal Operations Agency. This data showed 4 cases with 17 alleged offenders that were reported from December 2012 through December 2014. However, these data do not present a complete picture of hazing cases in the Air Force, as they do not necessarily capture any cases that did not come to the attention of a staff judge advocate. The case files did not generally capture race or ethnicity data for alleged offenders and alleged victims; did not systematically capture gender of alleged offenders and alleged victims; generally did not capture the rank of alleged victims; and did not systematically capture the number of alleged victims. Therefore, we are not reporting rank or demographic data. The Coast Guard has not established a system specifically to track hazing cases. In its 2013 report to congressional committees, Hazing in the Coast Guard, the Coast Guard reported hazing incidents derived from legal and criminal investigative sources. Accordingly, to obtain data on Coast Guard hazing incidents, we used the Coast Guard’s Quarterly Good Order and Discipline Reports, which contain a summary of disciplinary and administrative actions taken against Coast Guard military members or civilian employees, as well as Coast Guard Investigative Service case files. For the Good Order and Discipline reports covering disciplinary and administrative actions taken between October 2012 and March 2015, only one case explicitly mentioned hazing. However, these reports only include brief descriptions for certain types of cases, such as courts-martial, and do not include any details of the alleged offense and punishment for cases resulting in non-judicial punishment. In response to our request to identify Coast Guard Investigative Service cases using variants of the word “hazing” from December 2012 through December 2014, the Coast Guard identified six cases involving 14 known alleged victims and 20 known alleged offenders (the number of both offenders and victims in one case were unknown). These case files did not consistently track and report the race, ethnicity, rank, and gender of the offenders and victims; therefore we are not reporting rank or demographic data. Due to the limitations of these methods of capturing reported hazing cases, these data do not necessarily present a complete picture of the number of reported hazing incidents in the Coast Guard. In addition, Coast Guard officials told us that conducting this search for case file information was time- and resource-consuming, and even with this allocation of time and resources the results of the judicial and investigative information sources may not yield complete information on reported hazing cases in the Coast Guard. To obtain servicemembers’ perspectives related to each of our objectives, we conducted nine focus group meetings with active-duty servicemembers in the grades E3-E5. Four of these meetings were held at Marine Corps Base Camp Pendleton, California, and five meetings were held at U.S. Naval Base Coronado, California. 
We selected these sites based upon reported hazing data, media reports of hazing, data on male victims of sexual assault, and geographic proximity to each other. To select specific servicemembers to participate in our focus groups, we requested lists of servicemembers who were stationed at each location and likely available to participate at the time of our visit. These lists included information about each servicemember's rank, gender, and occupation. The following scenarios were presented to participants during each focus group:

(Navy) Petty Officer Taylor is on his first deployment to the South Pacific. His fellow shipmates have told him about an upcoming ceremony to celebrate those crossing the equator for the first time. On the day of the equator crossing, all shipmates ("shellbacks" and "wogs") dress up in costume. The wogs, or those who are newly crossing the equator, rotate through different stations, including tug-of-war and an obstacle course. One of the shellbacks, or those who have already crossed the line, is dressed up as King Neptune and asks the wogs to kiss his hands and feet. In addition, all of the wogs are required to take a shot of tequila. After completing all the stations and crossing the equator, Petty Officer Taylor is officially a shellback.

(Marine Corps) Lance Corporal Jones recently received a promotion to Corporal. To congratulate him on the promotion, members of his unit take him to the barracks and begin hitting him at the spot of his new rank.

(Navy) After dinner, Petty Officer Sanchez talks with fellow sailors about playing some pranks on other members of the ship. They see Seaman Williams walking down the hall and bring him into a storage closet. There, they tape his arms and legs to a chair and leave him alone in the closet to see if he can escape.

(Marine Corps) After dinner, Sergeant Sanchez talks with fellow marines about playing some pranks on other members of the platoon. They see Corporal Williams walking down the hall and bring him into a storage closet. There, they tape his arms and legs to a chair and leave him alone in the closet to see if he can escape.

These scenarios, providing examples of hazing, along with the following set of questions, were the basis for the discussion with participants and the context for responding to the survey questions that were administered after the discussion.

Would you consider this example hazing?
Do activities like these two examples sound like they could ever happen in the Marine Corps/Navy?
What about these activities is good? What about these activities might be harmful?
Do you think activities like these are important for a Marine/Sailor to become a part of the group or the unit?
Now that we've talked about hazing, what kind of training about hazing have you received in the Marine Corps/Navy?
Are there any other topics about hazing that we haven't covered?

To obtain additional perspectives on hazing, particularly regarding sensitive information about personal experience with hazing, servicemembers participating in each focus group completed a survey following the discussion. The survey consisted of a self-administered pen-and-paper questionnaire that was provided to each focus group participant in a blank manila envelope without any identifying information. The moderator provided the following verbal instructions: I'd like you to take a few minutes to complete this survey before we finish. Please do not put your name or any identifying information on it. Take it out of the envelope, take your time and complete the questions, and please place it back in the envelope.
When you are done, you can leave it with me/put it on the chair and then leave.

Because we did not select participants using a statistically representative sampling method, the information provided from the surveys is nongeneralizable and therefore cannot be projected across the Department of Defense, a service, or any single installation we visited. The questions and instructions are shown below with the results for the closed-ended questions.

Survey of Navy and Marine Corps Focus Group Participants

Instructions: Please complete the entire survey below. Do not include your name or other identifying information. Once finished, please place the completed survey back in the envelope and return the envelope.

1. Have you experienced hazing in the Navy/Marine Corps?
   Yes: 14, 4
   No: 36, 9
   I'm not sure: 5, 2
   Total: 55, 15

2. (If "Yes" or "I'm not sure" for 1) What happened? (Please briefly describe the event(s))

3. In the group discussion we talked about two examples that some would consider hazing. If these examples happened in your unit, would it be OK with the unit leadership? (check one for each row)
   Crossing the Line (Navy)/Pinning (Marine Corps)
   I don't know

4. Some activities that are traditions in the Marine Corps/Navy are now considered hazing. Is it important to continue any of these activities? Please explain why or why not.

5. Have you received hazing prevention training in the Navy/Marine Corps?

6. Is there anything else you want us to know about hazing in the Navy/Marine Corps?

In addition to the contact named above, key contributors to this report were Kimberly Mayo, Assistant Director; Tracy Barnes; Cynthia Grant; Simon Hirschfeld; Emily Hutz; Ronald La Due Lake; Alexander Ray; Christine San; Monica Savoy; Amie Lesser; Spencer Tacktill; and Erik Wilkins-McKee.
Initiations and rites of passage can instill esprit de corps and loyalty and are included in many traditions throughout DOD and the Coast Guard. However, at times these, and more ad hoc activities, have included cruel or abusive behavior that can undermine unit cohesion and operational effectiveness. Congress included a provision in statute for GAO to report on DOD, including each of the military services, and Coast Guard policies to prevent, and efforts to track, incidents of hazing. This report addresses the extent to which DOD and the Coast Guard, which falls under the Department of Homeland Security (DHS), have (1) developed and implemented policies to address incidents of hazing, and (2) visibility over hazing incidents involving servicemembers. GAO reviewed hazing policies; assessed data on hazing incidents and requirements for and methods used to track them; assessed the results of organizational climate surveys that included questions on hazing; conducted focus groups with servicemembers during site visits to two installations selected based on available hazing and sexual assault data, among other factors; and interviewed cognizant officials. The Department of Defense (DOD), including each of the military services, and the Coast Guard have issued policies to address hazing, but generally do not know the extent to which their policies have been implemented. The military services' and Coast Guard's policies define hazing similarly to DOD and include servicemember training requirements. The military service and Coast Guard policies also contain guidance, such as responsibilities for policy implementation and direction on avoiding hazing in service customs and traditions, beyond what is included in DOD's policy. However, DOD and the Coast Guard generally do not know the extent to which their policies have been implemented because most of the services and the Coast Guard have not conducted oversight through regular monitoring of policy implementation. The Marine Corps conducts inspections of command hazing policy on issues such as providing servicemembers with information on the hazing policy and complying with hazing incident reporting requirements. While these inspections provide Marine Corps headquarters officials with some information they can use to conduct oversight of hazing policy implementation, they do not necessarily cover all aspects of hazing policy implementation. Without routinely monitoring policy implementation, DOD, the Coast Guard, and the military services may not have the accountability needed to help ensure efforts to address hazing are implemented consistently. DOD and the Coast Guard have limited visibility over hazing incidents involving servicemembers. Specifically, the Army, the Navy, and the Marine Corps track data on reported incidents of hazing, but the data are not complete and consistent due to varying tracking methods that do not always include all reported incidents. For example, until October 2015, the Army only tracked cases investigated by criminal investigators or military police, while the Navy required reports on substantiated hazing cases and the Marine Corps required reports on both substantiated and unsubstantiated cases. The Air Force and Coast Guard do not require the collection of hazing incident data, and instead have taken an ad hoc approach to compiling relevant information to respond to requests for such data. 
In the absence of guidance on hazing data collection, DOD and the Coast Guard do not have an accurate picture of reported hazing incidents across the services. In addition, DOD and the Coast Guard have not evaluated the prevalence of hazing. An evaluation of prevalence would provide information on the extent of hazing beyond the limited data on reported incidents, and could be estimated based on survey responses, as DOD does in the case of sexual assault. Service officials said that currently, reported hazing incidents are the primary indicator of the extent of hazing. However, data obtained through other sources suggest that hazing may be more widespread in DOD and the Coast Guard than the current reported numbers. For example, GAO analysis of organizational climate survey results from 2014 for the military services and the Coast Guard found that about 12 percent of respondents in the junior enlisted ranks indicated their belief that such incidents occur in their units. Although these results do not measure the prevalence of hazing incidents, they yield insights into servicemember perceptions of hazing, and suggest that an evaluation of the extent of hazing is warranted. Without evaluating the prevalence of hazing within their organizations, DOD and the Coast Guard will be limited in their ability to effectively target their efforts to address hazing. GAO is making 12 recommendations, among them that DOD and the Coast Guard regularly monitor policy implementation, issue guidance on the collection and tracking of hazing incident data, and evaluate the prevalence of hazing. DOD and DHS concurred with all of GAO's recommendations and have begun taking actions to address them.
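As an illustration of how a prevalence estimate could be derived from survey responses, as discussed above, the sketch below computes a proportion and an approximate 95 percent confidence interval from hypothetical response counts. This is a minimal sketch under assumed inputs (a simple random sample and a normal-approximation interval); it is not DOD's, the Coast Guard's, or GAO's estimation methodology, and the counts shown are invented for illustration.

import math

def prevalence_estimate(num_affirmative, num_responses, z=1.96):
    # Proportion of respondents reporting the experience, with an
    # approximate 95 percent confidence interval (normal approximation,
    # assuming a simple random sample -- an illustrative assumption).
    p = num_affirmative / num_responses
    margin = z * math.sqrt(p * (1 - p) / num_responses)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical counts, not actual survey results.
estimate, low, high = prevalence_estimate(num_affirmative=120, num_responses=1000)
print(f"Estimated prevalence: {estimate:.1%} (95% CI roughly {low:.1%} to {high:.1%})")

A larger or stratified sample would call for different interval and weighting choices; the point of the sketch is only that survey responses, rather than reported incidents alone, can support an estimate of how widespread hazing may be.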
You are an expert at summarizing long articles. Proceed to summarize the following text: DHS satisfied or partially satisfied each of the applicable legislative conditions specified in the act. In particular, the plan, including related program documentation and program officials’ statements, satisfied or provided for satisfying all key aspects of (1) compliance with the DHS enterprise architecture; (2) federal acquisition rules, requirements, guidelines, and systems acquisition management practices; and (3) review and approval by DHS and the Office of Management and Budget (OMB). Additionally, the plan, including program documentation and program officials’ statements, satisfied or provided for satisfying many, but not all, key aspects of OMB’s capital planning and investment review requirements. For example, DHS fulfilled the OMB requirement that it justify and describe its acquisition strategy. However, DHS does not have current life cycle costs or a current cost/benefit analysis for US-VISIT. DHS has implemented one, and either partially implemented or has initiated action to implement most of the remaining recommendations contained in our reports on the fiscal year 2002 and fiscal year 2003 expenditure plans. Each recommendation, along with its current status, is summarized below: Develop a system security plan and privacy impact assessment. The department has partially implemented this recommendation. As to the first part of this recommendation, the program office does not have a system security plan for US-VISIT. However, the US-VISIT Chief Information Officer (CIO) accredited Increment 1 based upon security certifications for each of Increment 1’s component systems and a review of each component’s security-related documentation. Second, although the program office has conducted a privacy impact assessment for Increment 1, the assessment does not satisfy all aspects of OMB guidance for conducting an assessment. For example, the assessment does not discuss alternatives to the methods of information collection, and the system documentation does not address privacy issues. Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, program management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with the Software Engineering Institute’s (SEI) guidance. The department plans to implement this recommendation. The US-VISIT program office has assigned responsibility for implementing the recommended controls. However, it has not yet developed explicit plans or time frames for defining and implementing them. Ensure that future expenditure plans are provided to the department’s House and Senate Appropriations Subcommittees in advance of US- VISIT funds being obligated. With respect to the fiscal year 2004 expenditure plan, DHS implemented this recommendation by providing the plan to the Senate and House subcommittees on January 27, 2004. According to the program director, as of February 2004 no funds had been obligated to US-VISIT. Ensure that future expenditure plans fully disclose US-VISIT capabilities, schedule, cost, and benefits. The department has partially implemented this recommendation. Specifically, the plan describes high-level capabilities, high-level schedule estimates, categories of expenditures by increment, and general benefits. 
However, the plan does not describe planned capabilities by increment and provides only general information on how money will be spent in each increment. Moreover, the plan does not identify all expected benefits in tangible, measurable, and meaningful terms, nor does it associate any benefits with increments. Establish and charter an executive body composed of senior-level representatives from DHS and each US-VISIT stakeholder organization to guide and direct the program. The department has implemented this recommendation by establishing a three-entity governance structure. The entities are (1) the Homeland Security Council, (2) the DHS Investment Review Board, and (3) the US- VISIT Federal Stakeholders Advisory Board. The purpose of the Homeland Security Council is to ensure the coordination of all homeland security- related activities among executive departments and agencies, and the Investment Review Board is expected to monitor US-VISIT’s achievement of cost, schedule, and performance goals. The advisory board is chartered to provide recommendations for overseeing program management and performance activities, including providing advice on the overarching US- VISIT vision; recommending changes to the vision and strategic direction; and providing a communications link for aligning strategic direction, priorities, and resources with stakeholder operations. Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. The department is in the process of implementing this recommendation. DHS has determined that US-VISIT will require 115 government personnel and has filled 41 of these, including 12 key management positions. However, 74 positions have yet to be filled, and all filled positions are staffed by detailees from other organizational units within the department. Clarify the operational context in which US-VISIT is to operate. The department is in the process of implementing this recommendation. DHS released Version 1 of its enterprise architecture in October 2003, and it plans to issue Version 2 in September 2004. Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. The department plans to implement this recommendation. The fiscal year 2004 expenditure plan identifies high-level benefits to be delivered, but the benefits are not associated with specific increments. Additionally, the plan does not identify the total cost of Increment 2. Program officials expected to finalize a cost-benefit analysis this past March and a US-VISIT life cycle cost estimate this past April. Define program office positions, roles, and responsibilities. The department is in the process of implementing this recommendation. Program officials are currently working with the Office of Personnel Management to define program position descriptions, including roles and responsibilities. The program office has partially completed defining the competencies for all 12 key management areas. These competencies are to be used in defining the position descriptions. Develop and implement a human capital strategy for the program office. The department plans to implement this recommendation in conjunction with DHS’s ongoing workforce planning, but stated that they have yet to develop a human capital strategy. According to these officials, DHS’s departmental workforce plan is scheduled for completion during fiscal year 2004. 
Develop a risk management plan and report all high risks areas and their status to the program’s governing body on a regular basis. The department has partially implemented this recommendation. The program has completed a draft risk management plan, and is currently defining risk management processes. The program is creating a risk management team to operate in lieu of formal processes until these are completed, and also maintains a risk-tracking database that is used to manage risks. Define performance standards for each program increment that are measurable and reflect the limitations imposed by relying on existing systems. The department is in the process of implementing this recommendation. The program office has defined limited performance standards, but not all standards are being defined in a way that reflects the performance limitations of existing systems. Our observations recognize accomplishments to date and address the need for rigorous and disciplined program management practices relating to system testing, independent verification and validation, and system change control. An overview of specific observations follows: Increment 1 commitments were largely met. An initial operating capability for entry (including biographic and biometric data collection) was deployed to 115 air and 14 sea ports of entry on January 5, 2004, with additional capabilities deployed on February 11, 2004. Exit capability (including biometric capture) was deployed to one air and one sea port of entry. Increment 1 testing was not managed effectively and was completed after the system became operational. The Increment 1 system acceptance test plan was developed largely during and after test execution. The department developed multiple plans, and only the final plan, which was done after testing was completed, included all required content, such as tests to be performed and test procedures. None of the test plan versions, including the final version, were concurred with by the system owner or approved by the IT project manager, as required. By not having a complete test plan before testing began, the US-VISIT program office unnecessarily increased the risk that the testing performed would not adequately address Increment 1 requirements and failed to have adequate assurance that the system was being fully tested. Further, by not fully testing Increment 1 before the system became operational, the program office assumed the risk of introducing errors into the deployed system. In fact, post-deployment problems surfaced with the Student and Exchange Visitor Information System (SEVIS) interface as a result of this approach, and manual work-arounds had to be implemented. The independent verification and validation contractor’s roles may be in conflict. The US-VISIT program plans to use its contractor to review some of the processes and products that the contractor may be responsible for defining or executing. Depending on the products and processes in question, this approach potentially impedes the contractor’s independence, and thus its effectiveness. A program-level change control board has not been established. Changes related to Increment 1 were controlled primarily through daily coordination meetings (i.e., oral discussions) among representatives from Increment 1 component systems teams and program officials, and the various boards already in place for the component systems. 
Without a structured and disciplined approach to change control, program officials do not have adequate assurance that changes made to the component systems for non-US-VISIT purposes do not interfere with US-VISIT functionality. The fiscal year 2004 expenditure plan does not disclose management reserve funding. Program officials, including the program director, stated that reserve funding is embedded within the expenditure plan’s various areas of proposed spending. However, the plan does not specifically disclose these embedded reserve amounts. By not creating, earmarking, and disclosing a specific management reserve fund in the plan, DHS is limiting its flexibility in addressing unexpected problems that could arise in the program’s various areas of proposed spending, and it is limiting the ability of the Congress to exercise effective oversight of this funding. Plans for future US-VISIT increments do not call for additional staff or facilities at land ports of entry. However, these plans are based on various assumptions that potential policy changes could invalidate. These changes could significantly increase the number of foreign nationals who would require processing through US-VISIT. Additionally, the Data Management Improvement Act Task Force’s 2003 Second Annual Report to Congress has noted that existing land port of entry facilities do not adequately support even the current entry and exit processes. Thus, future US-VISIT staffing and facility needs are uncertain. The fiscal year 2004 US-VISIT expenditure plan (with related program office documentation and representations) at least partially satisfies the legislative conditions imposed by the Congress. Further, steps are planned, under way, or completed to address most of our open recommendations. However, overall progress on all of our recommendations has been slow, and considerable work remains to fully address them. The majority of these recommendations are aimed at correcting fundamental limitations in the program office’s ability to manage US-VISIT in a way that reasonably ensures the delivery of mission value commensurate with costs and provides for the delivery of promised capabilities on time and within budget. Given this background, it is important for DHS to implement the recommendations quickly and completely through active planning and continuous monitoring and reporting. Until this occurs, the program will continue to be at high risk of not meeting expectations. To the US-VISIT program office’s credit, the first phase of the program has been deployed and is operating, and the commitments that DHS made regarding this initial operating capability were largely met. However, this was not accomplished in a manner that warrants repeating. In particular, the program office did not employ the kind of rigorous and disciplined management controls that are typically associated with successful programs, such as effective test management and configuration management practices. Moreover, the second phase of US-VISIT is already under way, and these controls are still not established. These controls, while significant for the initial phases of US-VISIT, are even more critical for the later phases, because the size and complexity of the program will only increase, and the later that problems are found, the harder and more costly they are to fix. 
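One of the disciplined test management practices referred to above is maintaining traceability between test cases and the requirements they verify, which the recommendations that follow also call for. The short sketch below shows a generic coverage check of that kind; the requirement and test-case identifiers are hypothetical, and the sketch is not a description of the US-VISIT program's actual test documentation or tools.

# Minimal sketch of a requirements-to-test-case traceability check.
# All identifiers below are hypothetical examples.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

test_cases = {
    "TC-01": {"REQ-001"},            # requirements each test case claims to verify
    "TC-02": {"REQ-002", "REQ-003"},
}

def coverage_report(requirements, test_cases):
    # Return the requirements verified by at least one test case and those not yet covered.
    covered = set().union(*test_cases.values()) if test_cases else set()
    return sorted(requirements & covered), sorted(requirements - covered)

covered, uncovered = coverage_report(requirements, test_cases)
print("Covered requirements:  ", covered)
print("Uncovered requirements:", uncovered)  # gaps to close before test execution begins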
Also important at this juncture in the program’s life are the still open questions surrounding whether the initial phases of US-VISIT will return value to the nation commensurate with their costs. Such questions warrant answers sooner rather than later, because of the program’s size, complexity, cost, and mission significance. It is imperative that DHS move swiftly to address the US-VISIT program management weaknesses that we previously identified, by implementing our remaining open recommendations. It is equally essential that the department quickly corrects the additional weaknesses that we have identified. Doing less will only increase the risk associated with US-VISIT. To better ensure that the US-VISIT program is worthy of investment and is managed effectively, we are reiterating our prior recommendations, and we further recommend that the Secretary of Homeland Security direct the Under Secretary for Border and Transportation Security to ensure that the US-VISIT program director takes the following actions: Develop and approve complete test plans before testing begins. These plans, at a minimum, should (1) specify the test environment, including test equipment, software, material, and necessary training; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. Establish processes for ensuring the independence of the IV&V contractor. Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. Identify and disclose to the Appropriations Committees management reserve funding embedded in the fiscal year 2004 expenditure plan. Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding. Assess the full impact of a key future US-VISIT increment on land port of entry workforce levels and facilities, including performing appropriate modeling exercises. To ensure that our recommendations addressing fundamental program management weaknesses are addressed quickly and completely, we further recommend that the Secretary direct the Under Secretary to have the program director develop a plan, including explicit tasks and milestones, for implementing all of our open recommendations, including those provided in this report. We further recommend that this plan provide for periodic reporting to the Secretary and Under Secretary on progress in implementing this plan. Lastly, we recommend that the Secretary report this progress, including reasons for delays, in all future US-VISIT expenditure plans. In written comments on a draft of this report signed by the US-VISIT Director (reprinted in app. II, along with our responses), DHS agreed with our recommendations and most of our observations. It also stated that it appreciated the guidance that the report provided and described actions that it is taking or plans to take in response to our recommendations. However, DHS stated that it did not fully agree with all of our findings, specifically offering comments on our characterization of the status of one open recommendation and two observations. First, it did not agree with our position that it had not developed a security plan and completed a privacy impact assessment. According to DHS, it has completed both. 
We acknowledge DHS’s activity on both of these issues, but disagree that completion of an adequate security plan and privacy impact assessment has occurred. As we state in the report, the department’s security plan for US-VISIT, titled Security and Privacy: Requirements & Guidelines Version 1.0, is a draft document, and it does not include information consistent with relevant guidance for a security plan, such as a risk assessment methodology and specific controls for meeting security requirements. Moreover, much of the document discusses guidelines for developing a security plan, rather than specific contents of a plan. Also, as we state in the report, the Privacy Impact Assessment was published but is not complete because it does not satisfy important parts of OMB guidance governing the content of these assessments, such as discussing alternatives to the designed methods of information collection and handling. Second, DHS stated that it did not fully agree with our observation that the Increment 1 system test plan was developed largely during and after testing, citing several steps that it took as part of Increment 1 requirements definition, test preparation, and test execution. However, none of the steps cited address our observations that DHS did not have a system acceptance test plan developed, approved, and available in time to use as the basis for conducting system acceptance testing and that only the version of the test plan modified on January 16, 2004 (after testing was completed) contained all of the required test plan content. Moreover, DHS’s comments acknowledge that the four versions of its Increment 1 test plan were developed during the course of test execution, and that the test schedule did not permit sufficient time for all stakeholders to review, and thus approve, the plans. Third, DHS commented on the roles and responsibilities of its various support contractors, and stated that we cited the wrong operative documentation governing the role of its independent verification and validation contractor. While we do not question the information provided in DHS’s comments concerning contractor roles, we would add that its comments omitted certain roles and responsibilities contained in the statement of work for one of its contractors. This omitted information is important because it is the basis for our observation that the program office planned to task the same contractor that was responsible for program management activities with performing independent verification and validation activities. Under these circumstances, the contractor could not be independent. In addition, we disagree with DHS’s comment that we cited the wrong operative documentation, and note that the document DHS said we should have used relates to a different support contractor than the one tasked with both performing program activities and performing independent verification and validation activities. The department also provided additional technical comments, which we have incorporated as appropriate into the report. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of State and the Director of OMB. Copies of this report will also be available at no charge on our Web site at www.gao.gov. 
Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Another contact and key contributors to this report are listed in appendix III. facilitate legitimate trade and travel, contribute to the integrity of the U.S. immigration system, and adhere to U.S. privacy laws and policies. US-VISIT capability is planned to be implemented in four increments. Increment 1 began operating on January 5, 2004, at major air and sea ports of entry (POEs). This goal has been added since the last expenditure plan. established by the Office of Management and Budget (OMB), including OMB Circular A-11, part 3. Complies with DHS's enterprise architecture. Complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. Is reviewed and approved by DHS and OMB. Is reviewed by GAO. OMB Circular A-11 establishes policy for planning, budgeting, acquisition, and management of federal capital assets. Our objectives were to (1) determine whether the US-VISIT fiscal year 2004 expenditure plan satisfies the legislative conditions, (2) determine the status of our US-VISIT open recommendations, and (3) provide any other observations about the expenditure plan and DHS's management of US-VISIT. We conducted our work at DHS's headquarters in Washington, D.C., and at its Atlanta Field Operations Office (Atlanta's William B. Hartsfield International Airport) from October 2003 through February 2004 in accordance with generally accepted government auditing standards. Details of our scope and methodology are given in attachment 1.

Legislative conditions
1. Meets the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7.
2. Complies with the DHS enterprise architecture.
3. Complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government.
4. Is reviewed and approved by DHS and OMB.
5. Is reviewed by GAO.

GAO open recommendations
1. Develop a system security plan and privacy impact assessment.
2. Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with SEI guidance.
3. Ensure that future expenditure plans are provided to DHS's House and Senate Appropriations Subcommittees in advance of US-VISIT funds being obligated.
4. Ensure that future expenditure plans fully disclose US-VISIT system capabilities, schedule, cost, and benefits to be delivered. Actions have been taken to fully implement the recommendation.
5. Establish and charter an executive body composed of senior-level representatives from DHS and each stakeholder organization to guide and direct the US-VISIT program.
6. Ensure that human capital and financial resources are provided to establish a fully functional and effective US-VISIT program office.
7. Clarify the operational context in which US-VISIT is to operate.
8. Determine whether proposed US-VISIT increments will produce mission value commensurate with costs and risks.
9. Define US-VISIT program office positions, roles, and responsibilities.
10.
Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. 11. Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. 12. Define performance standards for each US-VISIT increment that are measurable and reflect the limitations imposed by relying on existing systems. Commitments were largely met; the system is deployed and operating. Testing was not managed effectively; if continued, the current approach to testing would increase risks. The system acceptance test (SAT) plan was developed largely during and after test execution. The SAT plan available during testing was not complete. SAT was not completed before the system became operational. Key program issues exist that increase risks if not resolved. Independent verification and validation (IV&V) contractor’s roles may be conflicting. Program-level change control board has not been established. Expenditure plan does not disclose management reserve funding. Land POE workforce and facility needs are uncertain. To assist DHS in managing US-VISIT, we are making eight recommendations to the Secretary of DHS. In their comments on a draft of this briefing, US-VISIT program officials stated that they generally agreed with the briefing and that it was fair and balanced. collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination within the border management community. Classes of travelers that are not subject to US-VISIT are foreign nationals admitted on A-1, A-2, C-3 (except for attendants, servants, or personal employees of accredited officials), G-1, G-2, G-3, G-4, NATO-1, NATO-2, NATO-3, NATO-4, NATO-5, or NATO-6 visas, unless the Secretary of State and the Secretary of Homeland Security jointly determine that a class of such aliens should be subject to the rule; children under the age of 14; and persons over the age of 79. The Miami Royal Caribbean seaport and the Baltimore/Washington International Airport. included the development of policies, procedures, and associated training for implementing US-VISIT at the air and sea POEs; included outreach efforts, such as brochures, demonstration videos, and signage at air and sea POEs; did not include additional inspector staff at air and sea POEs; and did not include the acquisition of additional entry facilities. For exit, DHS is in the process of assessing facilities space and installing conduit, electrical supply, and signage. Increment 2 is divided into two Increments—2A and 2B. Increment 2A is to include at all POEs the capability to process machine- readable visas and other travel and entry documents that use biometric identifiers. This increment is to be implemented by October 26, 2004. According to the US-VISIT Deputy Director: Each of the 745 entry and exit traffic lanes at these 50 land POEs is to have the infrastructure, such as underground conduit, necessary to install the RF technology. 
Secondary inspection is used for more detailed inspections that may include checking more databases, conducting more intensive interviews of the individual, or both. RF technology would require proximity cards and card readers. RF readers read the information contained on the card when the card is passed near the reader, and could be used to verify the identity of the card holder. of manually completed I-94 forms1 from exiting travelers. Increment 3 is to expand Increment 2B system capability to the remaining 115 land POEs. It is to be implemented by December 31, 2005. I-94 forms have been used for years to track foreign nationals’ arrivals and departures. Each form is divided into two parts: an entry portion and an exit portion. Each form contains a unique number printed on both portions of the form for the purposes of subsequent recording and matching the arrival and departure records on nonimmigrants. An indefinite-delivery/indefinite-quantity contract provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period of time. The government schedules deliveries or performance by placing orders with the contractor. IBIS lookout sources include: DHS’s Customs and Border Protection and Immigration and Customs Enforcement; the Federal Bureau of Investigation; legacy Immigration and Naturalization Service and Customs information; the U.S. Secret Service; the U.S. Coast Guard; the Internal Revenue Service; the Drug Enforcement Agency; the Bureau of Alcohol, Tobacco & Firearms; the U.S. Marshals Service; the U.S. Office of Foreign Asset Control; the National Guard; the Treasury Inspector General; the U.S. Department of Agriculture; the Department of Defense Inspector General; the Royal Canadian Mounted Police; the U.S. State Department; Interpol; the Food and Drug Administration; the Financial Crimes Enforcement Network; the Bureau of Engraving and Printing; and the Department of Justice Office of Special Investigations. This footnote has been modified to include additional information obtained since the briefing’s delivery to the Committees. stores biometric data about foreign visitors;1 Student Exchange Visitor Information System (SEVIS), a system that contains information on foreign students; Computer Linked Application Information Management System (CLAIMS 3), a system that contains information on foreign nationals who request benefits, such as change of status or extension of stay; and Consular Consolidated Database (CCD), a system that includes information on whether a visa applicant has previously applied for a visa or currently has a valid U.S. visa. Includes data such as: Federal Bureau of Investigation information on all known and suspected terrorists, selected wanted persons (foreign-born, unknown place of birth, previously arrested by DHS), and previous criminal histories for high-risk countries; DHS Immigration and Customs Enforcement information on deported felons and sexual registrants; DHS information on previous criminal histories and previous IDENT enrollments. Information from the bureau includes fingerprints from the Integrated Automated Fingerprint Identification System. This footnote has been modified to include additional information obtained since the briefing’s delivery to the Committees. A CD-ROM is a digital storage device that is capable of being read, but not overwritten. CLAIMS 3’s interface with ADIS was deployed and implemented on February 11, 2004. U.S. 
General Accounting Office, Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed, GAO-03-1083 (Washington, D.C.: Sept. 19, 2003). Operational context is unsettled. Near-term facilities solutions pose challenges. Mission value of first increment is currently unknown.

GAO's Review of Fiscal Year 2002 Expenditure Plan
In our report on the fiscal year 2002 expenditure plan, we reported that INS intended to acquire and deploy a system with functional and performance capabilities consistent with the general scope of capabilities under various laws; the plan did not provide sufficient information to allow Congress to oversee the program; INS had not developed a security plan and privacy impact assessment; and INS had not implemented acquisition management controls in the areas of acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, and evaluation consistent with SEI guidance. We made recommendations to address these areas. U.S. General Accounting Office, Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning, GAO-03-563 (Washington, D.C.: June 9, 2003).

Fiscal Year 2004 Expenditure Plan Summary (see next slides for descriptions) Available appropriations (millions)
The US-VISIT expenditure plan satisfies or partially satisfies each of the legislative conditions.

Condition 1. The plan, including related program documentation and program officials' statements, partially satisfies the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7, which establishes policy for planning, budgeting, acquisition, and management of federal capital assets. The table that follows provides examples of A-11 conditions, paired with the results of our analysis.

Provide justification and describe acquisition strategy: US-VISIT has completed an Acquisition Plan dated November 28, 2003. The plan provides a high-level justification and description of the acquisition strategy for the system.

Summarize life cycle costs and cost/benefit analysis, including the return on investment: DHS does not have current life cycle costs or a current cost/benefit analysis for US-VISIT. According to program officials, US-VISIT has a draft life cycle cost estimate and cost/benefit analysis. Both are expected to be completed in March 2004.

A security plan for US-VISIT has not been developed. Instead, US-VISIT was certified and accredited based upon the updated security certification for each of Increment 1's component systems. The US-VISIT program published a privacy impact assessment on January 5, 2004.

Provide risk inventory and assessment: US-VISIT has developed a draft risk management plan and a process to implement and manage risks. US-VISIT also maintains a risk and issues tracking database.

Condition 2. The plan, including related program documentation and program officials' statements, satisfies this condition by providing for compliance with DHS's enterprise architecture. DHS released version 1 of the architecture in October 2003. It plans to issue version 2 in September 2004. According to the DHS Chief Information Officer (CIO), DHS is developing a process to align its systems modernization efforts, such as US-VISIT, to its enterprise architecture.
Alignment of US-VISIT to the enterprise architecture has not yet been addressed, but DHS CIO and US-VISIT officials stated that they plan to do so. Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. Condition 3. The plan, including related program documentation and program officials’ statements, satisfies the condition that it comply with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. These criteria provide a management framework based on the use of rigorous and disciplined processes for planning, managing, and controlling the acquisition of IT resources, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, and evaluation. The table that follows provides examples of the results of our analysis. The US-VISIT program has developed and documented an acquisition strategy and plan for a prime contractor to perform activities for modernizing US-VISIT business processes and systems, calling for, among other things, these activities to meet all relevant legislative requirements. Activities identified include U.S. border management- related work and support; other DHS-related strategic planning, and any associated systems development and integration, business process reengineering, organizational change management, information technology support, and program management work and support; and other business, technical, and management capabilities to meet the legislative mandates, operational needs, and government business requirements. The strategy defines a set of acquisition objectives, identifies key roles and responsibilities, sets general evaluation criteria, and establishes a high-level acquisition schedule. The plan describes initial tasking, identifies existing systems with which to interoperate/interface, defines a set of high-level risks, and lists applicable legislation. The RFP for the prime contractor acquisition was issued on November 28, 2003. A selecting official has been assigned responsibility, and a team, including contract specialists, has been formed and has received training related to this acquisition. A set of high-level evaluation factors have been defined for selecting the prime integrator, and the team plans to define more detailed criteria. Condition 4 met. The plan, including related program documentation and program officials’ statements, satisfies the requirement that it be reviewed and approved by DHS and OMB. DHS and OMB reviewed and approved the US-VISIT fiscal year 2004 expenditure plan. Specifically, the DHS IRB1 approved the plan on December 17, 2003, and OMB approved the plan on January 27, 2004. The IRB is the executive review board that provides acquisition oversight of DHS level 1 investments and conducts portfolio management. Level 1 investment criteria are contract costs exceeding $50 million; importance to DHS strategic and performance plans; high development, operating, or maintenance costs; high risk; high return; significant resource administration; and life cycle costs exceeding $200 million. According to the DHS CIO, US-VISIT is a level 1 investment. Condition 5 met. The plan satisfies the requirement that it be reviewed by GAO. Our review was completed on March 2, 2004. Open Recommendation 1: Develop a system security plan and privacy impact assessment. Security Plan. DHS does not have a security plan for US-VISIT. 
Although program officials provided us with a draft document entitled Security & Privacy: Requirements & Guidelines Version 1.0,1 this document does not include information consistent with relevant guidance for a security plan. The OMB and the National Institute of Standards and Technology have issued security planning guidance.2 In general, this guidance requires the development of system security plans that (1) provide an overview of the system security requirements, (2) include a description of the controls in place or planned for meeting the security requirements, (3) delineate roles and responsibilities of all individuals who access the system, (4) discuss a risk assessment methodology, and (5) address security awareness and training. Security & Privacy: Requirements & Guidelines Version 1.0 Working Draft, US-VISIT Program (May 15, 2003). Office Management and Budget Circular Number A-130, Revised (Transmittal Memorandum No. 4), Appendix III, “Security of Federal Automated Information Resources” (Nov. 28, 2000) and National Institute of Standards and Technology, Guide for Developing Security Plans for Information Systems, NIST Special Publication 800-18 (December 1998). The draft document identifies security requirements for the US-VISIT program and addresses the need for training and awareness. However, the document does not include (1) specific controls for meeting the security requirements, (2) a risk assessment methodology, and (3) roles and responsibilities of individuals with system access. Moreover, with the exception of the US-VISIT security requirements, much of the document discusses guidelines for developing a security plan, rather than specific contents of US-VISIT security plan. Despite the absence of a security plan, the US-VISIT CIO accredited Increment 1 based upon updated security certifications1 for each of Increment 1’s component systems (e.g., ADIS, IDENT, and IBIS) and a review of the documentation, including component security plans, associated with these updates. According to the security evaluation report (SER), the risks associated with each component system were evaluated, component system vulnerabilities were identified, and component system certifications were granted. Certification is the evaluation of the extent to which a system meets a set of security requirements. Accreditation is the authorization and approval granted to a system to process sensitive data in an operational environment; this is made on the basis of a compliance certification by designated technical personnel of the extent to which design and implementation of the system meet defined technical requirements for achieving data security. Based on the SER, the US-VISIT security officer certified Increment 1, and Increment 1 was accredited and granted an interim authority to operate for 6 months. This authority will expire on June 18, 2004. Additionally, this authority would not extend to a modified version of Increment 1. For example, the SER states that US-VISIT exit functionality was not part of the Increment 1 certification and accreditation, and that it was to be certified and accredited separately from Increment 1. The SER also notes that the Increment 1 certification will require updating upon the completion of security documentation for the exit functionality. Privacy Impact Assessment. The US-VISIT program has conducted a privacy impact assessment for Increment 1. 
According to OMB guidance,1 the depth and content of such an assessment should be appropriate for the nature of the information to be collected and the size and complexity of the system involved. OMB Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002, OMB M-03-22 (Sept. 26, 2003). The assessment should also, among other things, (1) identify appropriate measures for mitigating identified risks, (2) discuss the rationale for the final design or business process choice, (3) discuss alternatives to the designed information collection and handling, and (4) address whether privacy is provided for in system development documentation. The OMB guidance also notes that an assessment may need to be updated before deploying a system in order to, among other things, address choices made in designing the system or in information collection and handling. The Increment 1 assessment satisfies some, but not all, of the above four OMB guidance areas. Specifically, it identifies Increment 1 privacy risks, discusses mitigation strategies for each risk, and briefly discusses the rationale for design choices. However, the assessment does not discuss alternatives to the designed methods of information collection and handling. Additionally, the Increment 1 systems documentation does not address privacy issues. According to the Program Director, the assessment will be updated for future increments. Open Recommendation 2: Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with SEI guidance. According to the US-VISIT Program Director, the program office has established a goal of achieving SEI Software Acquisition Capability Maturity Model (SA-CMM®) level 2, and the office’s Acquisition and Program Management Lead has responsibility for achieving this status. To facilitate attaining this goal, the Acquisition and Program Management Lead’s organization includes functions consistent with the management controls defined by the SA-CMM®, such as acquisition planning and requirements development and management. According to the Acquisition and Program Management Lead, an approach for achieving level 2 will be defined as part of a strategy that has yet to be developed. However, the lead could not provide a date for when the strategy would be developed. The expenditure plan indicates that the US-VISIT program office will solicit SEI’s participation in achieving level 2. Open Recommendation 3: Ensure that future expenditure plans are provided to the Department’s House and Senate Appropriations Subcommittees on Homeland Security in advance of US-VISIT funds being obligated. The Congress appropriated $330 million in fiscal year 2004 funds for the US-VISIT program.1 On January 27, 2004, DHS provided its fiscal year 2004 expenditure plan to the Senate and House Appropriations Subcommittees on Homeland Security. On January 26, 2004, DHS submitted to the Senate and House Appropriations Subcommittees on Homeland Security a request for the release of $25 million from the fiscal year 2004 appropriations. Department of Homeland Security Appropriations Act, 2004, Pub. L. 108-90 (Oct. 1, 2003). Open Recommendation 4: Ensure that future expenditure plans fully disclose US- VISIT system capabilities, schedule, cost, and benefits to be delivered. 
The expenditure plan identifies high-level capabilities, such as recording the arrival of foreign nationals, identifying foreign nationals who have stayed beyond the authorized period, and using biometrics to verify the identity of foreign nationals. The plan does not associate these capabilities with specific increments. The plan identifies a high-level schedule for implementing the system. For example, Increment 2A is to be implemented by October 26, 2004; Increment 2B by December 31, 2004; and Increment 3 by December 31, 2005. The plan identifies total fiscal year 2004 costs for each increment. For example, DHS plans to obligate $73 million in fiscal year 2004 funds for Increment 2A. However, the plan does not break out how the $73 million will be used to support Increment 2A, beyond indicating that the funds will be used to read biometric information in travel documents, including fingerprints and photos, at all ports of entry. Also, the plan does not identify any nongovernment costs. The plan identifies seven general benefits and planned performance metrics for measuring three of the seven benefits. The plan does not associate the benefits with increments. The following table shows US-VISIT benefits and whether associated metrics have been defined. Open Recommendation 5: Establish and charter an executive body composed of senior-level representatives from DHS and each stakeholder organization to guide and direct the US-VISIT program. DHS has established a three-entity governance structure. The entities are (1) the Homeland Security Council (HSC), (2) the DHS Investment Review Board (IRB), and (3) the US-VISIT Federal Stakeholders Advisory Board. The HSC is tasked with ensuring the coordination of all homeland security-related activities among executive departments and agencies and is composed of senior-level executives from across the federal government. According to the expenditure plan, the HSC helps to set policy boundaries for the US-VISIT program. According to DHS's investment management guidance, the IRB is the executive review board that provides acquisition oversight of DHS level 1 investments and conducts portfolio management. The primary function of the IRB is to review level 1 investments for formal entry into the budget process and at key decision points. The plan states that the IRB is to monitor the US-VISIT program's achievement of cost, schedule, and performance goals. DHS Management Directive 1400, Investment Review Process (undated). Level 1 investment criteria are contract costs exceeding $50 million; importance to DHS strategic and performance plans; high development, operating, or maintenance costs; high risk; high return; significant resource administration; and life cycle costs exceeding $200 million. According to the DHS CIO, US-VISIT is a level 1 investment.
According to its charter, the Advisory Board provides recommendations for overseeing US-VISIT management and performance activities, including providing advice on the overarching US-VISIT vision; recommending the overall US-VISIT strategy and its responsiveness to all operational missions, both within DHS and with its participating government agencies; recommending changes to the US-VISIT vision and strategic direction; providing a communication link for aligning strategic direction, priorities, and resources with stakeholder operations; reviewing and assessing US-VISIT programwide institutional processes to ensure that business, fiscal, and technical priorities are integrated and carried out in accordance with established priorities; and reviewing and recommending new US-VISIT program initiatives, including the scope, funding, and programmatic resources required. Open Recommendation 6: Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. DHS established the US-VISIT program office in July 2003 and determined the office’s staffing needs to be 115 government and 117 contractor personnel. As of February 2004, DHS had filled all the program office’s 12 key management and 29 other positions, leaving 74 positions to be filled. All filled positions are currently staffed by detailees from other organizational units within DHS, such as Immigration and Customs Enforcement. The graphic on the next page shows the US-VISIT program office organization structure and functions, the number of positions needed by each office, and the number of positions filled by detailees. In addition to the 115 government staff anticipated, the program anticipated 117 contractor support staff. As of February 2004, program officials told us they had filled 97.5 of these 117. Open Recommendation 7: Clarify the operational context in which US-VISIT is to operate. DHS is in the process of defining the operational context in which US-VISIT is to operate. In October 2003, DHS released version 1 of its enterprise architecture, and it plans to issue version 2 in September 2004.1 We are currently reviewing DHS’s latest version of its architecture at the request of the House Committee on Government Reform’s Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census. Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. Open Recommendation 8: Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. The expenditure plan identifies high-level benefits to be provided by the US-VISIT program, such as the ability to prevent the entry of high-threat or inadmissible individuals through improved and/or advanced access to data before the foreign national’s arrival. However, the plan does not associate these benefits with specific increments. Further, the plan does not identify the total estimated cost of Increment 2. Instead, the plan identifies only fiscal year 2004 funds to be obligated for Increments 2A and 2B, which are $73 million and $81 million, respectively. In addition, the plan does not include any nongovernmental costs associated with US-VISIT. The RFP indicates that the total solution for Increment 2 has not been determined and will not be finalized until the prime contractor is on board. 
Until that time, DHS is not in a position to determine the total cost of Increments 2A and 2B, and thus whether they will produce mission value commensurate with costs. According to program officials, they have developed a life cycle cost estimate and cost-benefit analysis that are currently being reviewed and are to be completed in March 2004. According to these officials, the cost-benefit analysis will be for Increment 2B. Open Recommendation 9: Define US-VISIT program office positions, roles, and responsibilities. The US-VISIT program is working with the Office of Personnel Management (OPM) through an interagency agreement to, among other things, assist the program office in defining its position descriptions (including position roles and responsibilities), issuing vacancy announcements, and recruiting persons to fill the positions. The US-VISIT program is also working with OPM to define the competencies that are to be used in defining the position descriptions. As of February 2004, the program office reported that it has partially completed defining the competencies for its 12 offices and has partially completed position descriptions for 4 of the 12 offices. The following slide shows the competencies defined and position descriptions written. Open Recommendation 10: Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. The US-VISIT program office has not yet defined a human capital strategy, although program officials stated that they plan to develop one in concert with the department’s ongoing workforce planning. As part of its effort, DHS is drafting a departmental workforce plan that, according to agency officials, will likely be completed during fiscal year 2004. According to the Program Director, the Director of Administration and Management is responsible for developing the program’s strategic human capital plan. However, descriptions of the Administration and Management office functions, including those provided by the program office and those in the expenditure plan, do not include strategic human capital planning. Open Recommendation 11: Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. The program office has developed a draft risk management plan, dated June 2003. The draft defines plans to develop, implement, and institutionalize a risk management program. The program’s primary function is to identify and mitigate US-VISIT risks. The expenditure plan states that the program office is currently defining risk management processes. In the interim, the program office is creating a risk management team to assist the program office in proactively identifying and managing risks while formal processes and procedures are being developed. The expenditure plan also states that the US-VISIT program office currently maintains a risk and issue tracking database and conducts weekly risk and schedule meetings. Within the risk database, each risk is assigned a risk impact rating and an owner. The database also gives the date when the risk is considered closed. In addition, the US-VISIT program office has staff dedicated to tracking these items and meeting weekly with the various integrated project teams to mitigate potential risks. 
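The risk and issue tracking database described above is, in essence, a register in which each entry carries an impact rating, an owner, and a closure date. A minimal sketch of such a register follows, with hypothetical field names and sample entries that are not drawn from US-VISIT documentation; the weekly-review query simply returns the entries that have no closure date.

    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    @dataclass
    class RiskRecord:
        # Mirrors the register described above: an impact rating, an assigned
        # owner, and the date on which the risk is considered closed.
        risk_id: str
        description: str
        impact: str                       # hypothetical scale, e.g. "high"/"medium"/"low"
        owner: str
        closed_on: Optional[date] = None  # None means the risk is still open

    def open_risks(register: List[RiskRecord]) -> List[RiskRecord]:
        """Return the entries a weekly risk meeting would review (no closure date)."""
        return [r for r in register if r.closed_on is None]

    # Hypothetical sample entries for illustration only.
    register = [
        RiskRecord("R-001", "Component interface not ready for acceptance testing", "high", "Test lead"),
        RiskRecord("R-002", "Facility conduit work behind schedule", "medium", "Facilities lead",
                   closed_on=date(2004, 1, 30)),
    ]
    for risk in open_risks(register):
        print(risk.risk_id, risk.impact, risk.owner)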
Open Recommendation 12: Define performance standards for each US-VISIT increment that are measurable and reflect the limitations imposed by relying on existing systems. US-VISIT has defined limited, measurable performance standards. For example: System availability—the system shall be available 99.5 percent of the time. Data currency—(1) US-VISIT Increment 1 Doc Key data shall be made available to any interfacing US-VISIT system within 24 hours of the event (enrollment, biometric encounter, departure, inspector-modified data); (2) IBIS/APIS arrival manifests, departure manifests, and inspector-modified data shall be made available to ADIS within 24 hours of each stated event; and (3) IDENT shall reconcile a biometric encounter within 24 hours of the event. System availability is defined as the time the system is operating satisfactorily, expressed as a percentage of time that the system is required to be operational. Doc Key includes such information as biographical data and the fingerprint identification number, and is used to track a foreign national’s identity as the information is shared between systems. However, not all performance standards are being defined in a way that reflects the performance limitations of existing systems. In particular, US-VISIT documentation states that the system performance standard for Increment 1 is 99.5 percent. However, Increment 1 availability is the product of its component system availabilities. Given that US-VISIT system documentation also states that the system availability performance standard for IDENT and ADIS is 99.5 percent, Increment 1 system availability would have to be something less than 99.5 percent (0.995 x 0.995 x the availabilities of the other component systems, which is at most about 99.0 percent). Observation 1: Increment 1 commitments were largely met; the system is deployed and operating. According to DHS, Increment 1 was to deliver an initial operating capability to all air and sea POEs by December 31, 2003, that included recording the arrival and departure of foreign nationals using passenger and crew manifest data, verifying foreign nationals’ identity upon entry into the United States through the use of biometrics and checks against watchlists at air POEs and 13 of 42 sea POEs, interfacing with seven existing systems that contain data about foreign nationals, identifying foreign nationals who have overstayed their visits or changed their visitor status, and potentially including an exit capability beyond the capture of the manifest data. Generally, an initial operating capability was delivered to air and sea POEs on January 5, 2004. In particular, Increment 1 entry capability (including biographic and biometric data collection) was deployed to 115 airports and 14 seaports on January 5, 2004. Further, while the expenditure plan states that an Increment 1 exit capability was deployed to 80 air and 14 sea POEs on January 5, 2004, exit capability (including biometric capture) was deployed to only one air POE (Baltimore/Washington International Airport) and one sea POE (Miami Royal Caribbean seaport). DHS’s specific satisfaction of each commitment is described on the following slides. INS Data Management Improvement Act of 2000, Pub. L. 106-215 (June 15, 2000). Recording the arrival and departure of foreign nationals using passenger and crew manifest data: Satisfied: Carriers submit electronic arrival and departure manifest data to IBIS/APIS. 
Verifying foreign nationals’ identity upon entry into the United States through the use of biometrics and checks against watchlists at air POEs and 13 sea POEs: Satisfied: After carriers submit electronic manifest data to IBIS/APIS, IBIS/APIS is queried to determine whether there is any biographic lookout or visa information for the foreign national. Once the foreign national arrives at a primary POE inspection booth, the inspector, using a document reader, scans the machine-readable travel documents. IBIS/APIS returns any existing records on the foreign national, including manifest data matches and biographic lookout hits. When a match is found in the manifest data, the foreign national’s name is highlighted and outlined on the manifest data portion of the screen. Biographic information, such as name and date of birth, is displayed on the bottom half of the screen, as well as the picture from the scanned visa. IBIS also returns information about whether there are, within IDENT, existing fingerprints for the foreign national. The inspector switches to the IDENT screen and scans the foreign national’s fingerprints (left and right index fingers) and photograph. The system accepts the best fingerprints available within the 5-second scanning period. This information is forwarded to the IDENT database, where it is checked against stored fingerprints in the IDENT lookout database. If no prints are currently in the IDENT database, the foreign national is enrolled in US-VISIT (i.e., biographic and biometric data are entered). If the foreign national’s fingerprints are already in IDENT, the system performs a 1:1 match (a comparison of the fingerprint taken during the primary inspection to the one on file) to confirm that the person submitting the fingerprints is the person on file. If the system finds a mismatch of fingerprints or a watchlist hit, the foreign national is sent to secondary inspection for further screening or processing. Interfacing with seven existing systems that contain data about foreign nationals: Largely satisfied: As of January 5, 2004, US-VISIT interfaced with six of the seven existing systems. The CLAIMS 3 to ADIS interface was not operational on January 5, 2004, but program officials told us that it was subsequently placed into production on February 11, 2004. Identifying foreign nationals who have overstayed their visits or changed their visitor status: Largely satisfied: ADIS matches entry and exit manifest data provided by air and sea carriers. The exit process includes the carriers’ submission of electronic manifest data to IBIS/APIS. This biographic information is passed to ADIS, where it is matched against entry information. US-VISIT was to rely on interfaces with CLAIMS 3 and SEVIS to obtain information regarding changes in visitor status. However, as of January 5, 2004, the CLAIMS 3 interface was not operational; it was subsequently placed into production on February 11, 2004. Further, although the SEVIS to ADIS interface was implemented on January 5, 2004, after January 5, problems surfaced, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Potentially including an exit capability beyond the capture of the manifest data: Not satisfied: Biometric exit capability was not deployed to the 80 air and 14 sea POEs that received Increment 1 capability. 
Instead, biometric exit capability was provided to two POEs for pilot testing. Under this testing, foreign nationals use a self-serve kiosk where they are prompted to scan their travel documentation and provide their fingerprints (right and left index fingers). On a daily basis, the information collected on departed passengers is downloaded to a CD-ROM.2 The CD is then express mailed to a DHS contractor facility to be uploaded into IDENT, where a 1:1 match is performed (i.e., the fingerprint captured during entry is compared with the one captured at exit). According to program officials, biometric capture for exit was deployed at two POEs on January 5, 2004, as a pilot. According to these officials, this exit capability was deployed to only two POEs because US-VISIT decided to evaluate other exit alternatives. Only 80 of the 115 air POEs are departure airports for international flights. A CD-ROM is a digital storage device that is capable of being read, but not overwritten. Observation 2: The system acceptance test (SAT) plan was developed largely during and after test execution. The purpose of SAT is to identify and correct system defects (i.e., unmet system functional, performance, and interface requirements) and thereby obtain reasonable assurance that the system performs as specified before it is deployed and operationally used. To be effective, testing activities should be planned and implemented in a structured and disciplined fashion. Among other things, this includes developing effective test plans to guide the testing activities. According to relevant systems development guidance,1 SAT plans are to be developed before test execution. However, this was not the case for Increment 1. Specifically, the US-VISIT program provided us with four versions of a test plan, each containing more information than the previous version. While the initial version was dated September 18, 2003, which is before testing began, the three subsequent versions (all dated November 17, 2003) were modified on November 25, 2003, December 18, 2003, and January 16, 2004, respectively. According to US-VISIT officials, in the absence of a DHS Systems Development Life Cycle (SDLC), they followed the former Immigration and Naturalization Service’s SDLC, version 6.0, to manage US-VISIT development. According to the program office, the version modified on January 16, 2004, is the final plan. According to the SAT Test Analysis Report (dated January 23, 2004), testing began on September 29, 2003, and was completed on January 7, 2004, meaning that the plans governing the execution of testing were not sufficiently developed before test execution.1 The following timeline compares test plan development and execution. According to an IT management program official, although the Test Analysis Report was marked “Final,” it is still being reviewed. According to US-VISIT officials, SAT test plans were not completed before testing began because of the compressed schedule for testing. According to these officials, a draft test plan was developed and periodically updated to reflect documentation provided by the component contractors. In the absence of a complete test plan before testing began, the US-VISIT program office unnecessarily increased the risk that the testing performed would not adequately address Increment 1 requirements, which increased the chances of either having to redo already executed tests or deploy a system that would not perform as intended. 
In fact, postdeployment problems surfaced with the SEVIS interface, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Observation 3: SAT plan available during testing was not complete. To be effective, testing activities should be planned and implemented in a structured and disciplined fashion. Among other things, this includes developing effective test plans to guide the testing activities. According to relevant systems development guidance, a complete test plan (1) specifies the test environment, including test equipment, software, material, and necessary training; (2) describes each test to be performed, including test controls, inputs, and expected outputs; (3) defines the test procedures to be followed in conducting the tests; and (4) provides traceability between test cases and the requirements to be verified by the testing.1 This guidance also requires that the system owner concur with, and the IT project manager approve, the test plan before SAT testing. According to US-VISIT officials, in the absence of a DHS Systems Development Life Cycle (SDLC), they followed the former Immigration and Naturalization Service’s SDLC, version 6.0, to manage US-VISIT development. As previously noted, the US-VISIT program office provided us with four versions of the SAT test plan. The first three versions of the plan were not complete. The final plan largely satisfied the above criteria. The September 18, 2003, test plan included a description of the test environment and a brief description of tests to be performed, but the description of the tests did not include controls, inputs, and expected outputs. Further, the plan did not include specific test procedures for implementing the test cases and provide traceability between the test cases and the requirements that they were designed to test. Similarly, the November 25, 2003, test plan included a description of the test environment and a brief description of tests to be performed, but the description of the tests did not include controls, inputs, and expected outputs. Further, the plan did not include specific test procedures for implementing the test cases or provide traceability between the test cases and the requirements they were designed to test. The December 18, 2003, test plan included a description of the test environment and a brief description of 55 tests to be performed. The plan also described actual test procedures and controls, inputs, and expected outputs for 24 of the 55 test cases. The plan included traceability between the test cases and requirements. The January 16, 2004, test plan included a description of the test environment; the tests to be performed, including inputs, controls, and expected outputs; the actual test procedures for each test case; and traceability between the test cases and requirements. None of the test plan versions, including the final version, indicated concurrence by the system owner or approval by the IT project manager. The following graphic shows the SAT plans’ satisfaction of relevant criteria. According to US-VISIT officials, SAT test plans were not completed before testing began because the compressed schedule necessitated continuously updating the plan as documentation was provided by the component contractors. According to an IT management official, test cases were nevertheless available for ADIS and IDENT in these systems’ regression test plans or in a test case repository. 
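The criteria above (test descriptions with controls, inputs, and expected outputs; documented procedures; and traceability to requirements) lend themselves to a simple structural check. The sketch below is a hypothetical illustration of how test cases can be recorded with requirement traceability and how requirements with no covering test case can be flagged; the requirement identifiers and test cases are invented for illustration and are not taken from the US-VISIT test documentation.

    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class TestCase:
        # A test case with the content the criteria above call for: inputs,
        # expected outputs, a procedure, and the requirements it verifies.
        case_id: str
        inputs: str
        expected_output: str
        procedure: str
        requirements: Set[str] = field(default_factory=set)

    def untested_requirements(all_requirements: Set[str], cases: List[TestCase]) -> Set[str]:
        """Return requirements that no test case traces to (a coverage gap)."""
        covered: Set[str] = set()
        for case in cases:
            covered |= case.requirements
        return all_requirements - covered

    # Invented requirement identifiers and test cases, for illustration only.
    requirements = {"REQ-ENTRY-01", "REQ-EXIT-02", "REQ-IFACE-03"}
    cases = [
        TestCase("TC-001", "arrival manifest record", "arrival stored and matched",
                 "submit manifest; query arrival status", {"REQ-ENTRY-01"}),
        TestCase("TC-002", "departure manifest record", "entry and exit records matched",
                 "submit manifest; verify match", {"REQ-EXIT-02"}),
    ]
    print(untested_requirements(requirements, cases))  # {'REQ-IFACE-03'}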
Without a complete test plan for Increment 1, DHS did not have adequate assurance that the system was being fully tested, and it unnecessarily assumed the risk that errors detected would not be addressed before the system was deployed, and that the system would not perform as intended when deployed. In fact, postdeployment problems surfaced with the SEVIS interface, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Observation 4: SAT was not completed before the system became operational. The purpose of SAT is to identify and correct system defects (i.e., unmet system functional, performance, and interface requirements) and thereby obtain reasonable assurance that the system performs as specified before it is deployed and operationally used. SAT is accomplished in part by (1) executing a predefined set of test cases, each traceable to one or more system requirements, (2) determining if test case outcomes produce expected results, and (3) correcting identified problems. To the extent that test cases are not executed, the scope of system testing can be impaired, and thus the level of assurance that the system will perform satisfactorily is reduced. Increment 1 began operating on January 5, 2004. However, according to the SAT Test Analysis Report, testing was completed 2 days after Increment 1 began operating (January 7, 2004). Moreover, the Test Analysis Report shows that important test cases were not executed. For example, none of the test cases designed to test the CLAIMS 3 and SEVIS interfaces were executed. According to agency officials, the CLAIMS 3 to ADIS interface was not ready for acceptance testing before January 5, 2004. Accordingly, deployment of this capability and the associated testing were deferred; they were completed on February 11, 2004. Similarly, the SEVIS to ADIS interface was not ready for testing before January 5, 2004. However, this interface was implemented on January 5, 2004, without acceptance testing. According to program officials, the program owner and technical project managers were aware of the risks associated with this approach. By not fully testing Increment 1 before the system became operational, the program office assumed the risk of introducing errors into the deployed system and potentially jeopardizing its ability to effectively perform its core functions. In fact, postdeployment problems surfaced with the SEVIS interface as a result of this approach, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Observation 5: Independent verification and validation (IV&V) contractor’s roles may be conflicting. As we have previously reported,1 the purpose of independent verification and validation (IV&V) is to provide an independent review of system processes and products. The use of IV&V is a recognized best practice for large and complex system development and acquisition projects like US-VISIT. To be effective, the IV&V function must be performed by an entity that is independent of the processes and products that are being reviewed. The US-VISIT program plans to use its IV&V contractor to review some of the processes and products that the contractor may be responsible for. For example, the contractor statement of work, dated July 18, 2003, states that it shall provide program and project management support, including providing guidance and direction and creating some of the strategic program and project level products. 
At the same time, the statement of work states that the contractor will assess contractor and agency performance and technical documents. U.S. General Accounting Office, Customs Service Modernization: Results of Review of First Automated Commercial Environment Expenditure Plan, GAO-01-696 (Washington, D.C.: June 5, 2001). Depending on the products and processes in question, this approach potentially does not satisfy the independence requirements of effective IV&V, because the reviews conducted could lack independence from program cost and schedule pressures. Without effective IV&V, DHS is unnecessarily exposing itself to the risk that US-VISIT increments will not perform as intended or be delivered on time and within budget. Observation 6: Program-level change control board has not been established. The purpose of configuration management is to establish and maintain the integrity of work products (e.g., hardware, software, and documentation). According to relevant guidance,1 system configuration management includes four management tasks: (1) identification of hardware and software parts (items/components/subcomponents) to be formally managed, (2) control of changes to the parts, (3) periodic reporting on configuration status, and (4) periodic auditing of configuration status. A key ingredient to effectively controlling configuration change is the functioning of a change control board (CCB); using such a board is a structured and disciplined approach for evaluating and approving proposed configuration changes. SEI’s Capability Maturity Model Integration (CMMI) for Systems Engineering, Software Engineering, Integrated Product and Process Development, and Supplier Sourcing, Version 1.1 (Pittsburgh: March 2002). According to the US-VISIT CIO, the program does not yet have a change control board. In the absence of one, program officials told us that changes related to Increment 1 were controlled primarily through daily coordination meetings (i.e., oral discussions) among representatives from the Increment 1 component system (e.g., IDENT, ADIS, and IBIS) teams and program officials, and the CCBs already in place for the component systems. The following graphic depicts the US-VISIT program’s interim change control board approach compared to a structured and disciplined program-level change control approach. In particular, the interim approach requires individuals from each system component to interface with as many as six other stakeholders on system changes. Moreover, these interactions are via human-to-human communication. In contrast, the alternative approach reduces the number of interfaces to one for each component system and relies on electronic interactions with a single control point and an authoritative configuration data store. Without a structured and disciplined approach to change control, the US-VISIT program does not have adequate assurance that approved system changes are actually made; that approved changes are based, in part, on US-VISIT impact and value rather than solely on system component needs; and most importantly, that changes made to the component systems for non-US-VISIT purposes do not interfere with US-VISIT functionality. 
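The alternative approach described above, a single control point backed by an authoritative configuration data store, can be sketched simply: every proposed change, regardless of the component system it originates from, is recorded in one store and dispositioned by one program-level board. The sketch below is only an illustration of that routing; the component names come from the text, but the fields, statuses, and approval logic are assumptions.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ChangeRequest:
        request_id: str
        component: str              # e.g., "IDENT", "ADIS", or "IBIS"
        description: str
        status: str = "submitted"   # submitted -> approved or rejected

    class ProgramChangeControlBoard:
        """Single control point backed by an authoritative configuration data store."""

        def __init__(self) -> None:
            # One authoritative record of every proposed change, program-wide.
            self._store: Dict[str, ChangeRequest] = {}

        def submit(self, request: ChangeRequest) -> None:
            # Each component system has exactly one interface: this board.
            self._store[request.request_id] = request

        def disposition(self, request_id: str, approve: bool) -> None:
            # Decisions reflect program-wide impact, not only the needs of the
            # component system that originated the change.
            self._store[request_id].status = "approved" if approve else "rejected"

        def open_items(self) -> List[ChangeRequest]:
            return [r for r in self._store.values() if r.status == "submitted"]

    ccb = ProgramChangeControlBoard()
    ccb.submit(ChangeRequest("CR-100", "IDENT", "Adjust fingerprint capture timeout"))
    ccb.submit(ChangeRequest("CR-101", "ADIS", "Add field to entry/exit match record"))
    ccb.disposition("CR-100", approve=True)
    print([r.request_id for r in ccb.open_items()])  # ['CR-101']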
Observation 7: Expenditure plan does not disclose management reserve funding. The creation and use of a management reserve fund to earmark resources for addressing the many uncertainties that are inherent in large-scale systems acquisition programs is an established practice and a prudent management approach. The appropriations committees have historically supported an explicitly designated management reserve fund in expenditure plans submitted for such programs as the Internal Revenue Service’s Business Systems Modernization and DHS’s Automated Commercial Environment. Such explicit designation provides the agency with a flexible resource source for addressing unexpected contingencies that can inevitably arise in any area of proposed spending on the program, and it provides the Congress with sufficient understanding about management reserve funding needs and plans to exercise oversight over the amount of funding and its use. The fiscal year 2004 US-VISIT expenditure plan does not contain an explicitly designated management reserve fund. According to US-VISIT officials, including the program director, reserve funding is instead embedded within the expenditure plan’s various areas of proposed spending. However, the plan does not specifically disclose these embedded reserve amounts. We requested but have yet to receive information on the location and amounts of reserve funding embedded in the plan.1 By not creating, earmarking, and disclosing a specific management reserve fund in its fiscal year 2004 US-VISIT expenditure plan, DHS is limiting its flexibility in addressing unexpected problems that could arise in the program’s various areas of proposed spending, and it is limiting the ability of the Congress to exercise effective oversight of this funding. In agency comments on a draft of this report, US-VISIT stated that it supported establishing a management reserve and would be revising its fiscal year 2004 expenditure plan to identify a discrete management reserve amount. Observation 8: Land POE workforce and facility needs are uncertain. Effectively planning for program resource needs, such as staffing levels and facility additions or improvements, depends on a number of factors, including the assumptions being made about the scope of the program and the sufficiency of existing staffing levels and facilities. Without reliable assumptions, the resulting projections of resource needs are at best uncertain. For entry at land POEs, DHS plans for Increment 2B do not call for additional staff or facilities. The plans do not call for acquiring and deploying any additional staff to collect biometrics while processing foreign nationals through secondary inspection areas. Similarly, these plans provide for using existing facilities, augmented only by such infrastructure improvements as conduits, electrical supply, and signage. For exit at land POEs, DHS’s plans for Increment 2B also do not call for additional staff or facilities, although they do provide for installation of RF technology at yet-to-be-defined locations in the facility area to record exit information. US-VISIT Increment 2B workforce and facility plans are based on various assumptions, including (1) no additional foreign nationals will need to go to secondary inspection and (2) the average time needed to capture the biometric information will be 15 seconds, based on the Increment 1 experience at air POEs. However, these assumptions raise questions for several reasons. According to DHS program officials, including the Acting Increment 2B Program Manager, the Director of Facilities and Engineering, and the Program Director, any policy changes that could significantly increase the number of foreign nationals who would require processing through US-VISIT could impact these assumptions and thus staffing and facilities needs. 
According to the Increment 1 pilot test results, the average time needed to capture biometric information is 19 seconds. Moreover, DHS facilities officials told us that they have yet to model the impact of even the additional 15 seconds on secondary inspections. In addition, according to a report from the Data Management Improvement Act Task Force,1 existing land POE facilities do not adequately support even the current entry and exit processes. In particular, more than 100 land POEs have less than 50 percent of the required capacity (workforce and facilities) to support current inspection processes and traffic workloads. To assist in its planning, the US-VISIT program office has begun facility feasibility assessments and space utilization studies at each land POE. Until such analysis is completed, the assumptions being used to support Increment 2B workforce and facility planning will be questionable, and the projected workforce and facility resource needs will be uncertain. Data Management Improvement Act Task Force, Second Annual Report to the Congress (Washington, D.C., December 2003). 
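The modeling that the observation calls for can be approximated very simply: the added secondary-inspection workload at a land POE is roughly the number of travelers processed through US-VISIT each day multiplied by the per-traveler biometric capture time. The sketch below uses the 15-second planning assumption and the 19-second pilot result cited above; the daily traveler volume is a hypothetical placeholder, not a DHS estimate.

    def added_inspector_hours(travelers_per_day: int, seconds_per_capture: float) -> float:
        """Rough added secondary-inspection workload, in inspector-hours per day."""
        return travelers_per_day * seconds_per_capture / 3600.0

    # 15 seconds is the Increment 2B planning assumption; 19 seconds is the
    # Increment 1 pilot result cited above. The daily volume is hypothetical.
    hypothetical_volume = 2000  # travelers requiring US-VISIT processing per day
    for label, seconds in (("planning assumption", 15), ("pilot result", 19)):
        hours = added_inspector_hours(hypothetical_volume, seconds)
        print(f"{label}: about {hours:.1f} added inspector-hours per day")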
We recommend that the Secretary of Homeland Security direct the Under Secretary to have the US-VISIT program director take the following actions: Develop and approve complete system acceptance test plans before testing begins; such plans, at a minimum, should (1) specify the test environment, including test equipment, software, material, and necessary training; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. Establish processes for ensuring the independence of the IV&V contractor. Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding. Assess the full impact of Increment 2B on land POE workforce levels and facilities, including performing appropriate modeling exercises. To ensure that our recommendations addressing fundamental program management weaknesses are addressed quickly and completely, we further recommend that the Secretary direct the Under Secretary to have the program director develop a plan, including explicit tasks and milestones, for implementing all our open recommendations, including those provided in this report. We further recommend that this plan provide for periodic reporting to the Secretary and Under Secretary on progress in implementing this plan. Last, we recommend that the Secretary report this progress, including reasons for delays, in all future US-VISIT expenditure plans. In performing our work, we assessed DHS’s plans and ongoing and completed actions to establish and implement the US-VISIT program (including acquiring the US-VISIT system, expanding and modifying existing port of entry facilities, and developing and implementing policies and procedures) and compared them to existing guidance to assess risks. For DHS-provided data that we did not substantiate, we have made appropriate attribution indicating the data’s source. We conducted our work at DHS’s headquarters in Washington, D.C., and at its Atlanta Field Operations Office (Atlanta’s William B. Hartsfield International Airport) from October 2003 through February 2004 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Homeland Security’s letter dated April 27, 2004. 1. We do not agree that the US-VISIT program has a security plan. In response to our request for the US-VISIT security plan, DHS provided a draft document entitled Security and Privacy: Requirements & Guidelines Version 1.0. However, as we state in the report, this document does not include information consistent with relevant guidance for a security plan. For example, this guidance states that a system security plan should (1) provide an overview of the system security requirements, (2) include a description of the controls in place or planned for meeting the requirements, (3) delineate roles and responsibilities of all individuals who have access to the system, (4) describe the risk assessment methodology to be used, and (5) address security awareness and training. The document provided by DHS addressed two of these requirements—security requirements and training and awareness. As we state in the report, the document does not (1) describe specific controls to satisfy the security requirements, (2) describe the risk assessment methodology, and (3) identify roles and responsibilities of individuals with system access. Further, much of the document discusses guidelines for developing a security plan, rather than providing the specific content expected of a plan. 2. Although DHS has completed a Privacy Impact Assessment for Increment 1, the assessment is not consistent with the Office of Management and Budget guidance. This guidance says that a Privacy Impact Assessment should, among other things, (1) identify appropriate measures for mitigating identified risks, (2) discuss the rationale for the final design or business process choice, (3) discuss alternatives to the designed information collection and handling, and (4) address whether privacy is provided for in system development and documentation. While the Privacy Impact Assessment for US-VISIT Increment 1 discusses mitigation strategies for identified risks and briefly discusses the rationale for design choices, it does not discuss alternatives to the designed information collection and handling. Further, Increment 1 system documentation does not address privacy. 3. DHS’s comments did not include a copy of its revised fiscal year 2004 expenditure plan because, according to an agency official, OMB has not yet approved the revised plan for release, and thus we cannot substantiate its comments concerning either the amount or the disclosure of management reserve funding. Further, we are not aware of any unduly burdensome restrictions and/or approval processes for using such a reserve. We have modified our report to reflect DHS’s statement that it supports establishing a management reserve and the status of revisions to its expenditure plan. 4. We have modified the report as appropriate to reflect these comments and subsequent oral comments concerning the membership of the US-VISIT Advisory Board. 5. We do not believe that DHS's comments provide any evidence to counter our observation that the system acceptance test plan was developed largely during and after testing. In general, these comments concern the Increment 1 test strategy, test contractor and component system development team coordination, Increment 1 use cases, and pre-existing component system test cases, none of which are related to our point about the completeness of the four versions of the test plan. 
More specifically, our observation does not address whether or not an Increment 1 test strategy was developed and approved, although we would note that the version of the strategy that the program office provided to us was incomplete, was undated, and did not indicate any level of approval. Further, our observation does not address whether some unspecified level of coordination occurred between the test contractor and the component system development teams; it does not concern the development, modification, and use of Increment 1 “overarching” use cases, although we acknowledge that such use cases are important in developing test cases; and it does not address the pre-existence of component system test cases and their residence in a test case repository, although we note that when we previously asked for additional information on this repository, none was provided. Rather, our observation concerns whether a sufficiently defined US-VISIT Increment 1 system acceptance test plan was developed, approved, and available in time to be used as the basis for conducting system acceptance testing. As we state in the report, to be sufficient such a plan should, among other things, define the full complement of test cases, including inputs and outputs, and the procedures for executing these test cases. Moreover, these test cases should be traceable to system requirements. However, as we state in our report, this content was added to the Increment 1 test plan during the course of testing, and only the version of the test plan modified January 16, 2004, contained all of this content. Moreover, DHS's comments recognize that these test plan versions were developed during the course of test execution and that the test schedule did not permit sufficient time for all stakeholders to review the versions. 6. We do not disagree with DHS’s comments describing the roles and responsibilities of its program office support contractor and its Federally Funded Research and Development Center (FFRDC) contractor. However, DHS’s description of the FFRDC contractor’s roles and responsibilities does not cover all of the taskings envisioned for this contractor. Specifically, DHS’s comments state that the FFRDC contractor is to execute such program and project management activities as strategic planning, contractor source selection, acquisition management, risk management, and performance management. These roles and responsibilities are consistent with the FFRDC contractor’s statement of work that was provided by DHS. However, DHS’s comments omit other roles and responsibilities specified in this statement of work. In particular, the comments do not cite that this contractor is also to conduct audits and evaluations in the form of independent verification and validation activities. It is this audit and evaluation role, particularly the independence element, which is the basis for our concern and observation. As we note above and state in the report, US-VISIT program plans and the contractor’s statement of work provide for using the same contractor both to perform program and project management activities, including creation of related products, and to assess those activities and products. Under these circumstances, the contractor could not be sufficiently independent to effectively discharge the audit and evaluation tasks. 7. We do not agree with DHS’s comment that we cited the wrong operative documentation pertaining to US-VISIT independent verification and validation plans. As discussed in our comment No. 
6, the statement of work that we cite in the report relates to DHS plans to use the FFRDC contractor to both perform program and project management activities and develop related products and to audit and evaluate those activities and products. The testing contractor and testing activities discussed in DHS comments are separate and distinct from our observation about DHS plans for using the FFRDC contractor. Accordingly, our report does not make any observation regarding the independence of the testing contractor. 8. We agree that US-VISIT lacks a change control board and support DHS’s stated commitment to establish a structured and disciplined change control process that would include such a board. In addition to the individual named above, Barbara Collier, Gary Delaney, Neil Doherty, Tamra Goldstein, David Hinchman, Thomas Keightley, John Mortin, Debra Picozzi, Karl Seifert, and Jessica Waselkow made key contributions to this report.
The Department of Homeland Security (DHS) has established a program--the United States Visitor and Immigrant Status Indicator Technology (US-VISIT)--to collect, maintain, and share information, including biometric identifiers, on selected foreign nationals who travel to the United States. By congressional mandate, DHS is to develop and submit for approval an expenditure plan for US-VISIT that satisfies certain conditions, including being reviewed by GAO. Among other things, GAO was asked to determine whether the plan satisfied these conditions, and to provide observations on the plan and DHS's program management. DHS's fiscal year 2004 US-VISIT expenditure plan and related documentation at least partially satisfies all conditions imposed by the Congress, including meeting the capital planning and investment control review requirements of the Office of Management and Budget (OMB). DHS developed a draft risk management plan and a process to implement and manage risks. However, DHS does not have a current life cycle cost estimate or a cost/benefit analysis for US-VISIT. The US-VISIT program merges four components into one integrated whole to carry out its mission. GAO also developed a number of observations about the expenditure plan and DHS's management of the program. These generally recognize accomplishments to date and address the need for rigorous and disciplined program practices. US-VISIT largely met its commitments for implementing an initial operating capability, known as Increment 1, in early January 2004, including the deployment of entry capability to 115 air and 14 sea ports of entry. However, DHS has not employed rigorous, disciplined management controls typically associated with successful programs, such as test management, and its plans for implementing other controls, such as independent verification and validation, may not prove effective. More specifically, testing of the initial phase of the implemented system was not well managed and was completed after the system became operational. In addition, multiple test plans were developed during testing, and only the final test plan, completed after testing, included all required content, such as describing tests to be performed. Such controls, while significant for the initial phases of US-VISIT, are even more critical for the later phases, as the size and complexity of the program will only increase. Finally, DHS's plans for future US-VISIT resource needs at the land ports of entry, such as staff and facilities, are based on questionable assumptions, making future resource needs uncertain.
You are an expert at summarizing long articles. Proceed to summarize the following text: Exchanging data electronically is a common method of transferring information among federal, state, and local governments; private sector organizations; and nations around the world. As computers play an ever-increasing role in our society, more information is being exchanged regularly. Federal agencies now depend on electronic data exchanges to execute programs and facilitate commerce. For example, federal agencies routinely use data exchanges to transfer funds to contractors and grantees; collect data necessary to make eligibility determinations for veterans, social security, and medicare benefits; gather data on program activities to determine if funds are being expended as intended and the expected outcomes achieved; and share weather information that is essential for air flight safety. To facilitate commerce, federal agencies regulate or provide oversight to organizations that use data exchanges extensively to process payments through the banking system; purchase or sell securities through stock exchanges and futures markets; and facilitate import and export shipments through ports of entry. We have reported on potential data exchange issues that could affect many of these activities (see the list of related products at the end of this report). An electronic data exchange is the transfer (sending or receiving) of a data set using electronic media. Electronic data exchanges can be made using various methods, including direct computer-to-computer exchanges over a dedicated network; direct exchanges over commercially available networks or the Internet; or exchanges of magnetic media such as computer tapes or disks. The information transferred in a data set often includes at least one date. Because many computer systems have been using a 2-digit year in the date format, the data exchanges have also used 2-digit years. Now that many formats are being changed to use 4 digits to correctly process dates beyond 1999, data exchanges using 2-digit year formats must also be changed to 4 digits or bridges must be used to convert incoming 2-digit years to 4-digit years or convert outgoing 4-digit years to 2 digits. These conversions generally involve the use of algorithms to distinguish the century (for example, 2-digit years less than 50 may be considered 2000 dates and 2-digit years of 50 or more may be considered 1900 dates). In addition to using bridges, filters may be needed to screen and identify incoming noncompliant data to prevent it from corrupting data in the receiving system. These conversions are not necessary if the data exchanges are designed to employ certain electronic data interchange standards (see appendix II for a glossary of data exchange standards used by some federal agencies). A data exchange standard defines the format of a specific data set for transmission. Some of these standards specify a 4-digit year format. Federal agencies often use exchanges that do not involve a standard format. Instead, the data exchanges consist of individual text files with a structure that is established by agreement between the exchange partners. Files using these formats are generally referred to as flat files. 
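The century-window conversion and the screening of incoming records described above can be illustrated with a short sketch. It uses the example cutoff from the text (2-digit years below 50 are treated as 2000s dates, 50 and above as 1900s dates) and a made-up flat-file record layout; the field positions and the reject-on-error behavior are illustrative assumptions rather than any agency's actual exchange format.

    PIVOT = 50  # example cutoff from the text: 00-49 -> 2000s, 50-99 -> 1900s

    def window_year(two_digit_year: int) -> int:
        """Bridge: expand a 2-digit year to 4 digits using a fixed pivot."""
        if not 0 <= two_digit_year <= 99:
            raise ValueError("expected a 2-digit year")
        return 2000 + two_digit_year if two_digit_year < PIVOT else 1900 + two_digit_year

    def filter_record(record: str) -> str:
        """Filter: screen an incoming flat-file record and normalize its date field.

        Assumes a hypothetical fixed-width layout with the date in positions 0-5
        as YYMMDD; a noncompliant date field is rejected rather than being allowed
        to corrupt the receiving system's data.
        """
        date_field = record[:6]
        if len(date_field) != 6 or not date_field.isdigit():
            raise ValueError(f"noncompliant date field: {date_field!r}")
        yy, mm, dd = int(date_field[:2]), int(date_field[2:4]), int(date_field[4:6])
        return f"{window_year(yy):04d}{mm:02d}{dd:02d}" + record[6:]

    # An incoming record dated 000315 (March 15) is expanded to 20000315...
    print(filter_record("000315PAYMENT  0001234"))
    print(window_year(67))  # -> 1967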
As part of their Year 2000 correction efforts, organizations must identify the date formats used in their data exchanges, develop a strategy for dealing with exchanges that do not use 4-digit year formats, and implement the strategy. These efforts generally involve the following steps. Assess information systems to identify data exchanges that are not Year 2000 compliant. Contact the exchange partner and reach agreement on the date format to be used in the exchange. Determine if data bridges and filters are needed. Determine if validation processes are needed for incoming data. Set dates for testing and implementing new exchange formats. Develop and test bridges and filters to handle nonconforming data. Develop contingency plans and procedures for data exchanges and incorporate them into overall agency contingency plans. Implement the validation process for incoming data. Test and implement new exchange formats. The testing and implementation of new data exchanges must be closely coordinated with exchange partners to be completed effectively. In addition to an agency testing its data exchange software, effective testing involves end-to-end testing—initiation of the exchange by the sending computer, transmission through intermediate communications software and hardware, and receipt and acceptance by receiving computer(s), thus completing the exchange process. Resolving data exchange issues will require significant efforts and costs according to federal and state officials. At an October 1997 summit, federal and state information technology officials estimated that about 20 percent of Year 2000 efforts will be directed toward correcting data exchange problems. This could be significant considering the magnitude of expected Year 2000 costs. According to OMB’s February 15, 1998, Year 2000 status reports of 24 federal agencies, the federal government’s Year 2000 costs are estimated to be about $4.7 billion. Based on estimates provided by states to NASIRE, the states’ Year 2000 costs are estimated to be about $5.0 billion. If Year 2000 data exchange problems are not corrected, the adverse impact could be severe. Federal agencies exchange data with thousands of external entities, including other federal agencies, state agencies, private organizations, and foreign governments and private organizations. If data exchanges do not function properly, data will not be exchanged between systems or invalid data could cause receiving computer systems to malfunction or produce inaccurate computations. For example, such failures could result in the Social Security Administration not being able to determine the eligibility of applicants or compute and pay benefits because it relies on data exchanges for eligibility information and payment processing. This could have a widespread impact on the public since the agency processes payments to more than 50 million beneficiaries each month, which in fiscal year 1997 totaled about $400 billion; National Highway Traffic Safety Administration not being able to provide states with information needed for driver registrations, which could result in licenses being issued to drivers with revoked or suspended licenses in other states; Department of Veterans Affairs not being able to determine correct benefits and make payments to eligible veterans; U.S. Coast Guard not receiving weather information necessary to plan search and rescue operations; and Nuclear Regulatory Commission not receiving information from nuclear reactors that is needed to trigger emergency response actions. The overall responsibility for tracking and overseeing actions by federal agencies to address Year 2000 issues rests with OMB and the President’s Council on Year 2000 Conversion that was established in February 1998. 
OMB has been tracking major federal agencies’ Year 2000 activities by requiring them to submit quarterly status reports. Efforts to address data exchange issues are in early stages. Federal and state coordinating organizations reached initial agreements in 1997 on the steps to address data exchanges issues; however, many federal agencies and states have not yet finished assessing their data exchanges to determine if they are Year 2000 compliant. Further, little progress has been made in completing key steps such as reaching agreements with partners on exchange formats, developing and testing bridges and filters, and developing contingency plans. Federal and state coordinating organizations began to address Year 2000 data exchange problems in 1997. Initial agreements on steps to address data exchange issues were reached at a state/federal summit in October 1997 that was hosted by the State of Pennsylvania and sponsored by the federal Chief Information Officer Council (CIO Council) and NASIRE. At the summit, federal agency and state representatives agreed to establish a contiguous 4-digit year date as a default standard for exchanges. They also agreed that federal agencies will take the lead in providing information on exchanges with states, any planned date format changes, and timeframes for any changes. In addition, joint federal and state policy and working groups were established to continue the dialogue on exchange issues. To implement these agreements, OMB issued instructions in January 1998 for federal agencies to inventory all data exchanges with outside parties by February 1, 1998, and coordinate plans for transitioning to Year 2000 compliant data exchanges with exchange partners by March 1, 1998. OMB also set March 1999 as the target date to complete the data exchange corrections. In addition, for the February 15, 1998, quarterly reports, OMB required the federal agencies to describe the status of their efforts to inventory all data exchanges with outside entities and the method for assuring that those organizations will be or have been contacted, particularly state governments. However, OMB did not require the agencies to report their status in completing key steps for data exchanges, such as those listed earlier in this report. According to its Year 2000 Coordinator, NASIRE plans to continue implementing the agreements reached at the October 1997 summit through active participation in joint policy and working groups and by holding additional state/federal meetings on data exchange issues. These activities will supplement NASIRE’s continuing efforts to provide states with access to information on vendors, software, and methodologies for resolving Year 2000 problems. The federal CIO Council’s State Interagency Subgroup also plans to continue pursuing the agreements reached at the October 1997 summit through joint state and federal meetings on data exchange issues and by hosting a state/federal meeting in April 1998. The federal CIO Council also designated an official in the State Department to act as the focal point for international exchange issues. The designee plans to work through federal agencies that have international operations to increase our foreign data exchange partners’ awareness of Year 2000 issues. For example, we were told that the State Department will add Year 2000 issues to bilateral and multilateral discussion agendas, such as the Summit of the Americas and the Asian-Pacific Economic Cooperation meetings. 
Twenty of the 42 federal agencies we surveyed reported having finished inventorying and assessing data exchanges for mission-critical systems as of the first quarter of 1998. Eighteen agencies have not completed their assessments, and the status of one federal agency is not discernible because it was not able to provide information on its total number of exchanges and the number assessed. The remaining three federal agencies said they do not have external data exchanges.

Federal agencies reported that they have a total of almost 500,000 data exchanges with other federal agencies, states, local governments, and the private sector for their mission-critical systems. Almost 90 percent of the exchanges were reported by the Federal Reserve and the Department of Housing and Urban Development (HUD), which reported having 316,862 and 133,567, respectively. The Federal Reserve exchanges data with federal agencies and the private sector using software it provides to these entities. The Federal Reserve reported that it has assessed all of these exchanges. Similarly, HUD has exchanges with housing authorities, state agencies, and private sector organizations. HUD has determined that 92 percent of these exchanges are not Year 2000 compliant. The other agencies reported their mission-critical systems have about 49,000 data exchanges with other federal agencies, states, local governments, and the private sector, as shown in figure 1. These agencies reported that they have assessed about 39,000, or about 80 percent, of the exchanges. (See appendix III for the status of assessments and other actions for each of the federal agencies.)

Significant federal actions will be needed to address Year 2000 problems with data exchanges. Of the 39,000 exchanges that federal agencies said they assessed, they reported about 27 percent as not being Year 2000 compliant. Only six federal agencies told us that all their data exchanges are Year 2000 compliant, and these represent only 123 of the approximately 39,000 data exchanges that have been assessed. As discussed previously, dealing with data exchanges involves a number of steps. For each noncompliant exchange, the agency must reach agreement with the exchange partners on whether they will (1) change the date format to make it compliant or (2) agree to retain the existing 2-digit format and use bridges as an interim measure. To resolve Year 2000 data exchange problems, all federal agencies have chosen to adopt a contiguous 4-digit year format; however, some agencies plan to continue using a 2-digit year format for some of their exchanges in the near term. If a 2-digit exchange format is retained but the agency's system will be using 4-digit years, the agency must develop, test, and implement (1) bridges to convert dates to a usable form and (2) filters to recognize 2-digit years and prevent them from entering agency systems. In addition, the agencies should identify the exchanges where there is a probability that, even though agreements have been reached to exchange 4-digit years, one partner may not be compliant. In these cases, agencies must develop contingency plans to ensure that mission-critical operations continue. The status of activities to contact and reach agreement on Year 2000 readiness with exchange partners varies significantly among federal agencies. Only one federal agency reported having reached agreements with all its exchange partners.
While on average the other federal agencies reported having reached agreements on about 24 percent of their exchanges, almost half of the federal agencies reported that they have reached agreements on 10 percent or less of their exchanges, as shown in figure 2 below. Few federal agencies reported having taken actions to install bridges or filters. Seventeen federal agencies responding to our survey have identified the need to install 988 bridges or filters. In total, the agencies reported having developed and tested 203, or 21 percent, of the needed bridges or filters. In addition, only 38 percent of the federal agencies reported having developed contingency plans for data exchanges. The need for bridges, filters, and contingency plans may increase as agencies continue assessing data exchanges and contacting and reaching agreements with exchange partners.

Only two states reported to us that they have finished inventorying and assessing data exchanges for mission-critical systems. The status of 15 of the 39 states that responded to our survey is not discernible because they were not able to provide us with information on their total number of exchanges and the number assessed. In addition, all but two states were able to provide only partial responses or estimates on the status of exchanges. For the 24 states that provided actual or estimated data on the status of their exchanges, an average of 47 percent of the exchanges had not been assessed. Similar to the federal agencies, states reported that the largest number of exchanges were with the private sector, as shown in figure 3 below. (See appendix IV for the status of assessments and other actions for each state.)

Significant state actions will be needed to address Year 2000 data exchange issues. Of the 12,262 total exchanges that states reported as having assessed, 5,066 exchanges (41 percent) are reported as not being Year 2000 compliant. None of the states reported that all their data exchanges are Year 2000 compliant. For each of the noncompliant exchanges, the states must take the same types of actions, as described earlier for federal agencies: reach agreements with the exchange partners; develop, test, and implement bridges and filters; and develop data exchange contingency plans. Similar to federal agencies, states reported having made limited progress in reaching agreement with exchange partners on addressing changes needed for Year 2000 readiness, installing bridges and filters, and developing contingency plans. However, we can draw only limited conclusions on the status of the states' actions because data were provided on only a small portion of states' data exchanges. Officials from several states told us that they were unable to provide actual, statewide data on their exchanges because the states do not collect and maintain such information centrally and the state agencies did not provide the data requested in our survey. According to NASIRE's Year 2000 committee chairman, individual state agencies are aware of data exchange issues and have started taking action to address them, but few state chief information officers have begun monitoring these actions on a statewide basis.

In addition to working with their exchange partners to resolve Year 2000 issues, some federal agencies are providing Year 2000 guidance to the organizations that they regulate or oversee and monitoring their Year 2000 activities. Sixteen federal agencies reported that they have regulatory or oversight responsibilities.
Seven of the agencies focus on the financial services area, including banks, thrifts, and securities exchanges. The others regulate or provide oversight to organizations performing government services, such as housing authorities and grantees, and private organizations in a variety of industry sectors such as the import and export industry, the maritime industry, manufacturers of medical devices and pharmaceuticals, and the oil, gas, and mineral industries. All but 3 of the 16 agencies reported providing guidance or establishing working groups addressing Year 2000 issues for the organizations for which they have regulatory or oversight responsibility. In total, 11 of the 16 federal agencies provided guidance on Year 2000 issues, and the guidance from all but two addressed data exchange issues; 10 agencies have sponsored Year 2000 working groups; 12 agencies have monitored progress in resolving Year 2000 problems; and 5 have established inspection or validation programs. Of the 12 agencies that have been monitoring progress on the resolution of Year 2000 problems, 10 reported that they have data on the corrective action status of the organizations they regulate or oversee. (See appendix V for Year 2000 activities undertaken by each federal regulatory or oversight agency.)

Federal agencies in the financial services area reported having initiated efforts domestically and internationally to address Year 2000 problems with international data exchanges, but other federal agencies reported that they are still in the initial stages of addressing these issues. Ten federal agencies reported having 702 data exchanges with foreign governments or the foreign private sector. These 702 foreign data exchanges reported by federal agencies represent less than 1 percent of all federal data exchanges. The federal agencies reported reaching agreement on formats for 98, or 14 percent, of the foreign exchanges. Three federal agencies (the Departments of the Interior, Treasury, and Defense) have the bulk of the reported foreign data exchanges. For its 416 reported foreign exchanges, Interior plans to notify its foreign data exchange partners that it will continue to use a 2-digit year in data exchanges and use bridges with algorithms to compute the century. Treasury has reached agreement on year formats for 71 of its 107 reported foreign exchanges and advised us that it is using bank examiners to monitor the activities to make all the exchanges Year 2000 compliant. The Department of Defense reported reaching agreement on 18 of its 103 data exchanges with foreign entities. The remaining seven federal agencies reported having reached agreement on 9 of their 76 foreign data exchanges. Interior was the only agency that reported having developed and tested bridges and filters to convert dates and prevent the corruption of its systems. None of the agencies reported having developed contingency plans to process transactions if the exchange partners' systems were not Year 2000 compliant. Nine federal agencies (six in the financial services area) said they have regulatory or oversight responsibility for organizations with international data exchanges. Three agencies in the financial services area said they are relying on bank examiners to monitor progress and one is providing guidance to exchange partners for addressing Year 2000 problems.
Four of the nine agencies stated that they are also addressing Year 2000 problems by working with international organizations, such as the Bank for International Settlements, the International Organization of Securities Commissions, and the Securities Industry Association. Two of the nine agencies reported having no ongoing international Year 2000 activities. International organizations identified by federal agencies as forums for Year 2000 activities were primarily in the financial services area including the Bank for International Settlements, International Organization of Securities Commissions, Securities Industry Association, and Futures Industry Association. The Department of Transportation also identified the International Civil Aviation Organization as a potential international forum for the resolution of Year 2000 problems. In addition, from our search of the Internet for Year 2000 activities by international organizations, we identified eight other potential international forums. The activities of these organizations are highlighted in table 1 and the reported current and planned activities of each organization are summarized in appendix VI. The primary efforts cited by the international organizations are increasing awareness and providing information and guidance on resolving Year 2000 problems, including posting the information on their Internet web sites. Six organizations also reported that they are sponsoring conferences or workshops to discuss Year 2000 issues and six reported that they are monitoring or surveying the status of their members’ Year 2000 activities. Organizations in the financial services area are the most active in Year 2000 efforts. According to the Bank for International Settlements, payment and settlement systems are essential elements of financial market infrastructures through which clearing organizations, settlement agents, securities depositories, and the various direct and indirect participants in these systems are intricately connected. It is therefore imperative that the systems be adapted and certified early enough to ensure that they are Year 2000 compliant and to allow for testing among institutions. To address these issues, officials at the Bank for International Settlements told us that it is coordinating with the International Organization of Securities Commissions and the International Association of Insurance Supervisors to draw attention to Year 2000 issues. In September 1997, the Bank for International Settlements issued a technical paper for banks which sets out a strategic approach for the development, testing, and implementation of system solutions as well as defining the role that central banks and bank supervisors need to play in promoting awareness of the issue and enforcing action. Other organizations have also used the Bank for International Settlements’ technical framework to stimulate activities of their members. For example, the Securities Industry Association used the framework to develop a project plan with target dates for completing various tasks and posted the plan on its Internet web site for members to use in planning their Year 2000 activities. The Securities Industry Association also used the framework as the basis for a survey instrument for assessing the status of its members’ Year 2000 activities. The European Commission has been publishing issue papers and conducting workshops to increase awareness of Year 2000 computer problems among its member countries. 
These issue papers and workshops also addressed the implications of European countries' efforts to convert to the new Euro currency. Because this conversion is taking place at about the same time as the Year 2000 date conversion activities, the two are in competition for financial, technical, and management resources. To identify how businesses are approaching the Euro conversion and the interrelationship with activities to resolve Year 2000 problems, the European Commission sponsored a survey of more than 1,000 senior information technology managers in 10 countries. The results of this survey, as well as the issue papers and workshop results, are posted on the European Commission's web site (www.ispo.cec.be/y2keuro).

In addition to assisting their members, several of the international organizations reported having programs to ensure that their own systems will be able to process international data exchanges for their members in the Year 2000. For example, the Bank for International Settlements, the International Air Transport Association, and Interpol told us that they have information systems that process transactions and information exchanges for their member organizations. Each of these organizations said that their Year 2000 programs are on schedule and that they will be able to support international data exchanges with Year 2000 dates.

Unless federal agencies take action to reach date format agreements with their data exchange partners and deal with data exchanges that will not be Year 2000 compliant, some of the agencies' mission-critical systems may not be able to function properly. The data reported to us by federal agencies and state governments suggest that the full extent of the managerial and operational challenges posed by the heavy reliance on others for data needed to sustain government activity is not yet known. For the vast majority of data exchanges, including those with international entities, federal agencies have not reached agreement with their exchange partners and, therefore, do not know if the partners will be able to effectively exchange data in the Year 2000. Without knowing the status of activities or reaching agreements with exchange partners, federal agencies cannot identify all the exchanges requiring (1) filters to prevent incoming invalid data from corrupting mission-critical systems or (2) provisions in the agencies' business continuity and contingency plans to ensure the continuation of mission-critical operations. In addition, without extensive coordination with exchange partners, federal agencies will not be able to develop and test new data exchange formats, bridges, and filters to ensure that they will function properly.

Because federal agencies and states are still in the early stages of resolving Year 2000 problems for data exchanges and the status of exchange partner activities is generally unknown, federal agencies need to take the lead in setting target dates for critical activities to prevent disruptions to their operations. These include setting target dates for testing and implementing new exchange formats and decision points for initiating the development and implementation of contingency plans.

International forums for Year 2000 issues are available for a few economic sectors and primarily in North America and Western Europe. Only recently have any federal activities been directed at international issues, and these have been limited to increasing awareness.
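The conclusion above argues that agencies should set explicit decision points for initiating contingency plans when exchange partners fall behind. A minimal sketch of such a decision rule follows; the partner names, milestone dates, and record fields are hypothetical and serve only to illustrate the idea.

```python
# Hypothetical decision-point check: if an exchange partner has not confirmed
# a tested, compliant exchange format by the trigger date, contingency
# planning for that exchange should begin. Dates and names are illustrative.
from datetime import date

EXCHANGES = [
    {"partner": "State A", "tested": True,  "trigger": date(1998, 12, 1)},
    {"partner": "State B", "tested": False, "trigger": date(1998, 12, 1)},
    {"partner": "Vendor C", "tested": False, "trigger": date(1999, 3, 1)},
]


def needs_contingency(exchange: dict, today: date) -> bool:
    """An untested exchange past its trigger date needs a contingency plan."""
    return not exchange["tested"] and today >= exchange["trigger"]


if __name__ == "__main__":
    today = date(1999, 1, 15)
    for ex in EXCHANGES:
        if needs_contingency(ex, today):
            print(f"Start contingency procedures for exchange with {ex['partner']}")
    # Start contingency procedures for exchange with State B
```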
We recommend that the Director, OMB, in consultation with the Chair of the President's Council on Year 2000 Conversion, issue the necessary guidance to require federal agencies to take the following actions.

- Establish schedules for testing and implementing new exchange formats prior to the March 1999 deadline for completing all data exchange corrections; such schedules may include national test days that could be used for end-to-end testing of critical business processes and associated data exchanges affecting federal, state, and/or local governments.
- Notify exchange partners of the implications to the agency and the exchange partners if they do not make date conversion corrections in time to meet the federal schedule for implementing and testing Year 2000 compliant data exchange processes.
- Give priority to installing the filters necessary to prevent the corruption of mission-critical systems from data exchanges with noncompliant systems.
- Develop and implement, as part of their overall business continuity and contingency planning efforts, specific provisions for the data exchanges that may fail, including the approaches to be used to mitigate operational problems if their partners do not make date conversion corrections when needed.
- Report, as part of their regular Year 2000 status reports, their status in completing key steps for data exchanges, such as the percent of exchanges that have been inventoried, the percent that have been assessed, the percent that have agreements with exchange partners, the percent that have been scheduled for testing and implementation, and the percent that have completed testing and implementation.

We also recommend that the Director, OMB, ensure that the federal CIO Council (1) identify the areas in which adequate forums on Year 2000 issues are not available for our international trade partners and (2) develop an approach to promote Year 2000 compliance activities by these trading partners.

We provided a draft of this report to NASIRE, the President's Council on Year 2000 Conversion, and OMB for comment. NASIRE stated that its Year 2000 Committee had reviewed the draft and had no suggested changes. The NASIRE President also commented that the information and recommendations seemed reasonable and should assist federal agencies and states in their Year 2000 efforts. The President's Council on Year 2000 Conversion did not provide comments on the report. OMB provided comments that are reproduced in appendix VIII and summarized and evaluated below.

OMB provided updated information on the initial steps taken by federal agencies to address data exchange issues, described actions taken to partially implement three of our recommendations, cited plans to implement one recommendation, and gave reasons for disagreeing with the remaining two recommendations. OMB commented that our survey results would have been markedly different if the data had been collected 1 month later. OMB stated that, after our survey, 24 of the largest federal agencies reported that they had completed their assessments of data exchanges, and that virtually all of these agencies had now reached agreements with their exchange partners on exchange formats. We agree with OMB that these steps would represent a good start; however, many essential actions are yet to be completed.
Our recommendations focus on the actions needed to ensure that federal agencies appropriately build on these fundamental steps to comprehensively address data exchange issues. In commenting on our recommendation concerning the establishment of schedules for testing and implementation of new exchange formats, OMB listed the actions that the CIO Council had taken in cooperation with NASIRE to (1) establish lists of exchanges and a contact point for each exchange and (2) develop a reporting format for federal agencies to report monthly on the status of each data exchange with states starting in July 1998. OMB stated that this information will be posted on an Internet web site and be available for federal and state officials to review and determine whether testing is being conducted successfully. While these are positive steps toward implementation of our recommendation, they do not address the need to establish schedules for testing and implementing new exchange formats. Schedules with target dates for testing and implementation of new exchanges are needed for coordinating efforts and measuring progress toward specific milestones. In addition, the actions described by OMB apply only to states and thus do not address exchanges with other federal agencies, local governments, and the private sector that constitute over 80 percent of the total reported exchanges. As to our recommendation concerning the development and implementation of contingency plans for data exchanges that may fail, OMB stated that on April 28, 1998, it directed federal agencies to ensure that their continuity of business plans address all risks to information flows, including those with external organizations. OMB plans to evaluate this guidance and amplify it as necessary based on its review of agencies’ May 15, 1998, Year 2000 status reports. OMB has taken an important step by issuing this directive. However, the May progress reports showed that federal agencies are making slow progress in their Year 2000 activities and this reinforces the need for OMB to provide clear directions on this critical issue. Because of the risk that exchange partners may not be able to make their systems and exchanges Year 2000 compliant and the importance of developing effective contingency plans, OMB should provide explicit directions to ensure that agencies devote sufficient management attention and resources to this critical activity. Such directions should clearly require agencies to perform the key tasks associated with initiating the project, preparing business impact analysis, developing contingency plans, and testing the plans. Regarding our recommendation that OMB require agencies to report their status in completing key steps for data exchanges as part of the regular Year 2000 status reports, OMB stated that the posting of data exchange status information on a web site, as discussed above, will be used rather than imposing an additional reporting requirement on agencies. OMB explained that it and NASIRE have agreed to this approach because it (1) provides sufficient information at a policy level to ensure that the work is getting done, (2) promotes the greatest exchange of information at the working level, and (3) minimizes duplication of reporting. 
As we previously stated, establishing this status reporting process is a positive step; however, the web site will contain information on thousands of data exchanges with states, and that information must be summarized and analyzed to be useful in managing and monitoring the time-critical activities to resolve data exchange issues. Also, as previously noted, this reporting requirement only covers the status of exchanges with states and thus excludes the other data exchanges that constitute over 80 percent of the total exchanges.

OMB agreed with our recommendation that agencies should give priority to installing the filters necessary to prevent the corruption of mission-critical systems and said that it plans to update its guidance to agencies to make sure they recognize this priority as well.

OMB did not agree that agencies need to notify their exchange partners of the implications to the agency and the exchange partners if they do not make date conversions in time to meet the schedule for testing and implementing Year 2000 compliant data exchange processes. OMB stated that exchange partners are well aware of the implications of failing to make date conversions. Although exchange partners are aware of the general implications of data exchange failures, the partners will not know the implications if they do not meet testing and implementation schedules for specific exchanges, unless the agencies notify their exchange partners. Knowledge of these implications is important because the exchange partners have many competing demands for Year 2000 resources and may have to decide which activities will be completed on time and which will be deferred. Therefore, exchange partners need to know the implications of data exchange failures, including the actions that will be needed under contingency plans if the partners do not meet key milestones for testing and implementing data exchanges.

OMB also disagreed with our recommendation that the federal CIO Council (1) identify the areas in which adequate forums on Year 2000 issues are not available for our international trade partners and (2) develop an approach to promote Year 2000 compliance activities by these trading partners. OMB said that the Chair of the President's Council on Year 2000 Conversion agreed that international implications of the Year 2000 problems are of the gravest concern, but disagreed that the CIO Council would be the right place to begin addressing these problems. According to OMB, the Chair has met with representatives from two international organizations to encourage them to be more involved in Year 2000 activities and with the Secretary of State, who agreed to have ambassadors conduct outreach efforts in each country. OMB also said that the Chair has asked agency heads to encourage international organizations to cooperate in addressing Year 2000 problems. The steps taken by the Chair to promote international actions on Year 2000 problems represent progress, but a much more organized, concerted, and continuous effort is needed to adequately address this far-reaching and complex issue, one that the Chair has acknowledged as being of gravest concern. Because the CIO Council includes representatives of agencies that regulate or influence private sector organizations that operate internationally in every economic sector, it could, and should, play an important role in providing the President's Council with the support needed to deal effectively with Year 2000 issues worldwide.
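The reporting recommendation discussed above asks for only a handful of percentages, and the exchange-level data OMB plans to post could be rolled up into them with little effort. The sketch below shows one way such a summary might be computed; the counts and field names are invented for illustration and do not describe any agency's actual inventory.

```python
# Illustrative roll-up of an exchange inventory into the status percentages
# recommended in this report. The counts below are invented example data.
inventory = {
    "inventoried": 1200,
    "assessed": 950,
    "agreements_reached": 400,
    "scheduled_for_testing": 250,
    "testing_completed": 90,
}
total = inventory["inventoried"]

for step, count in inventory.items():
    print(f"{step:>22}: {count:5d}  ({100 * count / total:5.1f}% of inventoried exchanges)")
```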
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Chairman of the Committee on Science; the Ranking Minority Member of the Committee on Science; the Chairman of the Subcommittee on Technology; other interested congressional committees; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. I can be reached at (202) 512-6408 or by e-mail at willemssenj.aimd@gao.gov, if you or your staff have any questions. Major contributors to this report are listed in appendix IX. As requested by the Ranking Minority Member of the Subcommittee on Technology, House Committee on Science, our overall objectives for the review were to identify (1) the key actions taken to date to address electronic data exchanges among federal, state, and local governments, (2) actions the federal government has taken to minimize the adverse economic impact of noncompliant Year 2000 data from other countries’ information systems corrupting critical functions of our nation, and (3) international forums where the worldwide economic implications of this issue have been or could be addressed. To identify the key actions taken to date to address electronic data exchanges among federal, state, and local governments, we contacted federal and state organizations responsible for coordinating Year 2000 activities to identify their approaches for addressing data exchange issues. We obtained information on the status of actions of federal agencies and states using a data collection instrument (DCI). The DCI contains questions based on our Year 2000 Computing Crisis: An Assessment Guide (a copy of the DCI is reproduced in appendix VII). The DCI was pretested by having it reviewed for clarity and reasonableness by three agencies’ representatives who are knowledgeable about data exchanges. We revised the DCI based on their comments and further tested it by sending it to six federal agencies and three states. Five of the six federal agencies responded with a completed DCI in November and December 1997 and the other agency did not respond until February 1998. The three states provided oral comments, but did not respond with a completed DCI. Based on the five agencies’ responses and our subsequent follow-up questions concerning inconsistent or incomplete data, we revised the DCI by adding additional definitions and cross references. The DCI was sent to an additional 36 federal departments and major agencies (referred to collectively as federal agencies) and the remaining 47 states, the District of Columbia, and Puerto Rico. All 36 federal agencies and 39 of the 52 state-level organizations responded to our survey between January and March 1998. Three of the federal agencies reported that they did not have external data exchanges. In cases involving incomplete responses or inconsistent data on responses, we contacted the respondents to request additional data or clarification, as appropriate. Responses to follow-up questions were received in February, March, and April 1998. The DCI was also used to identify the federal government’s actions taken to minimize the adverse economic impact of noncompliant Year 2000 data from other countries’ information systems corrupting critical functions of our nation. 
In this regard, we collected information from federal and state organizations that have, or oversee entities that have, international data exchanges using the DCI. To identify international forums where the worldwide economic implications of this issue have been or could be addressed, we collected information from federal agencies using the DCI and researched international organization and Year 2000 Internet sites. We contacted the organizations identified as potential forums for international Year 2000 data exchange issues from October 1997 through March 1998 and ascertained their current and planned Year 2000 activities. Five of the international organizations that we contacted did not have Year 2000 activities or did not respond to our request for information. These organizations were the International Monetary Fund, Organization for Economic Cooperation and Development, European Monetary Institute, Asia-Pacific Economic Cooperation, and Association of Southeast Asian Nations. We did not independently verify the data provided in the DCI. We performed our work between September 1997 and April 1998 in accordance with generally accepted government auditing standards.

The following definitions describe data exchange standards and systems referred to in this report and its appendixes.

- American National Standards Institute Accredited Standards Committee X12: An ANSI committee that formulates electronic data interchange standards governing transaction sets, segments, data elements, code sets, and interchange control structure. Standards define the format for specific electronic data interchange messages. In June 1997, the committee approved the use of an 8-digit date in X12 that includes the first 2 digits of the year.
- The Clearing House Interbank Payments System (CHIPS): a computerized network for the transfer of international dollar payments. CHIPS links 115 depository institutions which have offices in New York City.
- ANSI ASC X12 standards for the formatting and transmission of Medicare electronic transmissions involving enrollments, claims, reimbursements, and other payments.
- Fedwire: the Federal Reserve's electronic funds and securities transfer service. Fedwire is used by Federal Reserve Banks and branches, the Department of the Treasury, other government agencies, and depository institutions.
- Federal Information Processing Standards Publication 4-1 (FIPS 4-1), Representation for Calendar Date and Ordinal Date for Information Interchange: FIPS 4-1 strongly encourages agencies to use a 4-digit year format for data exchanges.
- HL7: a standard for electronic data exchange in certain health care applications involving patient, clinical, epidemiological, and regulatory data. HL7 standards are not used in healthcare insurance administration applications.
- A United Nations-supported international electronic data exchange standard for administration, commerce, and transport.
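The 4-digit year formats described above (the 8-digit date approved for X12 and the format strongly encouraged by FIPS 4-1) reduce to a simple structural test that an exchange date field either passes or fails. The check below is a sketch of that test, not an implementation of either standard's full syntax; the function name and sample values are illustrative only.

```python
import re

# Sketch of a compliance check for exchange date fields: a CCYYMMDD value
# (contiguous 4-digit year, as encouraged by FIPS 4-1 and adopted for X12
# in 1997) passes; a 2-digit-year value is flagged for bridging or correction.
CCYYMMDD = re.compile(r"^(19|20)\d{2}(0[1-9]|1[0-2])(0[1-9]|[12]\d|3[01])$")


def is_year2000_compliant(date_field: str) -> bool:
    """True if the date carries a contiguous 4-digit year in CCYYMMDD form."""
    return bool(CCYYMMDD.match(date_field))


if __name__ == "__main__":
    for value in ("19990401", "20000229", "000401"):
        print(value, "compliant" if is_year2000_compliant(value) else "noncompliant")
    # 19990401 compliant / 20000229 compliant / 000401 noncompliant
```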
Department/agency (response date):

- Agency for International Development (1/22/98)
- Commodity Futures Trading Commission (1/23/98)
- Department of Agriculture (3/12/98)
- Department of Commerce (4/2/98)
- Department of Defense (4/7/98)
- Department of Education (4/1/98)
- Department of Energy (1/29/98)
- Department of Health and Human Services (3/26/98)
- Department of Housing and Urban Development (4/8/98)
- Department of the Interior (3/18/98)
- Department of Justice (4/9/98)
- Department of Labor (1/26/98)
- Department of State (3/4/98)
- Department of the Treasury (3/25/98)
- Department of Transportation (3/5/98)
- Department of Veterans Affairs (1/21/98)
- Environmental Protection Agency (2/11/98)
- Federal Communications Commission (4/7/98)
- Federal Deposit Insurance Corporation (3/13/98)
- Federal Emergency Management Agency (1/23/98)
- Federal Maritime Commission (3/31/98)
- Federal Reserve (3/26/98)
- Federal Trade Commission (1/23/98)
- General Services Administration (3/27/98)
- National Aeronautics and Space Administration (1/23/98)
- National Archives and Records Administration (3/6/98)
- National Credit Union Administration (1/23/98)
- National Science Foundation (1/26/98)
- National Transportation Safety Board (3/9/98)
- Nuclear Regulatory Commission (3/31/98)
- Office of Personnel Management (1/27/98)
- Overseas Private Investment Corporation (1/22/98)
- Pension Benefit Guaranty Corporation (1/27/98)
- Railroad Retirement Board (3/4/98)
- Securities and Exchange Commission (3/18/98)
- Small Business Administration (1/30/98)
- Social Security Administration (3/9/98)
- U.S. International Trade Commission (2/2/98)
- U.S. Postal Service (1/26/98)

The response date is the date that the agency supplied the most recent information, including new data supplied as the result of follow-up questions.

Information on the Year 2000 activities of international organizations was obtained by interviews with their officials and research of information posted on Internet web sites. There may be other organizations addressing international Year 2000 issues that we did not identify.

The Bank for International Settlements (BIS) has undertaken a worldwide campaign to increase awareness, provide guidance, and identify the status of Year 2000 efforts by central banks and major international banking organizations. BIS hosts the Basle Committee on Banking Supervision and the Committee on Payment and Settlement Systems that are sponsored by the Group of Ten Governors. According to BIS, payment and settlement systems are an essential element of financial market infrastructures through which clearing organizations, settlement agents, securities depositories, and the various direct and indirect participants in these systems are intricately connected. It is therefore imperative that such systems be adapted and certified early enough to ensure that they are Year 2000 compliant and, very importantly, to allow inter-institution testing. This information is available on the BIS web site (www.bis.org). To increase awareness, in September 1997, the Basle Committee on Banking Supervision issued a technical paper for banks that sets out a strategic approach for the development, testing, and implementation of system solutions as well as defining the role that central banks and bank supervisors need to play in promoting awareness of the issue and enforcing action.
The Committee on Payment and Settlement Systems is collecting and publishing information on the state of preparedness of payment and settlement systems around the world with respect to the Year 2000 issue. For this purpose, a special reporting framework has been developed that operators of payment and settlement systems can use to indicate the state of internal testing as well as testing with external participants for key components of their information technology infrastructure. The framework distinguishes between the key components of such infrastructures—the central system, the networks and network interfaces, the participants’ front-end systems, and other main components. For each of these components, information is provided on the start and completion dates for internal testing as well as testing with external participants. An indication is also given as to the connections of the respective payment or settlement systems with other external systems, on the coordinated effort with other payment systems and/or major participants, and where more information can be obtained from the respective operator. The Basle Committee also plans to survey the efforts that banking supervisors have underway in each country as well as the state of readiness of the local banking system. They expect to complete these surveys during the first half of 1998. In April 1998, the Basle Committee, the Committee on Payment and Settlement Systems, the International Organization of Securities Commissions, and the International Association of Insurance Supervisors held a round table on the Year 2000 in order to provide a global platform for the sharing of relevant strategies and experiences across key industries by international bodies representing both the public and the private sector. As the principal international organization of securities regulators, the International Organization of Securities Commissions (IOSCO) has taken a leadership role in promoting awareness of the Year 2000 computer problem and in encouraging its membership and all market participants to take swift and aggressive action to address Year 2000 issues. IOSCO is the largest international organization of securities regulators with 99 members—principally domestic government agencies entrusted with securities regulation. Among other things, IOSCO has called for regular monitoring of Year 2000 readiness and global, industrywide testing to take place in sufficient time to address any weaknesses or deficiencies that are revealed. IOSCO currently exchanges information, periodically engages in joint work with, and to some extent coordinates its ongoing work with, the Basle Committee on Banking Supervision and the International Association of Insurance Supervisors. IOSCO has a working relationship and/or exchanges information on a regular basis with BIS, the International Accounting Standards Committee, the International Federation of Accountants, the Fédération Internationale des Bourses de Valeurs, the International Monetary Fund, and members of the World Bank Group. IOSCO also maintains a liaison relationship with the International Organization for Standards. Information on IOSCO’s current work program is regularly provided to the Group of Seven. IOSCO is surveying and obtaining information on a regular basis about measures being taken by industry and regulators to address Year 2000 computer issues. IOSCO is also encouraging global, industrywide testing. 
IOSCO's current work builds on its public statement of June 1997, exhorting all members and market participants in their jurisdictions to take all necessary and appropriate action to address the critical challenges presented by the Year 2000 issue. IOSCO's Technical Committee, which consists of regulators of the most developed and internationalized markets, is currently surveying its members to ascertain what actions are being taken within member jurisdictions to avoid Year 2000 problems. Because of the critical nature of this project, the Technical Committee decided to conduct similar surveys on industry readiness every 6 months. Each Technical Committee member was requested to supply the following information to the IOSCO Secretary General by January 15, 1998.

1. Awareness: What actions has your organization taken to impress upon relevant entities (self-regulatory organizations, industry groups, financial firms) the importance of addressing the Year 2000 issues identified in the Technical Committee Statement?
2. Guidance: What specific policies and/or procedures are being used by your organization and other relevant organizations within your jurisdiction to prepare markets and market participants for Year 2000?
3. Progress: What steps (including the use of specific interim goals) are being taken by your organization and by the other relevant organizations in your jurisdiction to monitor the progress of relevant entities in addressing Year 2000 problems?
4. Testing: What plans have been made by your organization or other relevant organizations in your jurisdiction for industrywide systems testing for Year 2000 problems?

IOSCO added a specific section on the Year 2000 issue to its Internet web site (www.iosco.org) that contains a substantive reference list on this topic.

The Securities Industry Association's (SIA) activities are primarily directed at increasing awareness; however, it is taking a leadership role in its efforts to establish a testing schedule. SIA staff have been making presentations at conferences to increase international awareness of Year 2000 problems. For example, SIA staff gave Year 2000 awareness presentations at IOSCO conferences in Kenya, Taipei, and European cities. SIA is also conducting scenario planning sessions at international conferences to stimulate planning. These sessions focus on priorities for resolving Year 2000 problems. To identify Year 2000 readiness in the securities industry, SIA is conducting an industrywide survey. The survey form is posted on its Internet web site (www.sia.com/year_2000). If sufficient response is received, SIA will post a summary of the results on its web site. SIA has also developed and posted on its web site a conversion and testing schedule for its members to use in coordinating their Year 2000 activities. In addition, SIA is developing a checklist to help chief executive officers focus on key Year 2000 activities. SIA has coordinated extensively with other international organizations, including the Investment Dealer Association, IOSCO, International Insurance Association, Futures Industry Association, Institute Internationale Finance, and Fédération Internationale des Bourses de Valeurs. SIA is considering a coordinated effort with multilateral development banks, such as the World Bank, Asian Development Bank, and the European Development Bank, to promote awareness.

The focus of the Futures Industry Association's (FIA) Year 2000 activities is information sharing and test coordination among its 200 members.
Its members include futures commissions merchants, international exchanges, and others interested in the futures market. FIA compiled a “conditions catalog” of products and transactions to be tested on an exchange-by-exchange basis in the United States. It is making this available to international members and encouraging members to adopt the same format for testing between exchanges and intermediaries. FIA has posted this information on its Internet web site (www.fiafii.org). FIA has also placed information about various exchanges on the web site and plans to include additional information about international exchanges in the future. FIA met with brokerage firms, exchanges, the London Clearing House, and key service providers in June and December 1997 to raise awareness of Year 2000 issues and discuss possible test scenarios. FIA also hosted an international meeting at its Futures & Options Expo in October 1997 to discuss various Year 2000 activities around the world. At the FIA International Futures Industry Conference in March 1998, FIA asked key members to support an industrywide test. FIA is surveying 20 of the member exchanges with the highest trade volume to identify their Year 2000 activities. At a Global Technology Forum held in London March 30-April 1, 1998, FIA will request that the 20 member exchanges provide information about the scope of their Year 2000 activities, including their current status, interfaces with intermediaries, plans for individual testing with intermediaries, and willingness to participate in an industrywide test. The International Association of Insurance Supervisors’ Year 2000 activities are primarily directed at increasing awareness of Year 2000 issues among its insurance supervisor members from over 70 countries. It is also working cooperatively with other international organizations to increase awareness. In November 1997, it issued a joint statement with the Basle Committee on Banking Supervision and the International Organization of Securities Commissions that emphasized the importance of the Year 2000 issue. The joint statement urged the development of action plans to resolve Year 2000 problems, including data exchange problems with financial institutions and clients. In December 1997, the International Civil Aviation Organization sent a letter to its members to increase their awareness of Year 2000 computer problems. The letter explained that air traffic service providers may need to perform assessments on operational air traffic control systems and nonoperational systems that provide business and commercial support. Air traffic service operational systems may be date dependent and subject to local implementation. Such systems include aeronautical fixed telecommunication networks, radar data processing, and flight data processing systems. In addition, operational systems often use date information for logging performance information. The letter also suggested a schedule for assessing, implementing solutions, and testing systems. The International Civil Aviation Organization requested that members advise it on remedial actions they have taken. The International Air Transport Association (IATA) represents and serves 259 members in the airline industry. In addition to the airlines, IATA works with airline industry suppliers, including airports, air traffic controls, aircraft/avionics manufacturers, travel agencies, global distribution systems, and information technology suppliers. 
IATA serves as a clearing house between its airline members to process their debit/credit notes. IATA has an internal Year 2000 project that includes four major steps: software/hardware inventory, Year 2000 compliance analysis, software modification, and contingency planning. IATA has set a target date of December 25, 1998, for Year 2000 compliance for all of its products and services. As an association of international airlines, IATA has established a group to coordinate and synchronize efforts within the industry to ensure timely solutions to Year 2000 issues. Specifically, the date format of interline messages (messages airlines exchange among themselves and other parties as a part of business processes) has been frozen. The member airlines’ applications will have to handle date conversion, if required. In addition, IATA has conducted Year 2000 conferences and seminars to exchange information among members. To monitor the status of Year 2000 activities, IATA has conducted surveys of airline members and industry suppliers. The survey of member airlines showed that (1) very few organizations claim to be fully compliant, (2) the majority of the organizations are well aware of the problem and have already initiated Year 2000 compliance activities, and (3) the typical target date for full compliance is the end of 1998. The results of the survey are available on IATA’s web site (www.iata.org/y2k). The European Commission has declared that it is concerned about the vulnerability of enterprises, infrastructures, and public administrations to the Year 2000 computer problem as well as the possible consequences of this problem for consumers. The Commission had extensive consultations with the public and private sectors during workshops in 1997 to identify the main priorities for action and the roles for enterprises, associations, administrations, and the Commission itself. As a result of these consultations, the Commission adopted a course of action and published it in an official communication on February 25, 1998. The purpose of the communication was to raise awareness and set out the Commission’s steps to address Year 2000 issues, including encouraging and facilitating the exchange of information and experience on Year 2000 initiatives undertaken by the Commission’s member states and European associations, with a view to identifying how synergies can be established to reduce duplication of effort and increase the overall impact; serving as a liaison with the European and international organizations that are responsible for regulating or supervising infrastructural sectors with significant cross-border effects (finance, telecommunications, energy, transportation) in order to exchange information about respective activities and identify where cooperation may be required. An area of particular concern is the planning and implementation of coordinated cross-border testing activities in those sectors that are likely to involve organizations in different member states. The Commission will initiate discussions between relevant organizations and member states; discussing the Year 2000 and its implications through all the relevant contacts available to the Commission services in industry and member states. 
In particular, attention will be paid to the impact on and preparation of infrastructural sectors, the impact on consumers and small and medium-sized enterprises, and the potential impact on the functioning of the internal market; and maintaining an Internet web site on the Year 2000 computer problem (www.ispo.cec.be/y2keuro). This site provides access to information about activities in different economic sectors and member states, points to sources of advice on specific aspects of the problem, and links to other sites as well as to all documents and reports produced by the Commission on the subject. The Commission also plans to monitor progress, exchange information, and benchmark best practices while reporting regularly on the progress towards Year 2000 readiness and its related issues. In the context of its policies such as those on industry, small and medium-sized enterprises, consumers, and training, the Commission will examine whether a further contribution could be made towards helping raise awareness and address Year 2000-related problems. In addition to its Year 2000 activities, the Commission is also addressing the information technology implications of European countries' conversion to the new Euro currency. Because this conversion is taking place at about the same time as the Year 2000 date conversion activities, the two activities are in competition for financial, technical, and management resources. To identify how businesses are approaching the Euro conversion and the interrelationship with activities to resolve Year 2000 problems, the Commission sponsored a survey of over 1,000 senior information technology managers in 10 countries. The results of this survey, as well as the issue papers and workshop results, are posted on the Commission's web site.

The World Bank is conducting an awareness campaign directed toward its client governments and implementing agencies that are responsible for World Bank-financed projects in developing countries. The Bank wants to ensure the continued success and viability of its clients and avoid problems with development projects, many of which comprise information technology systems and embedded logic components that may be vulnerable to the Year 2000 problem. In this effort, however, the Bank limits its role to raising awareness and pointing clients toward ways of evaluating and remediating the problem. To begin this effort, the Bank is (1) distributing an information packet on the Year 2000 problem, (2) pointing recipients to further sources on the Internet, and (3) providing some advice on ascertaining Year 2000 compliance in the procurement process. In the near future, the Bank plans to provide Year 2000 information on the Bank's Internet web site (www.worldbank.org). The Bank also is hiring a contractor to develop a guide for developing country governments on creating a national Year 2000 policy. When ready, this guide will be placed on the Bank's Internet web site and will be conveyed to governments via seminars to be held around the world.

In November 1997, the United Nations' Information Technology Services Division posted information on its Internet web site (www.un.org/members/yr2000) to increase awareness of the actions needed to resolve Year 2000 computer problems. This included information on the actions being taken concerning the computer systems operated by United Nations' organizations and references to issue papers and guidance documents that member countries could use in developing their own Year 2000 program.
It also circulated a letter to member countries that recommended dates for Year 2000 compliance and contained references to reading materials and companies providing Year 2000 services. At that time, the United Nations was considering a program to encourage member countries that have not already begun a Year 2000 assessment to take aggressive action in the development of strategic plans to deal with Year 2000 problems.

The Steering Committee is sponsored by the Group of Seven and its objective is to promote the international sharing of information on the resolution of Year 2000 computer problems. To achieve this objective, the Steering Committee has established an Internet web site (www.itpolicy.gsa.gov) that includes (1) links to Year 2000 web sites of various countries and (2) databases showing the Year 2000 compliance status of commercial-off-the-shelf software, telecommunications, facilities, and biomedical equipment. The Steering Committee is also planning to use the web site to conduct a virtual Year 2000 international conference.

The International Council sponsored a workshop in August 1997 with the objectives of exchanging information among members on Year 2000 issues related to each member country and identifying areas of common interest. The workshop was attended by representatives from 14 countries (a report on the workshop is located at www.ogit.gov.au/ica/icay2k). The International Council has scheduled a second workshop for June 1998.

Interpol operates an international network that its 177 member countries use to exchange law enforcement information. Member countries connect to telecommunication hubs that are located around the world and their information systems transmit data through the network. Interpol has a project underway to ensure that its network will be ready well before the Year 2000. According to project officials, Interpol has been working with suppliers to ensure that the network's hardware and software will be Year 2000 compliant. It has also sent its Year 2000 plans to each member country. A key part of these plans is the testing of the network. This testing is scheduled to be performed in October 1998 and January 1999.

Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998).
Year 2000 Computing Crisis: Potential for Widespread Disruption Calls for Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998).
Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998).
Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998).
Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998).
Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998).
Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998).
FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998).
Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998).
Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA's Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997).
Pursuant to a congressional request, GAO provided information on actions taken to address year 2000 issues for electronic data exchanges, focusing on: (1) the key actions taken to date to address electronic data exchanges among federal, state, and local governments; (2) the actions the federal government has taken to minimize the adverse economic impact that could result if non-compliant year 2000 data from other countries' information systems corrupt critical functions of the United States; and (3) the international forums where the worldwide economic implications of this issue have been or could be addressed. GAO noted that: (1) key actions to address year 2000 data exchange issues are still in the early stages; however, federal and state coordinating organizations have agreed to use a 4-digit contiguous year format and establish joint federal and state policy and working groups; (2) to implement these agreements, the Office of Management and Budget (OMB) issued instructions in January 1998 to federal agencies to inventory all data exchanges with outside parties by February 1, 1998, and coordinate with these exchange partners by March 1, 1998; (3) at the time of GAO's review, no actions had been taken to establish target dates for additional key tasks; (4) about half of the federal agencies reported during the first quarter of 1998 that they had not yet finished assessing their data exchanges to determine if they will be able to process data with dates beyond 1999; (5) two of the 39 state-level organizations reported having finished assessing their data exchanges; (6) for the exchanges already identified as not year 2000 ready, respondents reported that little progress had yet been made in completing key steps such as reaching agreements with partners on date formats, developing and testing bridges and filters, and developing contingency plans for cases in which year 2000 readiness will not be achieved; (7) most federal agency actions to address year 2000 issues with international data exchanges have been in the financial services area; (8) ten federal agencies reported having a total of 702 data exchanges with foreign governments or the foreign private sector; (9) these foreign data exchanges represented less than 1 percent of federal agencies' total reported exchanges; (10) federal agencies reported reaching agreements so far on the formats of 98 of the foreign data exchanges; (11) international organizations addressing year 2000 issues have been the most active in the financial services area; and (12) during 1997, several international organizations initiated activities to increase awareness, provide guidance, and monitor the status of year 2000 efforts.
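To make the 4-digit contiguous year format and the "bridges and filters" mentioned above more concrete, the sketch below shows one common form of bridge: a routine that widens two-digit years in records received from an exchange partner that is not yet year 2000 ready. The pivot year of 1950 and the MMDDYY field layout are illustrative assumptions, not details drawn from the report.

# Illustrative sketch of a Year 2000 "bridge" that widens two-digit years in an
# incoming exchange record to the 4-digit contiguous format agreed to by the
# federal and state coordinating organizations. The pivot year and the MMDDYY
# field layout are assumptions for illustration only.

PIVOT_YEAR = 1950  # two-digit years mapping at or above the pivot are read as 19xx, below it as 20xx


def widen_year(two_digit_year: int, pivot: int = PIVOT_YEAR) -> int:
    """Map a two-digit year to a four-digit year using a fixed windowing pivot."""
    century = 1900 if (1900 + two_digit_year) >= pivot else 2000
    return century + two_digit_year


def bridge_date_field(mmddyy: str) -> str:
    """Convert an MMDDYY date field from a noncompliant partner to MMDDYYYY."""
    month, day, yy = mmddyy[0:2], mmddyy[2:4], int(mmddyy[4:6])
    return f"{month}{day}{widen_year(yy):04d}"


if __name__ == "__main__":
    print(bridge_date_field("010100"))  # 01012000 (January 1, 2000)
    print(bridge_date_field("123199"))  # 12311999 (December 31, 1999)

A complementary filter might work in the opposite direction, narrowing four-digit dates for partners that cannot yet accept the agreed format; the report treats bridges and filters as interim steps to be developed and tested alongside contingency plans.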
You are an expert at summarizing long articles. Proceed to summarize the following text: IRS’ policy has long provided that, for taxpayers who are unwilling to pay their tax debts in a manner that is commensurate with their ability to pay, IRS revenue officers were to initiate enforced collection actions that could culminate in the seizure of their property. In fiscal year 1997, IRS revenue officers seized property from about 8,300 taxpayers who owed the federal government an estimated $1 billion in unpaid taxes. When we first reviewed IRS’ management of seized assets in 1992, we concluded that IRS’ controls over seized assets were not adequate to protect against theft, waste, and misuse nor to assure that the highest sales prices at the lowest cost were obtained. These conclusions were based on the following control weaknesses. Little accountability. We found that IRS did not (1) keep up-to-date records on property seized, (2) obtain receipts to document asset custody and storage location, (3) record physical condition of the property seized, or (4) conduct physical inventories of assets-on-hand to verify inventory records or check on the assets. Inadequate security. We found that some seized assets had been stolen or were missing, and in many cases, the value of the property was not documented in the files. We also reported that by not documenting the condition and value of seized assets, IRS left itself open to claims of damage. Sales not yielding highest price at lowest cost. We found that IRS could have attracted more buyers, and thus generated higher sales prices by holding consolidated sales of seized assets. Consolidated sales would also have allowed IRS to reduce sales costs, such as advertising. We also found that IRS did not always arrange for the lowest cost storage of assets. Little oversight. We found that IRS did not know the total amount of property in its possession because it lacked an adequate information system. Moreover, IRS management knew very little about the assets seized, including the types of assets seized, the value or condition of those assets, or where the assets were located. In conclusion, we commented that the asset management and sales functions could best be done by parties who specialize in those functions, such as other agencies or contractors, rather than as additional duties assigned to revenue officers, whose primary responsibility was to collect unpaid taxes. We also said that IRS needed far better information to oversee the management and sales of seized assets. To determine IRS’ progress in removing revenue officers from its process for selling seized assets, we interviewed IRS National and district officials concerning efforts to remove revenue officers from asset sales. We also reviewed the applicable provisions of the Restructuring Act, IRS interpretations of the act’s requirements, IRS procedures for selling seized assets, and seizure case files. To determine IRS’ progress in correcting internal control weaknesses, we discussed the 1992 findings with IRS National and district officials. We reviewed statutory and procedural requirements for conducting seizures and sales of taxpayer assets and examined collection case files to assess how those procedures were carried out. To make our case file review, we first selected a random sample of taxpayers who had property seized by IRS because of unpaid taxes. We selected the random sample from a population of about 8,300 taxpayers who had property seized by IRS in fiscal year 1997. 
About 9,700 seizures were associated with these 8,300 taxpayers. This sample yielded sufficiently complete information on 115 taxpayers with a corresponding 139 seizures to evaluate IRS’ management and control over assets seized. We followed procedures to express confidence in the precision of the results with a 95-percent confidence interval, separately computed for each estimate and reported as footnotes to the text of this report. Second, we randomly selected 16 cases with assets still in IRS’ possession from a population of 76 cases in 4 IRS district offices. Because this phase of our review involved examining the seized assets, possibly stored hundreds of miles from a district office, and reviewing the case file with the revenue officer in charge of the case, we established a maximum travel range of about 100 miles from our work locations in making our random selections. Our work was done principally in IRS district offices located in Atlanta, GA; Chicago, IL; St. Louis, MO; Oakland, CA; and the IRS National Office in Washington, D.C. We did our work between January 1998 and August 1999 in accordance with generally accepted government auditing standards. We obtained written comments from IRS on a draft of this report. We have summarized those comments in this letter and reprinted the written comments, in entirety, in appendix I. As of October 1999, IRS had not finalized its plans for removing revenue officers from any participation in selling seized assets. As a preliminary step to implement the Restructuring Act mandate, IRS collection managers asked IRS Chief Counsel for a legal interpretation of the point at which revenue officer involvement in a seized asset sale should end. Chief Counsel concluded that many activities that take place before the actual sale, such as the determination of the minimum price that IRS would accept for an asset, are “critical” to the sale of an asset and should be considered as “involved” in the sale. Accordingly, Chief Counsel concluded in its July 1999 interpretation that revenue officer involvement should essentially end with the act of seizing a taxpayer’s assets and may begin again after the sale of the assets has been completed. Chief Counsel also commented that an IRS study group would have the best perspective to structure any new IRS position related to asset sales. Using Chief Counsel’s interpretation as a starting point, IRS convened a study group of IRS staff and asset management and sales specialists from other federal agencies. The group met in October 1999 to discuss issues related to removing revenue officers from asset sales and structuring an IRS asset management and sales specialist position. As part of its discussions, the group recognized that any decisions reached would require consideration of a number of issues, including the following. Seizure workload. Since enactment of the Restructuring Act, the number of IRS asset seizures has dropped from about 10,000 per year to about 200 for 1999. As discussed in our overall report, IRS expects the number of seizures to rebound as IRS staff become more familiar with the act’s collection provisions. Considering the uncertainty regarding the workload for a specialist position, the group discussed issues related to ensuring that the number and location of specialist staff are commensurate with the workload. Allocation of duties and responsibilities. 
Although the Restructuring Act mandates that revenue officers are to be removed from any participation in sales, the group considered whether a revenue officer or other IRS employee, such as the specialist, should be present at all asset sales in order to stop a sale from being consummated, if appropriate. For example, a sale should be stopped if a taxpayer pays the tax debt or declares bankruptcy—currently the responsibility of the revenue officers involved in the seizures. The group also considered how the requirement for removing revenue officers from sales would affect supervisory responsibilities. Since many supervisory employees of the collection function are revenue officers, the group considered whether it would be permissible for those collection officials to supervise the specialists. Contracting out. The group considered the circumstances under which IRS should use private sector contractors or other government agencies to manage and sell assets. One option was for the specialists to determine, on a case-by-case basis, whether it would be better for the specialist to manage or sell the assets, assign the functions somewhere else in IRS, or contract out the functions. As of the end of October 1999, IRS’ Collection Division management was continuing to review options for structuring the specialist position. In our current review of IRS’ seized asset management and sales processes, we found little improvement from 1992 conditions in the 1997 seizures we reviewed. As in 1992, we found (1) little accountability over seized assets, (2) little or no security for some assets, (3) little assurance that IRS’ sales produced maximum proceeds, and (4) little useful management information for monitoring seized assets. The following summarizes the problems found. Our overall report on weaknesses in IRS’ seizure processes contains additional details. With respect to establishing accountability over seized assets, little had changed from our review in 1992. As detailed in our overall report on weaknesses in IRS’ seizure processes, asset control information documented by revenue officers in their seizure case files was not as comprehensive as the control information specified by federal financial management guidelines. Among other details, the guidelines explain that information should be sufficiently specific to allow the independent verification that each asset exists and that the recorded physical condition, geographic location, and asset value are accurate. We estimate, based on our review of sampled seizure cases, that revenue officers in preparing inventory documents omitted some information on the identity of assets seized in about 25 percent of seizure cases (i.e., asset descriptions used by revenue officers were not detailed enough, such as by identifying make, model, or serial number, to differentiate the items seized from other like items); quantity of assets seized in about 15 percent of seizure cases; condition of assets seized in about 74 percent of seizure cases; value of assets seized in about 12 percent of seizure cases; location of assets seized in about 10 percent of seizure cases; and custodian of assets seized in about 47 percent of seizure cases. Moreover, we estimate that revenue officers did not obtain receipts in 51 percent of the cases when the revenue officer file indicated that the seized assets were stored at contractor locations. Also, IRS did not make periodic physical inventories of assets in the possession of revenue officers or contractors. 
The omission of detailed information on assets (such as asset identity, quantity, or condition) reduces accountability. Even if IRS made physical inventories, without such information, there would be little basis for determining that all assets seized were still under IRS or third-party custody or appropriately protected against loss or deterioration. Regarding asset protection, little had changed from our review in 1992. As detailed in our overall report on weaknesses in IRS' seizure processes, we found that an estimated 12 percent of seizure cases involved assets that required safeguards, but the revenue officers' files did not indicate security arrangements were made. For example, in one case, the revenue officer file contained no documentation on where a taxpayer's $17,000 vehicle was stored or how the vehicle was safeguarded. In another case, the revenue officer seized personal property—jewelry, furniture, and clothes valued at about $10,000—but did not indicate how the assets were protected against loss or damage. Although we found only a few seizures that resulted in loss, alleged loss, or damage to property, we could not determine the magnitude of the loss or who bore responsibility for it because of limited documentation in the revenue officers' files. For example, a piece of seized artwork was damaged while a storage company was moving the assets. The revenue officer did not document the dollar amount of the damage or who was liable for the loss. In another instance, a taxpayer complained that various personal items located in seized real estate were missing. The revenue officer's file provided no further information on the amount of the alleged loss. Similar to our 1992 review, we found that IRS' sales practices provided little assurance that the maximum possible sales proceeds were achieved. As detailed in our overall report, this was attributable to two reasons. First, many assets were sold without competitive bidding, and second, IRS' minimum acceptable price for an asset was often established in an arbitrary manner. We estimate that about 51 percent of the sales attracted no more than one bidder, and only 42 percent of the cases sold for more than the IRS-established minimum price. In general, IRS did not do much to attract bidders. IRS did not hold consolidated asset sales that might attract more prospective buyers. Rather, revenue officers held separate sales for property seized from different taxpayers, mostly during weekday work hours, with minimal advertising (e.g., posting in two public places and a legal notice in a local paper). IRS seldom used professional auctioneers or commercial markets that specialize in selling pre-owned assets. In setting a minimum price, revenue officers followed a formula that provided for reducing the assets' fair market value by up to 40 percent. Our assessment of the minimum price-setting formula, the revenue officers' use of the formula, and exceptions to the formula showed that minimum prices were often arbitrarily set. First, we found little documentation supporting revenue officer estimates of the fair market value of the assets seized—the starting point for computing the minimum acceptable price for the assets. We estimate that only about 4 percent of the recorded values were based on professional appraisals and about 71 percent of seizure case files contained no documentary evidence for the amounts recorded by the revenue officers.
Moreover, as indicated by revenue officer file notations, about 35 percent of the recorded values were set on the basis of revenue officer judgment. Second, we found instances where the recorded estimates of asset fair market value were not used as the starting point in setting the minimum price. For example, a revenue officer noted in the case file that, on checking courthouse records, the value of the seized property was about $93,000. In computing the minimum acceptable price for the property, however, the revenue officer used a value of $80,000 without explanation. Without appraisals, neither IRS nor we can be certain of the value of the taxpayer property. Third, we found little justification for the maximum percentage reduction allowed in the formula used to compute the minimum price. National Office officials responsible for program guidance advised us that they were not aware of the origins of the reductions. And while the guidance suggested that these were maximum reductions that needed to be supported, revenue officers used the maximum reduction an estimated 69 percent of the time, with little detailed justification shown. Fourth, the percentage reductions used by the revenue officers did not necessarily reflect the different risks to buyers based on the type of asset. Often we found that revenue officers applied the same maximum reductions to both real property and personal property, yet the conditions associated with the sale of these assets varied substantially. For personal property, such as a car, ownership and control of the asset passed at sale. For real property, such as a taxpayer's residence, the taxpayer had 6 months to reclaim the asset after sale, and the purchaser usually did not have access to the property during the 6-month period. Fifth, IRS' policies limited the minimum price to no more than the taxpayer's tax liability plus the estimated expenses of seizure and sale. Under this policy, the minimum price could be set much lower than the formula's maximum percentage reduction would allow. In one case that we reviewed, use of the tax debt amounted to another 20 percent reduction below the formula-determined price. (A simplified numerical illustration of how the formula and this cap interact appears after the summary at the end of this report.) After 1992, IRS installed an automated system to inventory and monitor the property seized from delinquent taxpayers. However, the new system still did not provide IRS management with information useful for establishing accountability over seized assets or monitoring the management and sales of the assets as envisioned by federal financial management guidelines. Moreover, the system was not Year 2000 compliant and will no longer be used as of January 2000. The first phase of a replacement system, currently under development, will not become operational until about July 2000. In the interim, IRS will rely on an as-yet-unspecified paper-based tracking system. As we detailed in our overall report on weaknesses in IRS' seizure processes, IRS' system to track seized assets did not include all the information set out by federal financial management guidelines, and the information it did contain was not always current or accurate.
More specifically, the automated inventory system did not require the entry of the full description of assets as recorded by revenue officers in their case files; did not provide data entry fields for capturing information on asset condition; did not provide a data entry field for theft, loss, and damage expenses; did not consistently capture information on the value of the assets—in some instances valuing the assets at the amount of the taxpayer's delinquency and in others, at the value of the taxpayer's ownership interest in the assets; did not always coincide with the revenue officers' files or the actual property on hand (in comparing system records, revenue officers' files, and our physical inspection of assets involving 16 seizures in 4 IRS district offices, we found discrepancies in 15 seizures); and was not required to be updated in a timely manner. Given the above limitations, the system could produce little useful oversight information that management could use to monitor seized assets. Moreover, the system had limited information-reporting capabilities. It did not even have the capability to produce a report on the total inventory of seized assets held by IRS. IRS is in the process of developing a replacement information system, largely because the existing system was not Year 2000 compliant. Because of Year 2000 complications, IRS will cease using the existing system by January 2000 but does not plan to have a new system in place at that time. In designing the new system, for an estimated implementation in July 2000, IRS took into consideration the financial management guidelines and input from us. While IRS has not completed its system design work, IRS officials told us that the July implementation will not provide for information reporting beyond the limited capabilities of the existing system. They also said that any enhancements to these capabilities would follow in later phases of development of the system. Regardless of whether seized asset sales are done "in-house" by an IRS specialist or contracted out to a private concern, IRS must have controls that provide for accountability over seized assets, security for assets, sales practices that protect the government's and taxpayers' interests, and information to allow for management oversight. Without such controls, taxpayers who have their assets seized are at risk of having their interests suffer—for example, from asset sales that fail to maximize net proceeds. To this end, we have made a number of recommendations in our overall report; this report summarizes the information supporting those recommendations and repeats the recommendations detailed there. The recommendations are as follows.
To improve IRS’ process for controlling assets after seizure, we recommend that the Commissioner fully implement federal financial management guidelines to include ensuring that revenue officers document basic asset control information, including detailed asset identity descriptions, asset condition, and custody information; ensuring that basic control information is entered in a timely manner and included in the revised automated inventory control system; ensuring asset security and accountability through scrutiny of decisions regarding security and periodic reconciliation of inventory records to assets-on-hand (periodic physical inventories); and requiring revenue officers to record and account for all theft, loss, and damage expenses of each asset and document efforts to obtain reimbursement for the expenses in collection case files. To strengthen the sales process for assuring that the highest prices are obtained from seized asset sales, we recommend that the Commissioner develop guidelines for establishing minimum asset prices to preclude the use of arbitrary percentage reductions or the amount of the delinquency as the minimum price and take the steps necessary to promote reasonable competition among potential buyers during asset sales. To strengthen oversight of seizure activities, we recommend that the Commissioner establish a method for providing IRS senior managers with useful information to monitor the use of seizure authority, including the quality of asset management and disposal activities. In written comments on a draft of this report, IRS agreed with the report’s findings and said it was working to address them. More specifically, IRS said that it needed to strengthen its requirements for documenting the property seized and its process for marketing assets. IRS also noted that, as discussed in our overall report, certain conditions associated with the sale of seized assets (e.g., sale of assets in “where is” and “as is” condition) may depress the price at which the assets may be sold. Additionally, IRS acknowledged that, in the short term, it will not have an information system that will provide IRS management with all of the asset management information needed. But IRS said that it expects to expand the capabilities of the management information system so that, in the long term, IRS will have an automated system that will meet all of the federal financial management guidelines. For additional comments on individual recommendations, IRS referred to its response to our overall report. In those comments, IRS generally agreed with most of the recommendations but said it was impractical, at this time, to implement those associated with monitoring the quality of seizure decisionmaking and the results of seizures (see IRS Seizures: Needed for Compliance but Processes for Protecting Taxpayer Rights Have Some Weaknesses, (GAO/GGD-00-4, Nov. 29, 1999)). As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; other interested congressional committees; and other interested parties. We will also make copies available upon request. This work was done under the direction of Thomas M. Richards. Other major contributors are listed in appendix II. 
If you have any questions, you may contact me on (202) 512-9110. In addition to those named above, Wendy Ahmed, Julie Cahalan, Sharon Caporale, Kevin Daly, Sally Gilley, Leon Green, Mary Jankowski, Joseph Jozefczyk, Stuart Kaufman, Ann Lee, Mary Jo Lewnard, John Mingus, George Quinn, Julie Scheinberg, Sidney Schwartz, Samuel Scrutchins, James Slaterback, Shellee Soliday, Clarence Tull, Margarita Vallazza, and Thomas Venezia made key contributions to this report.
Pursuant to a congressional request, GAO provided information on the Internal Revenue Service's (IRS) progress in eliminating asset management control weaknesses, focusing on: (1) the implementation of the IRS Restructuring and Reform Act's mandate to remove revenue officers from the asset sale function; and (2) other internal control weaknesses identified in GAO's 1992 testimony. GAO noted that: (1) as of October 1999, IRS had not finalized its plans for removing revenue officers from its process for selling seized assets; (2) after the passage of the Restructuring Act, IRS organized a study group to consider establishing a specialist position for both managing and disposing of assets after they were seized by revenue officers; (3) the group has been meeting and is considering the scope of the new position; (4) however, the scope of the position, including the extent to which private sector contractors may be used to manage and sell seized property, a position description, or procedures for governing the specialists' actions, has not been finalized; (5) GAO's review of a representative sample of 1997 nationwide seizure cases, selected as part of GAO's overall review of weaknesses in IRS' seizure processes, showed that the fundamental internal control weaknesses GAO identified in 1992 remained; (6) more specifically, GAO's review of case files showed the following: (a) similar to 1992, sufficiently complete information to establish accountability over assets was not always recorded by revenue officers when assets were seized; (b) as in 1992, IRS' security arrangements for seized assets were, in some instances, minimal or nonexistent; (c) similar to 1992, IRS' sale practices provided little assurance that the maximum possible sales proceeds were achieved; and (d) although installed after 1992, IRS' automated seizure information system still did not provide IRS management with information useful for establishing accountability over seized assets or monitoring the management and sales of the assets; and (7) regardless of the results of IRS' decisions on contracting out all or part of the asset management and sales function, IRS will remain responsible for assuring that assets are appropriately managed and sold.
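The minimum-price rules described in the report above (a formula that reduces an asset's fair market value by as much as 40 percent, further limited to no more than the tax liability plus the estimated expenses of seizure and sale) can be illustrated with a short sketch. The function name and dollar figures below are invented for illustration and are not taken from any case GAO reviewed; they simply show how the cap can push the minimum price well below the formula-determined amount.

# Hypothetical illustration of the minimum bid price rules the report describes:
# fair market value reduced by up to 40 percent, then capped at the taxpayer's
# tax liability plus the estimated expenses of seizure and sale. The figures are
# invented for illustration only.

def minimum_bid_price(fair_market_value: float,
                      reduction_rate: float,
                      tax_liability: float,
                      seizure_sale_expenses: float) -> float:
    """Return the minimum acceptable sale price under the rules described in the report."""
    assert 0.0 <= reduction_rate <= 0.40, "guidance allowed reductions of up to 40 percent"
    formula_price = fair_market_value * (1.0 - reduction_rate)
    cap = tax_liability + seizure_sale_expenses
    return min(formula_price, cap)


if __name__ == "__main__":
    # With a $100,000 fair market value and the maximum 40 percent reduction,
    # the formula alone yields $60,000. If the taxpayer owes only $45,000 and
    # expenses are $3,000, the cap lowers the minimum price to $48,000, which is
    # a further 20 percent below the formula-determined price.
    print(minimum_bid_price(100_000, 0.40, 45_000, 3_000))  # 48000.0

The sketch also shows why the report recommends guidelines that preclude arbitrary percentage reductions and the use of the delinquency amount as the minimum price: either input, rather than the asset's market value, can end up determining the price floor.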
You are an expert at summarizing long articles. Proceed to summarize the following text: In 1995, we reported on a study of how three agencies collected and reported evaluative information about their programs to this Committee. We found that the agencies collected a great deal of useful information about their programs, but much of it was not requested and thus did not reach the Committee, and much of what the Committee did receive was not as useful as it could have been. We also found that communication between the Committee and agency staff on information issues was limited and afforded little opportunity to build a shared understanding of the Committee's needs and how to meet them. At that time, we proposed a strategy for obtaining information to assist program oversight and reauthorization review: (1) select descriptive and evaluative questions to be asked about a program at reauthorization and in interim years, (2) explicitly arrange to obtain oversight information and results of evaluation studies at reauthorization, and (3) provide for increased communication with agency program and evaluation officials to ensure that information needs are understood and requests and reports are suitably framed. At the time, GPRA had recently been enacted, requiring agencies to develop multiyear strategic plans and annual performance plans and reports over a 7-year implementation period. In our 1995 report, we noted that annual reporting under GPRA was expected to fill some of the information gaps we described and that GPRA also emphasized the importance of consultation with Congress as evaluation strategies are planned, goals and objectives are identified, and indicators are selected. We suggested that our proposed process for identifying questions would be useful as agencies prepared to meet GPRA requirements and that consultation with Congress would help ensure that data collected to meet GPRA reporting requirements could also be used to meet the Committee's special needs (for example, to disaggregate performance data in ways important to the Committee). We also saw a need for a useful complement to GPRA reports (and their focus on progress towards goals) that would provide additional categories of information, such as program description, side effects, and comparative advantage to other programs. The Committee had found such information to be useful, especially in connection with major program reauthorizations and policy reviews. Since its enactment, we have been tracking federal agencies' progress in implementing GPRA by identifying promising practices in performance measurement and results-based management, as well as by evaluating agencies' strategic plans and the first two rounds of performance plans. We found that although agencies' fiscal year 2000 performance plans, on the whole, showed moderate improvements over the fiscal year 1999 plans, key weaknesses remained and important opportunities existed to improve future plans to make them more useful to Congress. Overall, the fiscal year 2000 plans provided general, rather than clear, pictures of intended performance, but they had increased their use of results-oriented goals and quantifiable measures. Although some agencies made useful linkages between their budget requests and performance goals, many needed to more directly explain how programs and initiatives would achieve their goals.
Finally, many agencies offered only limited indications that their performance data would be credible, a source of major concern about the usefulness of the plans. This report does not directly evaluate the three agencies’ performance plans but rather looks more broadly at the types of information that authorizing and appropriations committees need from the agencies and how their unmet needs could be met, either through performance plans or through other means. We included program performance information available from sources other than annual performance plans because agencies communicate with congressional committees using a variety of modes—reports, agency Internet sites, hearings, briefings, telephone consultations, e-Mail messages, and other means. We did not assume that annual GPRA performance plans or performance reports are the best or only vehicle for conveying all kinds of performance information to Congress. We conducted our work between May and November 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretaries of Education, Labor, and Health and Human Services and the Director of the Office of Management and Budget. HHS and Labor provided written comments that are reprinted in appendixes II and III. The other agencies either had no comments or provided technical comments. The agencies’ comments are discussed at the end of this letter. We also requested comments from the congressional staff members we interviewed on our characterization of their concerns, and we incorporated the clarifying changes they suggested. Health Surveillance. The Centers for Disease Control and Prevention (CDC) in the Department of Health and Human Services (HHS) supports— through a number of programs—a system of health surveillance activities to monitor, and help prevent and control, infectious and chronic diseases. By working with the states and other partners, CDC—primarily the National Center for Infectious Diseases and the National Center for Chronic Disease Prevention and Health Promotion—provides leadership and funding through grants to state and local public health departments. Grants support research to develop diagnostic tests, prevention interventions, local and state public health laboratories, and information sharing and other infrastructure to facilitate a nationwide surveillance system. CDC centers support critical disease registries (such as the cancer registries) and surveillance tools (such as the Behavioral Risk Factor Survey) and disseminate public health surveillance data. Pensions Oversight. In the Department of Labor (DOL), the Pension and Welfare Benefits Administration (PWBA) oversees the integrity of private sector pensions (as well as health and other welfare benefits) and seeks to increase employer-sponsored pension coverage in the workforce. The Employee Retirement Income Security Act (ERISA) sets minimum standards to ensure that private employee pension plans are established and maintained in a fair and financially sound manner. Employers also have an obligation to provide promised benefits and to satisfy ERISA requirements for managing and administering private pension plans. PWBA tracks and collects annual reports by plan managers on the plan operations, funding, assets, and investments. It develops regulations and conducts enforcement investigations and compliance reviews to deter pension fund mismanagement. 
PWBA also provides information and customer assistance, such as brochures targeted to women, small businesses, and minorities with low participation rates in pension plans, to encourage the growth of employment-based benefits. Postsecondary Student Loans. The Department of Education's Office of Student Financial Assistance (OSFA), a newly created performance-based organization, manages operations of the direct loan program (William D. Ford Federal Direct Student Loan Program) and the guaranteed loan program (Federal Family Education Loan Program), which are major student financial assistance programs. These and other programs under the Higher Education Act of 1965, as amended, aim to help undergraduate and graduate students meet the cost of their education. The agency provides loans to students (or families) either directly through the direct loan program or under the guaranteed loan program, through private banks that lend the money at a federally subsidized rate. In the direct loan program, the student applies through the school to the agency, which transfers funds to the school. Later, a loan servicer (under agency contract) tracks and collects payments on the loan. In the guaranteed loan program, the student applies for the loan through a private lender that then tracks and collects the loan payments. The agency subsidizes the interest rate paid by the borrower. If a borrower defaults, a local guaranty agency reimburses the bank for the defaulted loan, and the department pays the guaranty agency. Congressional staff identified a great diversity of information they wanted to have to enable them to address key questions about program performance—either on a regular basis, to answer recurring questions, or in response to ad hoc inquiries as issues arose. Agencies met some, but not all, of these information needs through a variety of formal and informal means, such as formal reports and hearings and informal consultations. Congressional staff identified a number of recurring information needs, some of which were met through annual documents, such as agencies' budget justification materials, GPRA annual performance plans, or other annual reports. The recurring information needs fell into four broad categories: allocation of program personnel and expenditures across activities; data on the quantity, quality, and efficiency of operations or services; characteristics of the populations or entities served or regulated; and indicators of progress in meeting objectives and side effects. Both authorizing and appropriations staff wanted regular information on how personnel and expenditures were allocated across activities, both to learn what was actually spent on a program or activity and to understand priorities within a program. This information was typically provided to their appropriations committees in the detailed budget justification documents that agencies submit each year with their budget requests. An appropriations staff member indicated that the routine data he wanted on PWBA's program staffing and expenditures were provided by the agency's budget justification documents, and that the agency was forthcoming in responding to requests for additional information. Congressional staff also described wanting information on the quantity, quality, and efficiency of the activities or services provided.
This information was needed to inform them of the nature and scope of a program's activities, as well as to address questions about how well a program was being implemented or administered. They said they found this kind of information in both agency budget justification documents and performance plans. For example, both authorizing and appropriations staff members noted that the Department of Education's budget justification documents and its departmental performance plan met their needs for basic information on trends in program expenditures and the volume and size of student loans and grants-in-aid over time. These data provided them with information about the change over time in the use of different financing options, revealing the potential for an increase in student debt burden. In addition, the department's performance plan included performance indicators and targets for OSFA's response times in processing loan applications, an issue of concern to congressional staff because backlogs in loans being consolidated under the direct loan program had been identified and targeted for increased attention. In this case, Education officials said that a committee report required a biweekly report for 18 months on its loan processing so that the committee could monitor the department's progress in resolving the backlog. Officials said that this report was provided to a total of six committees—the authorizing, appropriations, and budget committees—in both the Senate and House. All three agencies also described their major programs (with some information on program activities and services provided) on their agency Internet sites. Similarly, congressional staff also wanted regular information on the characteristics of the persons or entities the programs serve or regulate. In addition to providing a picture of who benefits from the program, such information can help answer questions about how well program services are targeted to the population most in need of service and how well those targeted populations are reached. The congressional staff described PWBA as good at providing statistics on the private pension plans and participants covered by ERISA in an annual report issued separately from the GPRA requirements. This report, the Private Pension Plan Bulletin, provides the most recent as well as historical data on plans and participants and detailed data on employee coverage and other characteristics by employer size. Finally, the congressional staff also wanted regular information on the program's progress in meeting its objectives and any important side effects that the program might have. The Department of Labor's fiscal year 2000 performance plan supplied information on one of PWBA's goals—to increase the number of employees covered by private pension plans—derived from a survey conducted by the Bureau of the Census (Census). Congressional staff noted their satisfaction with the inclusion of program data on the student loan default rate and default recovery rate as performance measures in the Department of Education's performance plan. The plan also provided data on whether low- and middle-income students' access to postsecondary education was improving over time relative to high-income students' access. These and other measures in the plan, including unmet need for student financial aid, college enrollment rates, and size of debt repayments, were derived from special surveys conducted by the Department of Education or by Census.
Congressional staff identified a number of ad hoc information needs that arose periodically as “hot issues” came up for congressional consideration. Some of the needs were met through existing documents, and many others through informal consultations in response to a request from congressional staff, while still other needs were not met. The ad hoc information needs were similar to but somewhat different from recurring information needs and fell into five broad categories: details about a program’s activities and authority, news of impending change in the program, assessments of emerging issues, projected effects of proposed program changes, and effects and side effects of existing programs. Congressional staff often wanted details about the scope of a program’s activities and authority that were not readily available from the general documents they had. Questions might have been raised by a constituent request or a legislative proposal, in which case the staff member wanted a fairly rapid response to a targeted question. In such cases, congressional staff said they often called the agency’s congressional liaison office, which either handled the request itself or forwarded it to knowledgeable program officials who, in turn, either returned the call to the requester or forwarded the information through the liaison. CDC officials also described referring requesters to the brief program descriptions they maintain on their Internet site. Congressional staff noted that they wanted the agency to proactively inform them, in advance, when there was news of significant impending change in their Member’s district or to a program in which they had been involved. In one case, they wanted to have an opportunity to influence the policy discussions; in another case, they wanted to be prepared when the news appeared in the press. An authorizing committee staff member found that CDC’s targeted distribution of “alerts” provided a very useful “heads up” before the agency issued a press release about a public health concern. The alerts were distributed by e-Mail or faxed to the interested committee staff member or congressional members. During the recent appearance of a rare form of encephalitis in New York City, for example, CDC said that it informed congressional members and interested staff members from that region (as well as their authorizing and appropriations committees) about its findings regarding the source of the disease and explained what CDC was doing about it. Another type of ad hoc information request was for assessments of an issue’s potential threat. Congressional staff described several occasions when a negative incident—such as a disease outbreak—occurred that raised questions about how frequently such incidents occur, how well the public is protected against them, and whether a congressional or legislative response was warranted. Because of the highly specific nature of such requests, the staff said they were usually made by telephone to the agency’s congressional liaison and responded to with a brief, informal consultation or a formal briefing. On one occasion, CDC officials testified at a congressional hearing summarizing their research into antimicrobial-resistant diseases and how CDC’s surveillance programs track and respond to the problem. In another example, in response to a proposed merger of two large private corporations, a staff member wanted to know what the new owner’s obligations were to its holdover employees and how this would affect those employees’ pension benefits. 
In addition, in order to ensure the protection of those employees’ rights, the staff member wanted to know what enforcement options were available to the agency. The staff member indicated that PWBA officials provided this technical assessment and consultation in a timely manner. As either the legislative or executive branch proposed changes to a program, congressional staff wanted projections of the effects of those proposed changes, not only as to whether (and how) the change would fix the problem identified, but also whether it would have undesired side effects. As committee staff discuss proposals, they said they often asked agency officials for informal consultations. If hearings or other more formal deliberations were planned, some kind of formal document might be requested. When an agency proposed a regulation or amended regulation, the agency prepared a formal document for public comment that provided a justification for the change. For example, to reduce the cost of loans to student borrowers, a congressional committee considered reducing the interest rate. However, some lenders expressed concern that a rate reduction would cut into their profit margins, forcing some to drop out of the program. To assess the likelihood of this projected result, the committee staff turned to the estimates of lenders’ profit margins produced by the Office of Management and Budget (OMB) and the Treasury Department. Similarly, as new provisions are implemented, congressional staff might have questions about whether the provisions are operating as planned and having the effects hoped for or the side effects feared. In December 1998, OSFA was designated a performance-based organization (PBO), given increased administrative flexibility, and charged with modernizing the Department’s information systems and improving day-to-day operations. OSFA has provided authorizing and appropriations committee staff with regular reports on its Interim Performance Objectives (also available on its Internet site) that provide measures of efficiency in processing loan and loan consolidation applications and measures of borrower and institutional satisfaction. OSFA has also initiated cost accounting improvements to obtain better data on loans made, serviced, and collected under both the direct and guaranteed loan programs in order to provide baseline data against which to measure its progress in improving operational efficiency. Information needs that congressional staff reported as unmet were similar in content to, but often more specific or detailed than, those that were met. The information needs that congressional staff described as having been met tended to be general, descriptive information about a program’s activities and expenditures (such as those that might support their budget request) or descriptive information about the agency’s activities in response to a specific, often emerging, issue. This information was often provided in a formal report or presentation (such as a briefing or hearing). The information needs that congressional staff described as typically unmet were detailed information on the allocation of funds for activities, descriptive information about the program’s strategies and the issues they addressed, and analyses showing the program’s effects on its objectives. 
The key factors accounting for the gaps in meeting congressional information needs were the following: the presentation of the information was not clear or sufficiently detailed; the information was not readily available to congressional staff; or the information was not available to the agency. In some cases, information on the topics was available or provided, but its presentation was not as useful as it could have been. Congressional staff members noted that neither the budget submission nor the departmental strategic plan demonstrated the link between a CDC cancer screening program, the dollars appropriated for it in the budget, and how this program contributed to meeting the department's strategic objectives. A CDC official noted that, in combination, CDC's performance plan and budget submission did link the strategic objectives with the budget. The official explained that this was in part due to CDC's budget being structured differently from its organization of centers and institutes. A CDC budget work group, formed in early 1999 in response to similar concerns, met with its congressional stakeholders and program partners and is developing a revised budget display that the group hopes will make this information more understandable in CDC's next budget submission. In another situation, congressional staff looked to the performance plan for a clear presentation of PWBA's regulatory strategy that showed how the agency planned to balance its various activities—litigation, enforcement, guidelines, regulations, assistance, and employee education—and how those activities would meet PWBA's strategic goals. The congressional staff wanted to know what PWBA's regulatory priorities were, as well as how PWBA expected the different activities to achieve its goals. However, the departmental plan did not provide a comprehensive picture of PWBA and described only isolated PWBA activities to the extent that they supported departmental goals. Some agency reports did not provide enough detail on issues of concern to the committee. Congressional staff members concerned about PWBA's enforcement efforts wanted detailed information on the patterns of violations to show how many were serious threats to plans and their financial assets, rather than paperwork filing problems. A PWBA official indicated that PWBA could disaggregate its data on violations to show the distribution of various types of violations, but that there would need to be some discussion with the committee staff about what constituted a "paperwork" rather than a "serious" violation. In another case, a congressional staff member was concerned that some patients were experiencing significant delays in obtaining cancer treatment after being screened under the National Breast and Cervical Cancer Early Detection Program. The program focuses on screening and diagnosis, while participating health agencies are to identify and secure other resources to obtain treatment for women in need. Staff wanted to see the distribution of the number of days between screening and beginning treatment, in addition to the median period, in order to assess how many women experienced significant delays. When this issue was raised in a hearing, CDC officials provided the median periods as well as the results of surveillance data that showed that 92 percent of the women diagnosed with breast cancer and invasive cervical cancer had initiated treatment. Some responses to congressional inquiries were not adequately tailored to meet congressional staff's concerns.
For example, in preparing legislation, a congressional staff member needed immediately very specific information about the scope and authority of a program in order to assess whether a proposed legislative remedy was needed. However, he said he received documents containing general descriptive information on the issue instead, which he did not consider relevant to his question. An agency official indicated that this response suggested that the congressional query may not have been specific enough, or that the responding agency official did not have the answer and hoped that those documents would satisfy the requestor. In other cases, staff indicated they obtained this type of information succinctly through a telephone call to the agency’s congressional affairs office, which might direct them to a brief description of the program’s authority, scope, and activities on the agency’s Internet site or refer them to a knowledgeable agency official. One authorizing committee staff person noted that, although the committee staff assigned to an issue develops background on these programs over time, there is rapid turnover in Members’ staff representatives to a committee. Moreover, because these staff are expected to cover a broad range of topics, she thought that they would find particularly useful brief documents that articulate the program’s authority, scope, and major issues, to draw upon as needed. Some congressional information needs were unmet because the information was not readily available, either because it was not requested or reported, or because staff were not informed that it was available. In one instance, concerned about the safety of multiemployer pension plans, congressional staff wanted disaggregated data on the results of enforcement reviews for that type of plan. PWBA officials explained that the ERISA Annual Report to Congress does not highlight enforcement results for particular types of plans. However, they said that they could provide this information if congressional staff specifically requested it. In several cases, the agencies thought that they had made information available by placing a document on the agency’s Internet site, but they had not informed all interested committee staff of the existence or specific location of those documents. For instance, an authorizing committee staff member had heard of long delays in PWBA’s responses to requests for assistance and wanted to know how frequently these delays occurred. In its own agency performance and strategic plans, PWBA included performance measures of its response times to customers requesting assistance and interpretations. But, because those measures were not adopted as part of the departmental performance plan and PWBA did not provide its own performance plan to the authorizing committee staff, this information was not available to those staff. Agency officials said that this information was available because they had posted their strategic plan on the agency’s Internet site. However, the committee staff person was unaware of this document’s presence on the site and thus was unaware that such a measure existed. In some instances, the desired information was not available to the agency. This was because either special data collection was required, it was too early to get the information, the data were controlled by another agency, or some forms of information were difficult to obtain. 
Where congressional questions extend across program or agency boundaries, special studies, coordinated at the department level, might be required to obtain the answers. For example, to address a policy question about how well prenatal services were directed to pockets of need, congressional staff wanted a comparison of the geographic distribution of the incidence of low birth-weight babies with areas served by prenatal programs and with the availability of ultrasound testing. HHS officials explained that although CDC and the National Center for Health Statistics had information on the regional incidence of low birth-weight babies through birth certificate data, these agencies did not have the information on the availability of prenatal services. The Health Resources and Services Administration (another HHS agency), which is concerned with such services, does not have information on the location of all prenatal programs or the availability of ultrasound equipment to link with the birth certificate data on low birth-weight. HHS officials indicated that, if this analysis were requested, the department would need to initiate a special study to collect data on the availability of services to match with existing vital statistics. Some congressional information needs extend beyond what a program collects as part of its operations and thus would require supplemental information or a special data collection effort to obtain. For example, because a student's race is not collected as part of loan applications, the Department of Education supplements its own records on the use of different student finance options with periodic special studies of student borrowers that do collect racial information. Because the different student loan programs maintain their records in separate databases, the office relies on special studies, conducted every 3 years since school year 1986-1987, to examine the full package of financial options students and their families use to pay for postsecondary education. The congressional staff also wanted to obtain trend data on the extent to which all forms of student aid received (e.g., grants, loans, and tax credits) cover the cost of school attendance for low-income students. Education officials said that if published data from these special studies were not adequate, specialized data tabulations could be obtained. In the meantime, OSFA issued a 5-year performance plan in October 1999 that showed how it plans to improve the information systems for the student loan programs in order to improve operations and interconnectivity among the programs. As programs are revised, questions naturally arise about whether the new provisions are operating as planned and having the desired effects or unwanted side effects. Congressional staff identified several questions of this type for the student loan programs due to changes created by the 1998 reauthorization of the Higher Education Act and the separate enactment of a new tuition tax credit: How many students will select each of the new loan repayment options? Which students benefit more from the new tax credit, low- or middle-income? Will the need to verify a family's educational expenses create a new burden for schools' financial aid offices? 
In our discussions with OSFA, officials told us that they will report information on use of the new repayment options in their next annual budget submission, and that they believed the Internal Revenue Service (IRS) would include analyses of who used the tuition tax credit (similar to its analyses of other personal income tax credits) in its publication series, Statistics of Income. Because OSFA does not administer the tax credit, OSFA officials suggested to us that IRS would be responsible for estimates of any reporting burden for schools related to the tax credit. Lastly, some information was not available because it is difficult to obtain. There has been congressional interest in whether a provision that cancels loan obligations for those who enter public school teaching or other public service leads more student borrowers to choose public service careers. Education officials said that a design for a special evaluation had been prepared, but that they had discovered that, because only a small number of student borrowers benefited from this provision, they were unable to obtain a statistically valid sample of these borrowers through national surveys. Determining the effectiveness of federally funded state and local projects in achieving federal goals can be challenging for federal agencies. A CDC official told us that CDC conducts many studies evaluating whether a specific health prevention or promotion practice is effective or not, but that it expects it will take a combination of such practices to produce populationwide health effects. However, it is much more difficult to measure the effects of a combination of practices, especially when such practices are carried out in the context of other state and local health initiatives, than to test the efficacy of one specific health practice at a time. In addition, measuring the effectiveness of health promotion and disease prevention programs related to chronic disease can be difficult in the short term, given the nature of chronic diseases. Helping ensure that congressional stakeholders obtain the information they want requires communication and planning—to understand the form and content of the desired information as well as what can feasibly be obtained, and to arrange to obtain the information. Our analysis uncovered a range of options that agency and congressional staffs could choose from—depending on the circumstances—to improve the usefulness of agency performance information to these congressional staffs. Improved communication might help increase congressional access to existing information, improve the quality and usefulness of existing reports, and plan for obtaining supplemental data in the future. Agency officials said that increased communication between agency and congressional staff could have prevented some of the unmet information needs because they believed that, if requested, they could have provided most of the information congressional staff said they wanted, or arranged for the special analysis required. Increased two-way communication might also make clear what information is and is not available. Each agency has protocols for communication between congressional staff and agency officials, typically requiring the involvement of congressional liaison offices to ensure departmental review and coordination of policy. Agency congressional liaisons and other officials said that they answered some ad hoc inquiries directly or referred congressional staff to existing documents or program specialists. 
Congressional staff said that they were generally able to get responses to their formal and informal inquiries through these channels, but several noted that communication was often very formal and controlled in these settings. Some congressional staff and agency officials found that the informal discussions they had had were very helpful. In one case, agency officials were asked to discuss their program informally with appropriations committee staff; in another case, the incoming agency director scheduled a visit with a subcommittee chair and his staff to describe his plans and learn of their interests. It is our opinion that when key agency or committee staff changes occur, introductory briefings or discussions might help ensure continuity of understanding and open lines of communication that could help smooth the process of obtaining information on a recurring and on an ad hoc basis. Discussion of what might be the most appropriate distribution options for different types of documents might help ensure that the information agencies make available is actually found. For example, authorizing committees might want to routinely receive agencies' annual budget justification documents, which contain detailed information on allocations of resources. Also, although the three agencies aimed to increase the volume of material that was publicly available by posting it on their Internet sites, the information was often not available to congressional staff unless they knew that it existed and where to look for it. For relatively brief and broadly applicable material, like CDC's summary of cost-effective health promotion practices, an agency may decide, as CDC did, to send copies to all congressional offices. Alternatively, to avoid overwhelming congressional staffs with publications, CDC officials sent e-mail or fax alerts to contacts at relevant committees about newly released publications and other recent or upcoming events of potential interest. Our analysis of the types of information the congressional staffs said they wanted on a recurring basis suggests ways the agencies might improve the usefulness of their performance plans and other reports to these committees. In addition, increased communication about the specifics of congressional information needs might help ensure that those needs are understood and addressed. The congressional staff said that they wanted a clear depiction at the program level of the linkages between program resources, strategies, and the objectives they aim to achieve. Of our three case studies, congressional staff indicated that only the Education Department's performance plan provided adequate detail at the program level—the level that they were interested in. As we previously reported, most federal agencies' fiscal year 2000 plans do not consistently show how the program activity funding in their budget accounts would be allocated to agencies' performance goals. And, although most agencies attempted to relate strategies and program goals, few agencies indicated how the strategies would contribute to accomplishing the expected level of performance. One option would be for agencies to consider developing performance plans for their major bureaus or programs and incorporating this information in their department's plan. For example, the HHS Fiscal Year 2000 Performance Plan consisted of a departmentwide summary as well as the annual performance plans developed by its component agencies and submitted as part of the agencies' budget justifications. 
Alternatively, departments that prefer to submit a consolidated plan keyed to departmentwide goals could refer readers to where more specific data could be found in supplementary documents. OMB’s Circular No. A-11 guidance asks agencies to develop a single plan covering an entire agency but notes that, for some agencies, the plan will describe performance on a macro scale by summarizing more detailed information available at different levels in the agency. In these instances, OMB instructs agencies to have ready their more detailed plans specific to a program or component to respond to inquiries for more refined levels of performance information. The congressional staff also said that they wanted, on a recurring basis, data on the quantity, quality, and efficiency of a program’s activities; the characteristics of the population served; and indicators of a program’s progress in meeting its objectives. These categories are consistent with those identified in our 1995 report as the information Congress wants on a routine basis. (Appendix I contains the categories of information and the list of core questions that we proposed committees select from and adapt to meet their needs when requesting information.) Although all three agencies consulted with congressional committees on their strategic plans as required by GPRA, only one consulted with our congressional interviewees on the development of its performance plan and choice of indicators. As we previously reported, agency consultation with both authorizing and appropriations committees as performance measures are selected is likely to make the agencies’ performance plans more useful to those committees. The three agencies’ planned and ongoing efforts in data collection and analysis improvements may improve the quality and responsiveness of their reported information. However, without feedback from the congressional staffs on where presentations were unclear, or where additional detail or content is desired, the reports may still not meet congressional needs. Discussing information needs could also help identify which needs could be addressed in an annual or other recurring report and which could be addressed more feasibly through some other means. In addition to performance plans and reports, the congressional staff also described a need for readily accessible background information on individual programs’ authority, scope, and major issues. Committee staff noted that rapid turnover in Members’ staff representatives to a committee results in some of their colleagues needing a quick introduction to complex programs and their issues. Some of the program and agency descriptions on agency Internet sites were designed for the general public and were not detailed enough to meet the congressional staffs’ needs. To obtain new information about special subpopulations or emerging issues, congressional staff would have to make direct requests of the agency. Agency officials told us that they welcomed these requests and would do what they could to meet them. However, depending on the information requested and the time period in which a response is needed, it might not be possible for the agency to obtain it in time. Therefore, discussion between congressional staff and agency officials concerning the information needed is important to clarify what is desired and what is feasible to obtain, as well as to arrange for obtaining the information. 
In some cases, the agencies said that they were able to conduct special tabulations to obtain the desired information. In other cases, they said that more data collection or analysis efforts might be required and that they would need some initial planning to determine how much time and resources it would take to obtain the requested information. Because it can be costly to obtain some information, advance agreement on the information content and format might avoid some frustration on both sides by clarifying expectations. In a couple of cases, when congressional staff members learned that the information was not readily available and would be costly to obtain, they were satisfied to accept a less precise or less detailed response. Where congressional staff expect certain information will be important in future congressional considerations, advance planning for its collection would help ensure its availability in the desired format when it is needed. In some cases, agencies may be able to alter their information systems to track some new provision; in others, they may have to plan new data collection efforts. As stated in our 1995 report, communication is critical at two points in obtaining special studies: when a Committee frames a request for information, to ensure that the agency understands what is wanted and thus can alert the Committee to issues of content or feasibility that need resolution; and as report drafting begins, to assist the agency in understanding the issues that will be before the Committee and what kind of presentation format is thus likely to be most useful. The Departments of Health and Human Services and Labor provided written comments on a draft of this report, which are reprinted in appendixes II and III. Both HHS and Labor stated that, in general, the report is balanced and contains useful ideas for improving communications between federal agencies and congressional committees. HHS also expressed two concerns. One concern was that the report suggested that the Department did not provide performance information at the program level. It said its component agencies provided this information in their own performance plans, which are presented as part of their congressional budget justifications. We have changed the text to clarify that the HHS Fiscal Year 2000 Performance Plan consisted of a departmentwide summary as well as the performance plans submitted as part of its component agencies’ congressional budget justifications. However, because we understand that these budget justifications were not widely distributed beyond the appropriations committees, we remain concerned that this performance information was not made readily available to authorizing committee staff. HHS’ other concern was that the opening paragraphs of the report implied that it would emphasize GPRA as the primary medium for disseminating agency performance information although, it noted, the scope of the report is appropriately much broader. The Committee’s expectations for and concerns about agencies’ performance plans prepared under GPRA were the impetus for this report. However, the Committee also recognized that these plans and reports are only one mechanism to provide performance information to Congress and thus broadened the focus of our work. Officials at the Department of Education suggested no changes and said that they appreciated recognition of their efforts to work collaboratively with Congress and provide good management for the department’s programs. 
OMB, HHS, and PWBA provided technical comments that we incorporated where appropriate throughout the text. To explore how agencies might improve the usefulness of the performance information they provide Congress, we conducted case studies of the extent to which the relevant authorizing and appropriations committee staffs obtained the information they wanted about three program areas. These cases were selected in consultation with the requesting committee's staff to represent programs whose performance information they felt could be improved and to represent a range of program structures and departments under the Committee's jurisdiction. For example, one selection (pension oversight) is a regulatory program in the Department of Labor; the other two (student loans and health surveillance) represent service programs in the Departments of Education and Health and Human Services. Pension oversight represents the direct operations of a federal agency, while the other cases operate through state and local agencies or the private sector. Each case represents a program or cluster of programs administered by an agency within these departments. To identify congressional information needs and the extent to which they were met, we interviewed staff members recommended by the minority and majority staff directors of the authorizing and appropriations committees for the selected agencies. We asked the staffs to identify what information they needed to address the key policy questions or decisions they faced over the preceding 2 years, and whether their information needs were met. To identify the reasons for the information gaps and how in practice the agencies might better meet those congressional information needs, we interviewed both agency officials and congressional staff; reviewed agency materials; and drew upon our experience with various data collection, analysis, and reporting strategies. We are sending copies of this report to Senator Edward Kennedy, Ranking Minority Member of your committee; Senator Ted Stevens, Chairman, and Senator Robert Byrd, Ranking Minority Member, Senate Committee on Appropriations; Representative William Goodling, Chairman, and Representative William Clay, Ranking Minority Member, House Committee on Education and the Workforce; Representative Tom Bliley, Chairman, and Representative John Dingell, Ranking Minority Member, House Committee on Commerce; and Representative Bill Young, Chairman, and Representative David Obey, Ranking Minority Member, House Committee on Appropriations. We are also sending copies of this report to the Honorable Alexis Herman, Secretary of Labor; the Honorable Donna Shalala, Secretary of Health and Human Services; the Honorable Richard W. Riley, Secretary of Education; and the Honorable Jacob Lew, Director, Office of Management and Budget. We will also make copies available to others on request. If you have any questions concerning this report, please call me or Stephanie Shipman at (202) 512-7997. Another major contributor to this report was Elaine Vaurio, Project Manager. Overall, what activities are conducted? By whom? How extensive and costly are the activities, and whom do they reach? If conditions, activities, and purposes are not uniform throughout the program, in what significant respects do they vary across program components, providers, or subgroups of clients? What progress has been made in implementing new provisions? Have feasibility or management problems become evident? 
If activities and products are expected to conform to professional standards or to program specifications, have they done so? Have program activities or products focused on appropriate issues or problems? To what extent have they reached the appropriate people or organizations? Do current targeting practices leave significant needs unmet (problems not addressed, clients not reached)? Overall, has the program led to improvements consistent with its purpose? If impact has not been uniform, how has it varied across program components, approaches, providers, or client subgroups? Are there components or providers that consistently have failed to show an impact? Have program activities had important positive or negative side effects, either for program participants or outside the program? Is this program's strategy more effective in relation to its costs than others that serve the same purpose? Performance Budgeting: Fiscal Year 2000 Progress in Linking Plans With Budgets (GAO/AIMD-99-239R, July 30, 1999). Performance Plans: Selected Approaches for Verification and Validation of Agency Performance Information (GAO/GGD-99-139, July 30, 1999). Managing for Results: Opportunities for Continued Improvements in Agencies' Performance Plans (GAO/GGD/AIMD-99-215, July 20, 1999). Regulatory Accounting: Analysis of OMB's Reports on the Costs and Benefits of Federal Regulation (GAO/GGD-99-59, Apr. 20, 1999). Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans With Budgets (GAO/AIMD-99-67, Apr. 12, 1999). Emerging Infectious Diseases: Consensus on Needed Laboratory Capacity Could Strengthen Surveillance (GAO/HEHS-99-26, Feb. 5, 1999). Managing for Results: Measuring Program Results That Are Under Limited Federal Control (GAO/GGD-99-16, Dec. 11, 1998). Pension Benefit Guaranty Corporation: Financial Condition Improving, but Long-Term Risks Remain (GAO/HEHS-99-5, Oct. 16, 1998). Managing for Results: An Agenda to Improve the Usefulness of Agencies' Annual Performance Plans (GAO/GGD/AIMD-98-228, Sept. 8, 1998). Student Loans: Characteristics of Students and Default Rates at Historically Black Colleges and Universities (GAO/HEHS-98-90, Apr. 9, 1998). Credit Reform: Greater Effort Needed to Overcome Persistent Cost Estimation Problems (GAO/AIMD-98-14, Mar. 30, 1998). Managing for Results: Critical Issues for Improving Federal Agencies' Strategic Plans (GAO/GGD-97-180, Sept. 16, 1997). Direct Student Loans: Analyses of the Income Contingent Repayment Option (GAO/HEHS-97-155, Aug. 21, 1997). Student Financial Aid Information: Systems Architecture Needed to Improve Programs' Efficiency (GAO/AIMD-97-122, July 29, 1997). Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997). High-Risk Series: Student Financial Aid (GAO/HR-97-11, Feb. 1997). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996). Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).
Pursuant to a congressional request, GAO reviewed three agencies' annual performance plans to determine whether the plans met congressional requirements, focusing on: (1) which aspects of congressional information needs were met by the agency's annual performance plan or some other source; (2) where those needs were not met, and what accounted for the discrepancies or gaps in the information provided; and (3) what options agencies could use to practically and efficiently provide the desired performance information. GAO noted that: (1) the congressional staff GAO interviewed identified a great diversity of information they would like to have to address key questions about program performance; (2) the agencies GAO studied met some, but not all, of these recurring and ad hoc congressional information needs through both formal and informal means; (3) the congressional staffs were looking for recurring information on spending priorities within programs, the quality, quantity, and efficiency of program operations, the populations served or regulated, as well as the program's progress in meeting its objectives; (4) some of these recurring needs were met through formal agency documents, such as annual budget request justification materials, annual performance plans, or other recurring reports; (5) other congressional information needs were ad hoc, requiring more detailed information or analysis as issues arose for congressional consideration; (6) information needs that the congressional staffs reported as unmet were similar in content to, but often more specific or detailed than, those that were met; (7) several factors accounted for the gaps in meeting congressional information needs; (8) some information the agencies provided did not fully meet the congressional staffs' needs because the presentation was not clear, directly relevant, or sufficiently detailed; (9) other information was not readily available to the congressional staffs; (10) in some cases, the agencies said they did not have the information because it was either too soon or too difficult to obtain it; (11) improved communication between congressional staff and agency officials might help ensure that congressional information needs are understood, and that arrangements are made to meet them; (12) greater consultation on how best to distribute agency documents might improve congressional access to existing reports; (13) posting publications on Internet sites can increase congressional staffs' access to agency information without their having to specifically request it, but staff still need to learn that the information exists and where to look for it; and (14) agencies' annual Government Performance and Results Act performance plans and other reports might be more useful to congressional committees if they addressed the issues congressional staff said they wanted addressed on a recurring basis, and if agency staff consulted with the committees on their choice of performance measures.
You are an expert at summarizing long articles. Proceed to summarize the following text: FDA is responsible for overseeing the safety and effectiveness of medical devices that are marketed in the United States, whether manufactured in domestic or foreign establishments. All establishments that manufacture medical devices for marketing in the United States must register with FDA. As part of its efforts to ensure the safety, effectiveness, and quality of medical devices, FDA is responsible for inspecting certain domestic and foreign establishments to ensure that they meet manufacturing standards established in FDA's quality system regulation. FDA does not have authority to require foreign establishments to allow the agency to inspect their facilities. However, FDA has the authority to prevent the importation of products manufactured at establishments that refuse to allow an FDA inspection. Unlike food, for which FDA primarily relies on inspections at the border, physical inspection of manufacturing establishments is a critical mechanism in FDA's process to ensure that medical devices and drugs are safe and effective and that manufacturers adhere to good manufacturing practices. Within FDA, CDRH assures the safety and effectiveness of medical devices. Among other things, CDRH works with ORA, which conducts inspections of both domestic and foreign establishments to ensure that devices are produced in conformance with federal statutes and regulations, including the quality system regulation. FDA may conduct inspections before and after medical devices are approved or otherwise cleared to be marketed in the United States. Premarket inspections are conducted before FDA will approve U.S. marketing of a new medical device that is not substantially equivalent to one that is already on the market. Premarket inspections primarily assess manufacturing facilities, methods, and controls and may verify pertinent records. Postmarket inspections are conducted after a medical device has been approved or otherwise cleared to be marketed in the United States and include several types of inspections: (1) Quality system inspections are conducted to assess compliance with applicable FDA regulations, including the quality system regulation to ensure good manufacturing practices and the regulation requiring reporting of adverse events. These inspections may be comprehensive or abbreviated, which differ in the scope of inspectional activity. Comprehensive postmarket inspections assess multiple aspects of the manufacturer's quality system, including management controls, design controls, corrective and preventative actions, and production and process controls. Abbreviated postmarket inspections assess only some of these aspects, but always assess corrective and preventative actions. (2) For-cause and compliance follow-up inspections are initiated in response to specific information that raises questions or problems associated with a particular establishment. (3) Postmarket audit inspections are conducted within 8 to 12 months of a premarket application's approval to examine any changes in the design, manufacturing process, or quality assurance systems. FDA determines which establishments to inspect using a risk-based strategy. High-priority inspections include premarket approval inspections for class III devices, for-cause inspections, inspections of establishments that have had a high frequency of device recalls, and inspections involving other devices and manufacturers that FDA considers high risk. 
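The risk-based strategy is described here only at the level of the factors FDA weighs; neither the statute nor this testimony prescribes a scoring formula. Purely as an illustration of how such a prioritization could be expressed, the sketch below assigns hypothetical weights to the factors named above (pending premarket approval for a class III device, for-cause information, and recall frequency) plus an inspection-history term of the kind noted in the next sentence. Every field name, weight, and sample value is an assumption for illustration, not FDA's method.

```python
# Illustrative only: a toy risk-scoring scheme for prioritizing establishment
# inspections. FDA's actual criteria are described in the surrounding text;
# the field names and weights below are hypothetical.

from dataclasses import dataclass

@dataclass
class Establishment:
    name: str
    device_class: int                  # 2 or 3
    pending_premarket_approval: bool   # class III premarket approval pending
    recent_recalls: int                # device recalls in recent years
    years_since_last_inspection: float
    for_cause_flag: bool               # specific problem or complaint reported

def risk_score(e: Establishment) -> float:
    """Higher score means higher inspection priority (hypothetical weights)."""
    score = 0.0
    if e.pending_premarket_approval and e.device_class == 3:
        score += 5.0                              # premarket approval work
    if e.for_cause_flag:
        score += 4.0                              # for-cause inspections
    score += min(e.recent_recalls, 5)             # high frequency of recalls
    score += 0.5 * e.years_since_last_inspection  # overdue establishments rise
    return score

establishments = [
    Establishment("A", 3, True, 0, 1.0, False),
    Establishment("B", 2, False, 3, 6.0, False),
    Establishment("C", 2, False, 0, 2.0, True),
]

for e in sorted(establishments, key=risk_score, reverse=True):
    print(f"{e.name}: score {risk_score(e):.1f}")
```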
The establishment's inspection history may also be considered. A provision in FDAAA may assist FDA in making decisions about which establishments to inspect because it authorizes the agency to accept voluntary submissions of audit reports addressing manufacturers' conformance with internationally established standards for the purpose of setting risk-based inspectional priorities. FDA's programs for domestic and foreign inspections by accredited third parties provide an alternative to the traditional FDA-conducted comprehensive postmarket quality system inspection for eligible manufacturers of class II and III medical devices. MDUFMA required FDA to accredit third persons—which are organizations—to conduct inspections of certain establishments. In describing this requirement, the House of Representatives Committee on Energy and Commerce noted that some manufacturers have faced an increase in the number of inspections required by foreign countries, and that the number of inspections could be reduced if the manufacturers could contract with a third-party organization to conduct a single inspection that would satisfy the requirements of both FDA and foreign countries. Manufacturers that meet eligibility requirements may request a postmarket inspection by an FDA-accredited organization. The eligibility criteria for requesting an inspection of an establishment by an accredited organization include that the manufacturer markets (or intends to market) a medical device in a foreign country and that the establishment to be inspected has not received warnings for significant deviations from compliance requirements on its last inspection. MDUFMA also established minimum requirements for organizations to be accredited to conduct third-party inspections, including protecting against financial conflicts of interest and ensuring the competence of the organization to conduct inspections. FDA developed a training program for inspectors from accredited organizations that involves both formal classroom training and completion of three joint training inspections with FDA. Each individual inspector from an accredited organization must complete all training requirements successfully before being cleared to conduct independent inspections. FDA relies on manufacturers to volunteer to host these joint inspections, which count as FDA postmarket quality system inspections. A manufacturer that is cleared to have an inspection by an accredited third party enters an agreement with the approved accredited organization and schedules an inspection. Once the accredited organization completes its inspection, it prepares a report and submits it to FDA, which makes the final assessment of compliance with applicable requirements. FDAAA added a requirement that accredited organizations notify FDA of any withdrawal, suspension, restriction, or expiration of a certificate of conformance with quality systems standards (such as those established by the International Organization for Standardization) for establishments they inspected for FDA. In addition to the Accredited Persons Inspection Program, FDA has a second program for accredited third-party inspections of medical device establishments. On September 7, 2006, FDA and Health Canada announced the establishment of the Pilot Multi-purpose Audit Program (PMAP). This pilot program was designed to allow qualified third-party organizations to perform a single inspection that would meet the regulatory requirements of both the United States and Canada. 
The third-party organizations eligible to conduct inspections through PMAP are those that FDA accredited for its Accredited Persons Inspection Program (and that completed all required training for that program) and that are also authorized to conduct inspections of medical device establishments for Health Canada. To be eligible to have a third-party inspection through PMAP, manufacturers must meet all criteria established for the Accredited Persons Inspection Program. As with the Accredited Persons Inspection Program, manufacturers must apply to participate and be willing to pay an accredited organization to conduct the inspection. FDA relies on multiple databases to manage its program for inspecting medical device manufacturing establishments. DRLS contains information on domestic and foreign medical device establishments that have registered with FDA. Establishments that are involved in the manufacture of medical devices intended for commercial distribution in the United States are required to register annually with FDA. These establishments provide information to FDA, such as establishment name and address and the medical devices they manufacture. As of October 1, 2007, establishments are required to register electronically through FDA's Unified Registration and Listing System and certain medical device establishments pay an annual establishment registration fee, which in fiscal year 2008 is $1,706. OASIS contains information on medical devices and other FDA-regulated products imported into the United States, including information on the establishment that manufactured the medical device. The information in OASIS is automatically generated from data managed by U.S. Customs and Border Protection, which are originally entered by customs brokers based on the information available from the importer. FACTS contains information on FDA's inspections, including those of domestic and foreign medical device establishments. FDA investigators enter information into FACTS following completion of an inspection. According to FDA data, more than 23,600 establishments that manufacture medical devices were registered as of September 2007, of which 10,600 reported that they manufacture class II or III medical devices. More than half—about 5,600—of these establishments were located in the United States. As of September 2007, there were more registered establishments in China and Germany reporting that they manufacture class II or III medical devices than in any other foreign countries. Canada, Taiwan, and the United Kingdom also had a large number of registered establishments. (See fig. 1.) Registered foreign establishments reported that they manufacture a variety of class II and III medical devices for the U.S. market. For example, common class III medical devices included coronary stents, pacemakers, and contact lenses. FDA has not met the statutory requirement to inspect domestic establishments manufacturing class II or III medical devices every 2 years. The agency conducted relatively few inspections of foreign establishments. The databases that provide FDA with data about the number of foreign establishments manufacturing medical devices for the U.S. market contain inaccuracies. In addition, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA—both in human resources and logistics. 
From fiscal year 2002 through fiscal year 2007, FDA primarily inspected establishments located in the United States, where more than half of the 10,600 registered establishments that reported manufacturing class II or III medical devices are located. In contrast, FDA inspected relatively few foreign medical device establishments. During this period, FDA conducted an average of 1,494 domestic and 247 foreign establishment inspections each year. This suggests that each year FDA inspects about 27 percent of registered domestic establishments that reported manufacturing class II or class III medical devices and about 5 percent of such foreign establishments. The inspected establishments were in the United States and 44 foreign countries. Of the foreign inspections, more than two-thirds were in 10 countries. Most of the countries with the highest number of inspections were also among those with the largest number of registered establishments that reported manufacturing class II or III medical devices. The lowest rate of inspections in these 10 countries was in China, where 64 inspections were conducted in this 6-year period and almost 700 establishments were registered. (See table 1.) Despite its focus on domestic inspections, FDA has not met the statutory requirement to inspect domestic establishments manufacturing class II or III medical devices every 2 years. For domestic establishments, FDA officials estimated that, on average, the agency inspects class II manufacturers every 5 years and class III manufacturers every 3 years. For foreign establishments—for which there is no comparable inspection requirement—FDA officials estimated that the agency inspects class II manufacturers every 27 years and class III manufacturers every 6 years. FDA’s inspections of medical device establishments, both domestic and foreign, are primarily postmarket inspections. While premarket inspections are generally FDA’s highest priority, relatively few have to be performed in any given year. Therefore, FDA focuses its resources on postmarket inspections. From fiscal year 2002 through fiscal year 2007, 95 percent of the 8,962 domestic establishment inspections and 89 percent of the 1,481 foreign establishment inspections were for postmarket purposes. (See fig. 2.) FDA’s databases on registration and imported products provide divergent estimates regarding the number of foreign medical device manufacturing establishments. DRLS provides FDA with information about domestic and foreign medical device establishments and the products they manufacture for the U.S. market. According to DRLS, as of September 2007, 5,616 domestic and 4,983 foreign establishments that reported manufacturing a class II or III medical device for the U.S. market had registered with FDA. However, these data contain inaccuracies because establishments may register with FDA but not actually manufacture a medical device or may manufacture a medical device that is not marketed in the United States. FDA officials told us that their more frequent inspections of domestic establishments allow them to more easily update information about whether a domestic establishment is subject to inspection. In addition to DRLS, FDA obtains information on foreign establishments from OASIS, which tracks the import of medical devices. While not intended to provide a count of establishments, OASIS does contain information about the medical devices actually being imported into the United States and the establishments manufacturing them. 
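Purely to illustrate the kind of electronic cross-check that linked registration and import data could support (the next paragraph explains why such a comparison currently has to be done manually), the sketch below matches a registration-style list against import-style records on a crudely normalized name-and-country key. The column names, sample rows, and matching rule are hypothetical assumptions, not FDA's systems or data; real record linkage would need far more robust matching.

```python
# Illustrative only: cross-checking a registration list (DRLS-like) against
# import records (OASIS-like). Columns, sample data, and the matching key
# are hypothetical.

import pandas as pd

def normalize(name: str) -> str:
    """Crude normalization so trivially different spellings collapse together."""
    return " ".join(name.upper().replace(",", " ").replace(".", " ").split())

registered = pd.DataFrame({
    "name":    ["Acme Devices Ltd", "Beta Medical GmbH", "Gamma Corp"],
    "country": ["GB", "DE", "CN"],
})
# Import lines often repeat an establishment, with small spelling variations.
imports = pd.DataFrame({
    "manufacturer": ["ACME DEVICES, LTD.", "Acme Devices Ltd", "Delta Instruments"],
    "country":      ["GB", "GB", "TW"],
})

registered["key"] = registered["name"].map(normalize) + "|" + registered["country"]
imports["key"] = imports["manufacturer"].map(normalize) + "|" + imports["country"]

reg_keys, imp_keys = set(registered["key"]), set(imports["key"])

print("Registered but no observed imports:", sorted(reg_keys - imp_keys))
print("Importing but not found in the registry:", sorted(imp_keys - reg_keys))
print("Distinct importing establishments after de-duplication:", len(imp_keys))
```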
However, inaccuracies in OASIS prevent FDA from using it to develop a list of establishments subject to inspection. OASIS contains duplicate records for a single establishment because of inaccurate data entry by customs brokers at the border. According to OASIS, in fiscal year 2007, there were as many as 22,008 foreign establishments that manufactured class II medical devices for the U.S. market and 3,575 foreign establishments that manufactured class III medical devices for the U.S. market. Despite the divergent estimates of foreign establishments generated by DRLS and OASIS, FDA does not routinely verify the data within each database. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, the databases cannot exchange information to be compared electronically and any comparisons are done manually. Efforts are underway that could improve FDA's databases. FDA officials suggested that, because manufacturers are now required to pay an annual establishment registration fee, manufacturers may be more concerned about the accuracy of the registration data they submit. They also told us that, because of the registration fee, manufacturers may be less likely to register if they do not actually manufacture a medical device for the U.S. market. In addition, FDA officials stated that the agency is pursuing various initiatives to try to address the inaccuracies in OASIS, such as providing a unique identifier for each foreign establishment to reduce duplicate entries for individual establishments. Inspections of foreign establishments pose unique challenges to FDA—both in human resources and logistics. FDA does not have a dedicated cadre of investigators that only conduct foreign medical device establishment inspections; those staff who inspect foreign establishments also inspect domestic establishments. Among those qualified to inspect foreign establishments, FDA relies on staff to volunteer to conduct inspections. FDA officials told us that it is difficult to recruit investigators to voluntarily travel to certain countries. However, they added that if the agency could not find an individual to volunteer for a foreign inspection trip, it would mandate the travel. Logistically, foreign medical device establishment inspections are difficult to extend even if problems are identified because the trips are scheduled in advance. Foreign medical device establishment inspections are also logistically challenging because investigators do not receive independent translation support from FDA or the State Department and may rely on English-speaking employees of the inspected establishment or the establishment's U.S. agent to translate during an inspection. Few inspections of medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs—the Accredited Persons Inspection Program and PMAP. FDAAA specified several changes to the requirements for inspections by accredited third parties that could result in increased participation by manufacturers. Few inspections have been conducted through FDA's Accredited Persons Inspection Program since March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections. 
Through January 11, 2008, five inspections had been conducted independently by accredited organizations (two inspections of domestic establishments and three inspections of foreign establishments), an increase of three since we reported on this program one year ago. As of January 11, 2008, 16 third-party organizations were accredited, and individuals from 8 of these organizations had completed FDA’s training requirements and been cleared to conduct independent inspections. As of January 8, 2008, FDA and accredited organizations had conducted 44 joint training inspections. Fewer manufacturers volunteered to host training inspections than have been needed for all of the accredited organizations to complete their training. Moreover, scheduling these joint training inspections has been difficult. FDA officials told us that, when appropriate, staff are instructed to ask manufacturers to host a joint training inspection at the time they notify the manufacturers of a pending inspection. FDA schedules inspections a relatively short time prior to an actual inspection, and as we reported in January 2007, some accredited organizations have not been able to participate because they had prior commitments. As we reported in January 2007, manufacturers’ decisions to request an inspection by an accredited organization might be influenced by both potential incentives and disincentives. According to FDA officials and representatives of affected entities, potential incentives to participation include the opportunity to reduce the number of inspections conducted to meet FDA and other countries’ requirements. For example, one inspection conducted by an accredited organization was a single inspection designed to meet the requirements of FDA, the European Union, and Canada. Another potential incentive mentioned by FDA officials and representatives of affected entities is the opportunity to control the scheduling of the inspection by an accredited organization by working with the accredited organization. FDA officials and representatives of affected entities also mentioned potential disincentives to having an inspection by an accredited organization. These potential disincentives include bearing the cost for the inspection, doubts about whether accredited organizations can cover multiple requirements in a single inspection, and uncertainty about the potential consequences of an inspection that otherwise may not occur in the near future—consequences that could involve regulatory action. Changes specified by FDAAA have the potential to eliminate certain obstacles to manufacturers’ participation in FDA’s programs for inspections by accredited third parties that were associated with manufacturers’ eligibility. For example, an eligibility requirement that foreign establishments be periodically inspected by FDA was eliminated. Representatives of the two organizations that represent medical device manufacturers with whom we spoke about FDAAA told us that the changes in eligibility requirements could eliminate certain obstacles and therefore potentially increase their participation. These representatives also noted that key incentives and disincentives to manufacturers’ participation remain. FDA officials told us that they are currently revising their guidance to industry in light of FDAAA and expect to issue the revised guidance during fiscal year 2008. It is too soon to tell what impact these changes will have on manufacturers’ participation. 
FDA officials acknowledged that manufacturers’ participation in the Accredited Persons Inspection Program has been limited. In December 2007, FDA established a working group to assess the successes and failures of this program and to identify ways to increase participation. Representatives of the two organizations that represent medical device manufacturers with whom we recently spoke stated that they believe manufacturers remain interested in the Accredited Persons Inspection Program. The representative of one large, global manufacturer of medical devices told us that it is in the process of arranging to have 20 of its domestic and foreign device manufacturing establishments inspected by accredited third parties. As of January 11, 2008, two inspections, both of domestic establishments, had been conducted through PMAP, FDA’s second program for inspections by accredited third parties. Although it is too soon to tell what the benefits of PMAP will be, the program is more limited than the Accredited Persons Inspection Program and may pose additional disincentives to participation by both manufacturers and accredited organizations. Specifically, inspections through PMAP would be designed to meet the requirements of the United States and Canada, whereas inspections conducted through the Accredited Persons Inspection Program could be designed to meet the requirements of other countries. In addition, two of the five representatives of affected entities noted that in contrast to inspections conducted through the Accredited Persons Inspection Program, inspections conducted through PMAP could undergo additional review by Health Canada. Health Canada will review inspection reports submitted through this pilot program to ensure they meet its standards. This extra review poses a greater risk of unexpected outcomes for the manufacturer and the accredited organization, which could be a disincentive to participation in PMAP that is not present with the Accredited Persons Inspection Program. Americans depend on FDA to ensure the safety and effectiveness of medical products, including medical devices, manufactured throughout the world. However, our findings regarding inspections of medical device manufacturers indicate weaknesses that mirror those presented in our November 2007 testimony regarding inspections of foreign drug manufacturers. In addition, they are consistent with the FDA Science Board’s findings that FDA’s ability to fulfill its regulatory responsibilities is jeopardized, in part, by information technology and human resources challenges. We recognize that FDA has expressed the intention to improve its data management, but it is too early to tell whether the intended changes will ultimately enhance the agency’s ability to manage its inspection programs. We and others have suggested that the use of accredited third parties could improve FDA’s ability to meet its inspection responsibilities. However, the implementation of its programs for inspecting medical device manufacturers has resulted in little progress. To date, its programs for inspections by accredited third parties have not assisted FDA in meeting its regulatory responsibilities nor have they provided a rapid or substantial increase in the number of inspections performed by these organizations, as originally intended. 
Although recent statutory changes to the requirements for inspections by accredited third parties may encourage greater participation in these programs, the lack of meaningful progress raises questions about the practicality and effectiveness of establishing similar programs that rely on third parties to quickly help FDA fulfill other responsibilities. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the subcommittee may have at this time. For further information about this testimony, please contact Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Geraldine Redican-Bigott, Assistant Director; Kristen Joan Anderson; Katherine Clark; Robert Copeland; William Hadley; Cathy Hamann; Mollie Hertel; Julian Klazkin; Lisa Motley; Daniel Ries; and Suzanne Worth made key contributions to this testimony. In congressional testimony in November 2007, we presented our preliminary findings on the Food and Drug Administration's (FDA) program for inspecting foreign drug manufacturers. We found that (1) FDA's effectiveness in managing the foreign drug inspection program continued to be hindered by weaknesses in its databases; (2) FDA inspected relatively few foreign establishments; and (3) the foreign inspection process involved unique circumstances that were not encountered domestically. Our preliminary findings indicated that more than 9 years after we issued our last report on FDA's foreign drug inspection program, FDA's effectiveness in managing this program continued to be hindered by weaknesses in its databases. FDA did not know how many foreign establishments were subject to inspection. Instead of maintaining a list of such establishments, FDA relied on information from several databases that were not designed for this purpose. One of these databases contained information on foreign establishments that had registered to market drugs in the United States, while another contained information on drugs imported into the United States. One database indicated about 3,000 foreign establishments could have been subject to inspection in fiscal year 2007, while another indicated that about 6,800 foreign establishments could have been subject to inspection in that year. Despite the divergent estimates of foreign establishments subject to inspection generated by these two databases, FDA did not verify the data within each database. For example, the agency did not routinely confirm that a registered establishment actually manufactured a drug for the U.S. market. However, FDA used these data to generate a list of 3,249 foreign establishments from which it prioritized establishments for inspection. Because FDA was not certain how many foreign drug establishments were actually subject to inspection, the percentage of such establishments that had been inspected could not be calculated with certainty. We found that FDA inspected relatively few foreign drug establishments, as shown in table 2. Using the list of 3,249 foreign drug establishments from which FDA prioritized establishments for inspection, we found that the agency may inspect about 7 percent of foreign drug establishments in a given year. At this rate, it would take FDA more than 13 years to inspect each foreign drug establishment on this list once, assuming that no additional establishments are subject to inspection. 
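As a rough back-of-the-envelope check of that figure (an illustrative calculation using only the numbers cited above, not GAO's published methodology), inspecting about 7 percent of a 3,249-establishment list each year implies roughly

\[
0.07 \times 3{,}249 \approx 227 \text{ inspections per year}, \qquad \frac{3{,}249}{227} \approx 14 \text{ years},
\]

which is consistent with the statement that one full pass through the list would take more than 13 years.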
FDA's data indicated that some foreign drug manufacturers had not received an inspection, but FDA could not provide the exact number of foreign drug establishments that had never been inspected. Most of the foreign drug inspections were conducted as part of processing a new drug application or an abbreviated new drug application, rather than as current good manufacturing practices (GMP) surveillance inspections, which are used to monitor the quality of marketed drugs. FDA used a risk-based process, based in part on data from its registration and import databases, to develop a prioritized list of foreign drug establishments for GMP surveillance inspections in fiscal year 2007. According to FDA, about 30 such inspections were completed in fiscal year 2007, and at least 50 were targeted for inspection in fiscal year 2008. Further, inaccuracies in the data on which this risk-based process depended limited its effectiveness. Finally, the very nature of the foreign drug inspection process involved unique circumstances that were not encountered domestically. For example, FDA did not have a dedicated staff to conduct foreign drug inspections and relied on those inspecting domestic establishments to volunteer for foreign inspections. While FDA may conduct unannounced GMP inspections of domestic establishments, it did not arrive unannounced at foreign establishments. It also lacked the flexibility to easily extend foreign inspections if problems were encountered due to the need to adhere to an itinerary that typically involved multiple inspections in the same country. Finally, language barriers can make foreign inspections more difficult to conduct than domestic ones. FDA did not generally provide translators to its inspection teams. Instead, they may have had to rely on an English-speaking representative of the foreign establishment being inspected, rather than an independent translator.
As part of the Food and Drug Administration's (FDA) oversight of the safety and effectiveness of medical devices marketed in the United States, it inspects domestic and foreign establishments where these devices are manufactured. To help FDA address shortcomings in its inspection program, the Medical Device User Fee and Modernization Act of 2002 required FDA to accredit third parties to inspect certain establishments. In response, FDA has implemented two such voluntary programs. GAO previously reported on the status of one of these programs, citing concerns regarding its implementation and factors that may influence manufacturers' participation. (Medical Devices: Status of FDA's Program for Inspections by Accredited Organizations, GAO-07-157 , January 2007.) This statement (1) assesses FDA's management of inspections of establishments--particularly those in foreign countries--manufacturing devices for the U.S. market, and (2) provides the status of FDA's programs for third-party inspections of medical device manufacturing establishments. GAO interviewed FDA officials; reviewed pertinent statutes, regulations, guidance, and reports; and analyzed information from FDA databases. GAO also updated its previous work on FDA's programs for inspections by accredited third parties. FDA has not met the statutory requirement to inspect certain domestic establishments manufacturing medical devices every 2 years, and the agency faces challenges inspecting foreign establishments. FDA primarily inspected establishments located in the United States. The agency has not met the biennial inspection requirement for domestic establishments manufacturing medical devices that FDA has classified as high risk, such as pacemakers, or medium risk, such as hearing aids. FDA officials estimated that the agency has inspected these establishments every 3 years (for high risk devices) or 5 years (for medium risk devices). There is no comparable requirement to inspect foreign establishments, and agency officials estimate that these establishments have been inspected every 6 years (for high risk devices) or 27 years (for medium risk devices). FDA faces challenges in managing its inspections of foreign medical device establishments. Two databases that provide FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market contain inaccuracies that create disparate estimates of establishments subject to FDA inspection. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, these databases cannot exchange information and any comparisons must be done manually. Finally, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA in human resources and logistics. Few inspections of medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs--the Accredited Persons Inspection Program and the Pilot Multi-purpose Audit Program (PMAP). From March 11, 2004--the date when FDA first cleared an accredited organization to conduct independent inspections--through January 11, 2008, five inspections have been conducted by accredited organizations through FDA's Accredited Persons Inspection Program. An incentive to participation in the program is the opportunity to reduce the number of inspections conducted to meet FDA and other countries' requirements. 
Disincentives include bearing the cost for the inspection, particularly when the consequences of an inspection that otherwise might not occur in the near future could involve regulatory action. The Food and Drug Administration Amendments Act of 2007 made several changes to program eligibility requirements that could result in increased participation by manufacturers. PMAP was established on September 7, 2006, and as of January 11, 2008, two inspections had been conducted by an accredited organization through this program, which is more limited than the Accredited Persons Inspection Program. The small number of inspections completed to date by accredited third-party organizations raises questions about the practicality and effectiveness of establishing similar programs that rely on third parties to quickly help FDA fulfill its responsibilities.
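The estimated inspection intervals in the summary above can be restated as the implied share of establishments inspected each year, which makes the gap between the statutory pace and actual practice easier to see. A minimal Python sketch; the intervals are the estimates quoted in the text, and the conversion assumes a steady inspection pace, which is a simplification.

# Convert estimated inspection intervals (years between inspections) into the
# implied share of establishments inspected per year. The 2-year statutory
# cycle applies to domestic establishments only.
estimated_interval_years = {
    "domestic, high risk": 3,
    "domestic, medium risk": 5,
    "foreign, high risk": 6,
    "foreign, medium risk": 27,
}
statutory_interval_years = 2

for group, interval in estimated_interval_years.items():
    print(f"{group}: about {1 / interval:.0%} of establishments inspected per year")
print(f"statutory domestic pace: {1 / statutory_interval_years:.0%} per year")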
You are an expert at summarizing long articles. Proceed to summarize the following text: In 1996, the federal government spent $1.4 trillion in U.S. states and territories to procure products and services, to fund grants and other assistance, to pay salaries and wages to federal employees, to provide public assistance, and to fund federal retirement programs and Social Security, among other things. Some states rank relatively high on the per capita distribution of different types of federal dollars. Government reports indicate that in 1996, Maryland, Virginia, and Alaska were the only three states to rank among the top five in each of the following categories: (1) total federal expenditures, (2) total federal procurement expenditures, and (3) total salary and wage expenditures for federal workers. The only other state that ranked among the top 10 states in all these categories was New Mexico. Interest in the economic magnitude of defense and other federal expenditures in states has been amplified by concerns over anticipated outcomes of the post-Cold War drawdown. In hearings before the Joint Economic Committee of the 101st Congress, 12 state governors submitted to the leadership of the Senate and House a plan for responding to expected adverse economic impacts in states that were believed to be particularly vulnerable to reductions in defense spending. In 1992, President Bush issued Executive Order 12788, requiring the Secretary of Defense to identify the problems of states, regions, and other areas that result from base closures and Department of Defense (DOD) contract-related adjustments. The Office of Economic Adjustment is DOD’s primary office responsible for providing assistance to communities, regions, and states “adversely impacted by significant Defense program changes.” The federal government tracks defense-related and other federal spending and associated employment through various sources. Centralized reporting of this information is done by the Census Bureau in its Consolidated Federal Funds Report (CFFR) series. The CFFR includes the Federal Expenditures by State (FES) report and a separate two-report volume that presents information at the county and subcounty level. The FES report presents the most comprehensive information on federal expenditures at the state level that can actually be attributed to specific federal agencies or programs. Agencies involved in collecting and reporting various types of employment information include the Office of Personnel Management (OPM) and the Bureau of Labor Statistics. Expenditure information reported in the CFFR also appears in agency-specific publications or data sources. DOD reports information on its total procurement expenditures and the salaries and wages paid to DOD personnel, by state, in the Atlas/Data Abstract for the United States and Selected Areas. In compiling information for the CFFR, DOD’s procurement data are first sent to the Federal Procurement Data System (FPDS) and then sent to Census. Therefore, Census, DOD, and the FPDS can and do report DOD procurement expenditures. Federal expenditure and employment data are available to users in and outside the government and are regularly used in policy formulation and evaluation. DOD contractors, including the Logistics Management Institute, have used federal government data in support of their work for DOD on the economic impacts of base realignment and closure actions. 
The Office of Economic Conversion Information, a collaborative effort between the Economic Development Administration of the Department of Commerce and DOD, uses existing federal data to provide information to communities, businesses, and individuals adjusting to the effects of defense downsizing and other changing economic conditions. The Congressional Budget Office and the Congressional Research Service have also used DOD procurement expenditure data in examining the expected effects of planned reductions in the national defense budget. DOD uses its prime contract award expenditure data to track the status and progress of goals associated with contracts made to small businesses. Researchers at think tanks, universities, and state government offices also use government data in a wide array of research projects and publications. DOE and DOD military activities have contributed substantially to the economy of New Mexico for about 50 years. Government data show that between 1988 and 1996, New Mexico was ranked second, third, or fourth among U.S. states in per capita distribution of federal dollars. In terms of per capita federal procurement expenditures only, New Mexico was ranked first among U.S. states during 1988-94 and second in 1995-96. In 1996, New Mexico was ranked first among states in return on federal tax dollars, receiving $1.93 in federal outlays for every $1.00 in federal taxes paid. The state was also ranked first in return on federal tax dollars in 1995. In 1996, 5 of the 6 major federal facilities were among the top 10 employers in the state. This federal revenue comes largely from the six major federal facilities in New Mexico, including two DOE national laboratories, Los Alamos National Laboratory and Sandia National Laboratory; Cannon, Holloman, and Kirtland Air Force Bases; and White Sands Missile Range, a test range that supports missile development and test programs for all the services, the National Aeronautics and Space Administration (NASA); and other government agencies and private industry. New Mexico’s geography and climate, including relative isolation from major population centers, year-round good weather, and open airspace, have made the state attractive for some military activities. In May 1996, the Secretary of Defense and the German Defense Minister activated the German Air Force Tactical Training Center at Holloman Air Force Base in Alamogordo. The training opportunities provided by the vast airspace in and around Holloman and its proximity to Fort Bliss, Texas—the headquarters location for German air force operations in North America—were factors in Germany’s decision to invest in a tactical training center at the base. State officials estimate that the training center will result in a population increase to the Alamogordo area of about 7 percent and investment by Germany of $155 million by 1999. Services and trade are distinct components of New Mexico’s economy. In 1993, the largest employment sectors in New Mexico were services, government, and trade: these were reported as accounting for approximately 76 percent of the total average annual state employment. Businesses involved in trade and/or services accounted for 67 percent of all businesses in New Mexico in 1993. Revenue from the gross receipts tax is the largest source of tax revenue in New Mexico, and in 1996, gross receipts taxes from services and trade accounted for more than half of all gross receipts tax revenue.
DOE reports show that between 1990 and 1995, it made more expenditures in the services and trade sectors of the New Mexico economy. New Mexico Department of Labor projections indicate that by 2005, the services sector alone will account for about 41 percent of total employment while employment in the trade sector is projected to remain stable and government employment is expected to decline. The projections indicate that jobs in services and trade will account for 70 percent of the new jobs between 1993 and 2005. New Mexico state officials have been focusing on “achieving economic diversification to protect against dramatic negative changes in the state’s economy,” believed to be linked to changes in federal spending in the state. Efforts in 1996 to recruit select industries to the state have initially resulted in at least 7 businesses locating in New Mexico, creating 230 new jobs. In terms of other efforts, New Mexico was 8th among U.S. states in high-technology employment growth between 1990 and 1995. The single leading high-technology industry in the state is semiconductor manufacturing, which accounts for 34 percent of total high-technology jobs. Intel Corporation has three advanced computer chip manufacturing sites that employ at least 6,500 people, making it the state’s second-largest private sector employer and contributing to the growth in New Mexico’s high-technology employment. In 1995, Intel was also the leading manufacturing employer in the state. High-technology exports account for the largest percentage of New Mexico exports to other countries, with exports to Korea leading other nations. Currently, about 10 percent of all New Mexico manufacturers are exporting. The leading exporters in New Mexico are Intel, Motorola, and Honeywell Defense Avionics. A comparison of the percent change in New Mexico’s per capita income and total defense-related spending (DOE and DOD) in the state during 1990-94 shows that real growth occurred in per capita income, while total defense expenditures declined (see fig. 1). A comparison between percent real growth in New Mexico’s gross state product and total defense-related federal expenditures reveals the same pattern, suggesting that efforts to diversify the state’s economy may be having a positive effect (see fig. 2). Based on the average rate of growth in the gross state product during 1987-94, the Bureau of Economic Analysis identified New Mexico as the third-fastest-growing state. Available federal data provides a segmented and rough snapshot of federal money spent in states and the employment linked to those expenditures, which is relevant to gauging some trends and patterns. For example, government data indicates that in 1996, the federal government spent about $12 billion in New Mexico. Direct expenditures for procurement, salaries and wages for federal workers, and grants accounted for 60 percent, or about $7.3 billion, of the total. Direct payments to individuals, the single largest category of federal expenditures, accounted for approximately 37 percent, or about $4.4 billion, of total 1996 federal expenditures (see fig. 3). Appendix II includes additional descriptions of federal spending and employment in New Mexico. The top five agencies making procurement expenditures in New Mexico during 1993-96 were DOE, DOD, the Department of Interior, NASA, and the Postal Service.
The defense-related agencies (DOE and DOD), compared to the nondefense-related ones, accounted for 90 percent, or $14.1 billion, of the $15.5 billion total spent during 1993-96. Specifically, DOE accounted for 80 percent of the total federal defense-related procurement expenditures, or about $11.2 billion of the 1993-96 total of $14.1 billion. Between 1993 and 1996, the top five federal agencies that accounted for the largest dollar amount of expenditures to pay salaries and wages of federal workers in New Mexico were DOD; the Postal Service; and the Departments of Interior, Health and Human Services, and Veterans Affairs. Salaries and wages paid to federal employees of the defense-related agencies accounted for about $7 billion, or 54 percent, of the total $13 billion spent in New Mexico. Specifically, between 1988 and 1996 DOD accounted for about $6.5 billion, or 93 percent, of the $7 billion total defense-related federal salaries and wages. Payments to workers retired from defense-related agencies also accounted for more of the total annuities to retired federal workers living in New Mexico during 1990-96. Payments to retired defense-related federal workers accounted for $3.2 billion, or 68 percent, of the total $4.7 billion in annuitant expenditures. Payments to former DOD workers accounted for 98 percent of the total payments to retired defense-related workers. Figure 4 shows the percent of defense-related expenditures for procurement, federal workers’ salary and wages, and retirement payments accounted for by DOE and DOD, respectively. Between 1988 and 1996, the Departments of Defense, the Interior, Health and Human Services, Veterans Affairs, and Agriculture were the top five agencies in terms of total federal employees in New Mexico. Between 1988 and 1996, defense-related jobs were about 72 percent, or 300,000 jobs, of the total 420,000 federal jobs in New Mexico. Specifically, DOD accounted for 97 percent, or about 292,000, of these jobs over the period 1988-96. Thus, DOD federal jobs were more of the total federal jobs and more of the defense-related federal jobs in New Mexico. Federal retirees of defense-related agencies also comprised more of the retired federal workers living in New Mexico: 68 percent of the total between 1990 and 1996. Specifically, DOD accounted for 99 percent of all retirees from the defense-related agencies. Figure 5 shows the percent of defense-related jobs and retirees in New Mexico accounted for by DOE and DOD. The existing data provides information on federal employees only. This is an important point because although the overall ratio of DOD federal workers to DOE federal workers was 44:1 between 1988 and 1996, our research also shows that more of the DOE employment is linked to private contractors that manage and operate the laboratories and other DOE facilities than to the number of DOE federal employees. Private contractors working on government contracts are not considered or counted as federal employees. However, even when we compared the total DOE employment, which included direct DOE prime contractor, subcontractor, and federal employees, to the total DOD federal employment, DOD’s direct federal employment was higher than DOE’s in each year between 1990 and 1996. Of the DOD employment, more of the federal jobs were DOD military than DOD civilians.
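The shares reported above follow directly from the rounded dollar totals in the text. The Python sketch below recomputes them; because the inputs are rounded figures in billions of constant 1996 dollars, the results land close to, but not exactly on, the percentages cited.

# Recompute expenditure shares from the rounded totals quoted in the text
# (billions of constant 1996 dollars); small differences reflect rounding.
total_procurement_1993_96 = 15.5
defense_procurement_1993_96 = 14.1
doe_procurement_1993_96 = 11.2

total_salaries_1988_96 = 13.0
defense_salaries_1988_96 = 7.0
dod_salaries_1988_96 = 6.5

print(f"Defense share of procurement: {defense_procurement_1993_96 / total_procurement_1993_96:.0%}")           # about 91% (reported as 90%)
print(f"DOE share of defense procurement: {doe_procurement_1993_96 / defense_procurement_1993_96:.0%}")         # about 79% (reported as 80%)
print(f"Defense share of federal salaries and wages: {defense_salaries_1988_96 / total_salaries_1988_96:.0%}")  # about 54%
print(f"DOD share of defense salaries and wages: {dod_salaries_1988_96 / defense_salaries_1988_96:.0%}")        # about 93%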
Between 1988 and 1996, about 42 percent of the total DOD federal jobs in New Mexico were held by active duty military members, 33 percent were held by inactive duty military (national guard and reserves), and 25 percent were held by DOD civilians. Similarly, more of the federal wages were associated with active duty military. Active duty military members accounted for 55 percent, inactive members accounted for 5 percent, and DOD civilians accounted for 40 percent of the total salaries and wages between 1988 and 1996. A comparison of the occupations represented by the defense-related federal jobs in New Mexico indicates that during 1988-96 the largest number of jobs were blue-collar and technical. This finding, however, largely represents the patterns for the DOD active duty employment in New Mexico, for which technical and blue-collar jobs comprise about 70 percent of the total jobs. Among DOD civilian employees, the two categories that accounted for the largest number of jobs over the period 1988-96 were professional (23 percent of the total jobs) and blue-collar (20 percent of the total jobs). The two occupational categories that account for more of the DOE direct federal employment in New Mexico are administrative (30 percent of total jobs) and professional (37 percent of total jobs). Official federal data sources are useful for gaining a preliminary understanding of the composition of federal expenditures in states. However, fundamental characteristics of the federal data make it difficult to determine the direct economic impact of federal activities on states. For example, our analysis of defense-related expenditures and employment did not include information on DOD contractor employment because there is no official DOD or other federal source of such information. Federal government data sources provide insufficient evidence for determining where federal dollars are actually spent, how much is actually spent, and the number or type of jobs that the federal dollars directly generate because of numerous limitations in scope and coverage and in reporting requirements or procedures. Our related findings that pertain to the data sources used and reviewed in our work are summarized in tables 1 and 2. To gain further insights into the reliability of the federal government’s data, we focused on characteristics of existing DOD data. Although DOD’s procurement expenditure data (DD350) is used in broad policy contexts and used to evaluate the status of programs that are believed to be important to economic security, the form is not designed to provide information on all DOD expenditures in a single state or at the national level. Procurement contracts under $25,000 are not included, no information on DOD subcontracts of any value is included, and financial data related to classified programs may or may not be reported or be accurate. DOD acknowledges that the DD350 does not completely account for all procurement expenditures, and although this limitation is generally understood and acknowledged by informed users, the possible implications are not. We surveyed the top five DOD contractors in New Mexico to determine how much money they received in DOD prime contracts and subcontracts and compared their responses to DOD’s records (the DD350 data) of their total contracts. The comparisons revealed that in no case were the DOD records of the dollar value of contracts awarded to these companies the same as the contractors’ records.
Differences between DOD and contractors’ records ranged from $20 million for prime contracts to $80 million for total contracts. In some cases, the DOD records appeared to overstate the amount the contractors received, while in other cases the DOD records appeared to understate the amount. Our research suggests several possible reasons for the inconsistencies between contractor records and DOD records. For example, expenditures associated with procurement contracts can leak from a state’s economy if a company subcontracts part of the work elsewhere. One study reported that of $5.2 billion in DOD prime contracts received by McDonnell Douglas in St. Louis, Missouri, less than 3 percent, or $156 million, stayed in Missouri due to out-of-state subcontracting. However, from our survey of contractors in New Mexico we determined that leakages were more prevalent for certain types of procurement contracts. While our survey showed that, overall, more than 80 percent of the total DOD prime contract dollars remained in the state in every year between 1988 and 1996, it also showed that the businesses that predominantly received service contracts, rather than supply and equipment contracts (i.e., major hard goods/weapons), kept nearly all of the DOD contract money they received in the state. This is particularly relevant because other DOD data indicate that in every year between 1988 and 1996, DOD procurement contracts for services account for the largest dollar volume of contracts to New Mexico. Also, service contracts may be more likely to fall under DOD’s $25,000 reporting threshold and therefore be excluded from total expenditures as officially reported by DOD. Furthermore, injections of dollars from subcontracts with out-of-state firms or with other in-state firms are not tracked by DOD, yet would have been included in the contractors’ records. Finally, the DOD Inspector General reported in 1989 that the DD350 data had reliability problems due to instances of unreported contract obligations and other errors in reported data. The Inspector General made no recommendations and has not assessed the reliability and validity of the DD350 contract tracking system since then. The existing data that track defense-related employment are limited in their scope, coverage, and reliability. Among the most notable limitations in the data is the lack of a central or official source of data on private-sector employment associated with DOD contracts. Information on the number of jobs associated with particular defense contracts or weapon programs is repeatedly discussed in the media and in Congress. Further, DOD has stated that defense procurement dollars promote the creation of jobs. However, DOD officials have also indicated that they do not collect information on the job impacts of particular DOD budget decisions. To obtain information on the employment associated with defense contracts or the employment linked to particular defense programs, it is necessary to contact individual defense contractors and/or DOD system program offices directly. The contractor employment data we obtained from our survey of defense contractors in New Mexico is summarized in appendix III, along with other survey findings. The responses from the top four contractors who provided us data indicated that the total number of direct jobs associated with DOD contracts was approximately 19,200 during 1988-96.
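The leakage pattern described above reduces to simple arithmetic once the amount retained in-state is known. The Python sketch below uses the St. Louis example quoted in the text; only the two dollar figures come from the cited study, and the remainder is attributed generically to out-of-state subcontracting and other channels because the study's breakdown is not reproduced here.

# Leakage arithmetic for the McDonnell Douglas example cited in the text.
prime_contracts_received = 5_200_000_000   # DOD prime contract dollars received
amount_retained_in_state = 156_000_000     # reported as staying in Missouri

leaked_out_of_state = prime_contracts_received - amount_retained_in_state
retention_rate = amount_retained_in_state / prime_contracts_received

print(f"Retained in state: {retention_rate:.0%}")                        # about 3%
print(f"Leaked out of state: ${leaked_out_of_state / 1e9:.1f} billion")  # about $5.0 billion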
The total DOD federal employment (active duty, inactive, and civilians) in the state for the same period (1989 data included) was approximately 328,000. A comparison of employment data from three top DOE prime contractors to the data from the top four DOD prime contractors indicates that, over the period 1994-96, DOE had about eight prime contractor employees to every one DOD prime contractor employee in New Mexico. We also obtained employment and expenditure data for a sample of specific defense programs that were known to have some involvement with New Mexico contractors (see table 3). The available data indicate that the state of New Mexico receives relatively large amounts of federal dollars. Defense-related federal activities in the state have contributed to the development of the economy, and recent efforts to diversify the economic base appear linked to continued growth. The best available data indicate that in New Mexico DOE and DOD account for about 90 percent of all federal procurement spending (1993-96), 54 percent of expenditures for federal worker salary and wages (1988-96), 72 percent of all federal jobs in the state (1988-96), and 68 percent of all retired federal workers living in the state (1990-96). Specifically, DOE accounts for 80 percent of the defense-related procurement expenditures, and DOD accounts for 93 percent of the defense-related salary and wage expenditures, 97 percent of the defense-related federal jobs, and 99 percent of the federal workers retired from defense-related agencies and living in New Mexico. The largest component of DOE employment is private contractor employment, while the largest component of DOD employment is federal employment, namely active duty military members. On one hand, determining the full and complete economic magnitude of federal expenditures in states, whether defense or nondefense, and the related employment is not possible with existing data. Trying to reconcile differences among data sources and account for gaps or questionable data is very resource-intensive and does not necessarily yield benefits in precision or accuracy. On the other hand, the existing data are not without value, nor should the government necessarily strive for increased data collection that could actually entail more costs than benefits. The limitations in federal data may, in part, reflect the fact that data collection trails behind changes in federal policy or shifts in policy relevance. Those who rely on federal data need to be alert to their drawbacks and exercise discretion when using them. In oral comments on a draft of this report, DOD concurred with our findings and conclusions. It also provided several technical comments, which we incorporated in the text where appropriate. In conducting our work, we contacted and interviewed officials and experts from federal and state government offices and the private sector. Because the scope of the work covered all federal expenditures and related employment in New Mexico over an 8-year period, there was a large range and number of contacts and outreach efforts we made in completing our work. We made over 50 contacts throughout federal and state governments and the private sector. 
Our final results were produced from databases from four separate federal agencies; our survey of New Mexico defense contractors encompassing 8 years of financial and business information; information obtained from a review of more than 30 publications; and information we obtained from numerous documented interviews with key officials. A list of the offices we contacted is in appendix I. To determine the characteristics of the New Mexico economy and recent changes in the economy, we reviewed and analyzed economic data and information we obtained from interviews with New Mexico state officials, federal government officials, and available federal and state data sources, including the Bureau of Economic Analysis and the Bureau of Business and Economic Research at the University of New Mexico. To determine the direct defense-related and nondefense-related federal expenditures and employment in New Mexico over the period 1988-1996, we contacted multiple federal offices and obtained official data from DOD and DOE. We obtained data on all other nondefense-related federal expenditures from the Census Bureau. All available data on DOD and DOE expenditures were categorized as defense-related. We obtained total nondefense-related employment data from OPM’s Central Personnel Data File. All expenditure figures were adjusted for inflation and are presented in constant 1996 dollars. Appendix II contains the complete overview and figures depicting our findings related to direct federal expenditures and employment in New Mexico. To determine the extent to which available government data provides reliable information on defense spending and employment, we evaluated the qualities of the existing federal data. We reviewed technical documentation for the sources used, interviewed agency officials about the data sources, conducted crosschecks of data that appeared in multiple sources but had been derived from the same source, and in the case of DOD procurement expenditures, compared the results of DOD data to our survey results. Survey results are discussed in appendix III. Given the outcome of our review, federal data limitations and data reliability concerns are discussed in our findings and reflected in the report’s conclusions. Our work was conducted between November 1996 and October 1997 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from its issue date. At that time, we will send copies of this report to other interested congressional committees and members. Copies will also be made available to others upon request. Please contact me at (202) 512-3092 if you or your staff have any questions concerning this report. Major contributors to this report were Carolyn Copper, John Oppenheim, and David Bernet.
Professional Aerospace Contractors Association of New Mexico, Albuquerque, New Mexico
Intel Corporation, Albuquerque, New Mexico
American Electronics Association, Santa Clara, California
Logistics Management Institute, McLean, Virginia
Academy for State and Local Governments, Washington, D.C.
National Council of State Governments, Washington, D.C.
National Legislative Council, Washington, D.C.
National Governors Association, Washington, D.C.
RAND, Washington, D.C.
This appendix presents 1988-96 (1) trends in total direct federal expenditures and employment in New Mexico and within specific spending categories, (2) defense-related and nondefense-related expenditures and employment, and (3) the Department of Energy’s (DOE) and the Department of Defense’s (DOD) share of the defense-related expenditures and employment. We used existing databases and a survey on how much money is directly spent and how many people are directly employed to determine expenditures and employment. We did not assess the indirect or induced effects of federal expenditures and employment. All expenditure data were adjusted for inflation and are presented in constant 1996 dollars. Data for all years were not always available. Federal expenditures in New Mexico fluctuated between about $10 billion and $12 billion, 1988 through 1996. The highest level of spending occurred in 1996 (see fig. II.1). Figure II.1: Federal Expenditures in New Mexico (1988-96) This increase in federal expenditures for New Mexico is consistent with nationwide trends. Total federal employment in New Mexico generally increased between 1988 and 1994, then declined to 1996. Total employment in 1996 is the lowest level of any year in the period (see fig. II.2). The decline in federal employment in New Mexico in the last several years is consistent with trends in declining nationwide federal employment. Figure II.2: New Mexico Federal Employment (1988-96) Figure II.3 shows the specific expenditure trends in procurement, grants, salaries and wages for federal workers, and direct payments to individuals. Figure II.3: Total Federal Spending on Procurement, Grants, Federal Employee Salaries and Wages, and Direct Payments in New Mexico (1988-96) Procurement expenditures in New Mexico have generally declined over time but did increase between 1989 and 1992. In the 1988-96 time frame, procurement expenditures were at their lowest in 1996. Expenditures on grants and direct payments have increased over time and have not shown periods of decline. This is consistent with national trends. Federal salary and wage trends are marked by small increases over time with periods of stability following an increase. Defense-related procurement expenditures far exceeded nondefense-related procurement expenditures during 1993-96. But both types of expenditures have been declining (see fig. II.4). The decline in defense-related expenditures is consistent with overall trends in declining DOD and DOE budgets. Figure II.4: Defense-Related and Nondefense-Related Federal Procurement Expenditures in New Mexico (1993-96) Nondefense-related agencies accounted for more of the expenditures for federal grants to New Mexico (see fig. II.5). The top five agencies in terms of expenditures on federal grants to New Mexico were the Departments of Health and Human Services (HHS), Transportation, Interior, Agriculture, and Education. Expenditures on nondefense-related grants were 99 percent of the total grant expenditures in each year between 1988 and 1996. Figure II.5: Defense-Related and Nondefense-Related Federal Grant Expenditures in New Mexico (1988-96) Defense-related agencies accounted for more of the total salaries and wages for federal workers than nondefense-related agencies between 1988 and 1996 (see fig. II.6). 
Figure II.6: Salaries and Wages to Defense-Related and Nondefense-Related Federal Workers in New Mexico (1988-96) Between 1988 and 1993 total expenditures on salaries and wages for nondefense-related workers increased steadily, slowly declining in the last 4 years. On the other hand, salary and wage expenditures for defense-related workers generally declined between 1988 and 1993 but increased slightly between 1995 and 1996. Salaries and wages were at their highest in 1996 for defense-related workers and at their highest in 1993 for nondefense-related federal workers. It is not possible to make clear federal agency distinctions in direct payment expenditures. These expenditures are commonly reported by federal program, not by federal agency. Given the reporting criterion used, we determined which federal program accounted for most of the direct payments in New Mexico. In some but not all cases, this information is sufficient to determine which federal agency accounted for most of the expenditures. Programs administered by HHS accounted for over 50 percent of the total direct payment expenditures in New Mexico in each year between 1988 and 1996: the average was 63 percent (see fig. II.7). The programs included in the HHS roll-up include Social Security, Medicare, and Supplemental Security Income. Figure II.7: Distribution of Federal Direct Payments in New Mexico, by Federal Program (1988-96) Payments for federal retirement and disability made up the second largest category of direct payments in New Mexico in each year between 1988 and 1996. On average, these payments accounted for 18 percent of all direct payments made in New Mexico during 1988-96. The Food Stamp Program, administered by the Department of Agriculture, on average, accounted for 5 percent, and direct payments to individuals associated with all other programs, on average, accounted for 14 percent of the total direct payments over the same time period. We could not determine the breakdown between the defense-related and nondefense-related distribution of federal retirement payments directly from the Census data. Therefore, we obtained additional data from DOD and the Office of Personnel Management (OPM). Figure II.8 shows that payments to workers retired from the defense-related agencies account for the majority—on average 68 percent—of the total annuities for retired federal workers in New Mexico between 1988 and 1996. Total annuities for defense and nondefense-related retired federal workers have increased over time. Figure II.8: Total Annuities for Federal Workers Living in New Mexico and Retired From Defense-Related and Nondefense-Related Agencies (1988-96) Federal workers from the defense-related agencies accounted for the majority of the total federal employment in New Mexico during 1988-96 (see fig. II.9). Federal jobs in the defense-related agencies, on average, accounted for 72 percent of the total federal jobs in New Mexico. Total federal employment declined by approximately 4,000 jobs between 1992 and 1996; about 84 percent of these jobs were in defense-related agencies. Figure II.9: Defense-Related and Nondefense-Related Federal Employment in New Mexico (1988-96) Defense-related agencies in New Mexico account for about 68 percent of the federal retirees, on average, between 1990 and 1996. The number of federal workers retired from defense and nondefense-related agencies and living in New Mexico has increased over time.
Figure II.10: Federal Retired Workers From Defense and Nondefense-Related Agencies Living in New Mexico (1988-96) The defense-related agencies in New Mexico accounted for the majority of procurement expenditures, total annuities for retired federal workers, and salaries and wages for federal employees. In figures II.11, II.12, and II.14, we show the trends in the DOD and DOE share of the expenditures in each of these categories. We also show the number of DOD and DOE federal retirees in New Mexico (see fig. II.13). Between 1993 and 1996, DOE accounted for more of the defense procurement dollars that went to New Mexico than DOD (see fig. II.11). Consistent with overall declining DOE and DOD budgets, DOE and DOD procurement expenditures in New Mexico have declined in the last several years. Figure II.11: DOD and DOE Procurement Expenditures in New Mexico (1993-96) Figure II.12 shows that payments to DOD retired federal workers living in New Mexico account for most of the total annuities to federal workers retired from defense-related agencies between 1990 and 1996. On average, annuities to retired DOD workers accounted for 98 percent of total annuities between 1990 and 1996. Figure II.12: Annuities to Workers Retired From DOD and DOE and Living in New Mexico (1990-96) Also, more former DOD than DOE federal employees were living in New Mexico between 1990 and 1996 (see fig. II.13). Figure II.13: DOD and DOE Retired Federal Workers in New Mexico (1990-96) The increase in retired DOD workers in New Mexico is consistent with an overall increase in the number of retired active duty military members and DOD civilians. Figure II.14 shows that DOD also accounts for nearly all of the salary and wage expenditures for federal employees of defense-related agencies. Figure II.14: DOD and DOE Federal Employee Salary and Wage Expenditures in New Mexico (1988-96) On average, DOD accounted for 93 percent of the defense-related salaries and wages for federal employees. The total amount of DOD and DOE salary and wage expenditures has fluctuated some over the years, but no sharp increases or decreases have occurred. DOE mostly employs prime contractor employees, who are not counted as federal employees; thus, their numbers are not included in federal data. DOE data we obtained indicates that the salaries and wages for DOE prime contractor employees in New Mexico are greater than those of DOD federal employees in the state. For example, between 1990 and 1994 the total salaries and wages for DOD federal employees were about $4 billion and those for DOE prime contractors were about $6 billion. Comparable figures on the total compensation to DOD prime contractor employees in New Mexico were not available. However, the data we obtained from our survey of the top New Mexico contractors shows that the total compensation to their employees was $332 million between 1990 and 1994, or about $66 million per year. Defense-related federal employment in New Mexico is higher than nondefense-related employment. In this section, we show the DOD and DOE portions of defense-related employment over time, including DOD’s and DOE’s numbers and types of occupations. On average, DOD accounted for 97 percent of the total defense-related federal employment in New Mexico between 1988 and 1996 (see fig. II.15). Figure II.15: DOE and DOD Employment in New Mexico (1988-96) In each year between 1988 and 1996, active duty military members were the single largest group of DOD federal employees in New Mexico.
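The per-year magnitudes implied by the 1990-94 salary and compensation totals cited above can be checked by simple division. A minimal Python sketch; treating 1990-94 as five calendar years is an assumption, and the dollar amounts are the rounded totals quoted in the text, in constant 1996 dollars.

# Average annual amounts implied by the 1990-94 totals quoted in the text.
years = 5  # 1990 through 1994, treated as five calendar years (an assumption)

totals_1990_94 = {
    "DOD federal employees (salaries and wages)": 4_000_000_000,
    "DOE prime contractor employees (salaries and wages)": 6_000_000_000,
    "surveyed top DOD contractors' employees (compensation)": 332_000_000,
}

for label, total in totals_1990_94.items():
    per_year_millions = total / years / 1_000_000
    print(f"{label}: about ${per_year_millions:,.0f} million per year")

The last line of output, roughly $66 million per year, is the per-year figure implied by the $332 million compensation total reported by the surveyed contractors.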
Inactive duty military and DOD civilian employees, respectively, accounted for the second and third largest component of DOD federal employment (see fig. II.16). Figure II.16: DOD Active, Inactive, and Civilian Employment in New Mexico (1988-96) Active duty and inactive duty military members, and DOD civilians ranked first, third, and second, respectively, in accounting for the largest share of salary and wages for DOD federal employees in New Mexico from 1988 to 1996 (see fig. II.17). Figure II.17: Salary and Wages for DOD Active and Inactive Duty Members and DOD Civilians in New Mexico (1988-96) Between 1988 and 1996 more of the DOD active duty military jobs in New Mexico were blue collar and technical compared to administrative, clerical, white collar, or professional job occupations (see fig. II.18). Figure II.18: Job Occupations of DOD Active Duty Military in New Mexico (1988-96) The job occupations of DOD civilians were more evenly dispersed across categories than DOD military jobs. Professional job occupations accounted for the most DOD civilian jobs in New Mexico between 1988 and 1996 (see fig. II.19). Figure II.19: Job Occupations of DOD Civilians in New Mexico (1988-96) The majority of DOE federal jobs in New Mexico between 1988 and 1996 were professional and administrative (see fig. II.20). Figure II.20: Job Occupations of DOE Federal Employees in New Mexico (1988-96) The principal purpose of our survey was to determine and characterize the flow of defense dollars to contractors and to illuminate and quantify the limitations of existing data sources that document defense spending in states. For our survey sample, we selected contractors who were among the top five in terms of the total dollar amount of DOD prime contracts awarded in fiscal year 1996. Time and resource constraints prevented us from surveying every business that was awarded a defense contract and performed work in New Mexico. For example, in 1996 alone, 471 businesses were awarded DOD contracts exceeding $25,000 for work principally done in New Mexico. We obtained DOD’s DD350 data to determine the total value of DOD prime contracts awarded to all businesses in 1996 with the principal place of work in New Mexico. From this population we selected five contractors: Honeywell, DynCorp, EG&G, Kit Pack Company, and Lockheed Martin. In 1996, prime contracts to these businesses accounted for 26 percent of the total value of all DOD prime contracts awarded to businesses in New Mexico. In the period covered by our survey, that is, 1988-96, the percentage of total DOD prime contract awards accounted for by the top five New Mexico contractors ranged from 26 to 46 percent. Different companies have been in the list of the top five over the years. However, over the survey period, Honeywell and DynCorp were consistently among the top five. Contractors were asked to complete several questions about DOD contracts they were awarded as a prime and subcontractor between 1988-96. We asked them to indicate the total value of all DOD contracts received, the dollar amount of contract work that was subcontracted or was interdivisional work, the amounts subcontracted in-state and out-of-state, the amount of salary and wages for all contracts completed by the contractor and by subcontractors, and the number of full-time equivalent (FTE) positions for work completed by the contractor and for subcontractors. 
As a group Honeywell, Lockheed Martin, DynCorp, and EG&G are large, diversified corporations with business establishments physically located in New Mexico but actual corporate headquarters located elsewhere in the country. Kit Pack is a relatively smaller company, with its business headquarters and all operations located in New Mexico. During the period of time covered by our survey, Honeywell’s principal DOD work in New Mexico was research, development, and testing and evaluation services for military aircraft and the manufacturing of aircraft avionics components. In 1996, DOD awarded prime contracts to Honeywell to provide automatic pilot mechanisms; flight instruments; and research, development, and testing and evaluation services related to aircraft engine manufacturing, among other things. Its survey data was completed by staff at Honeywell’s business establishment in Albuquerque. DynCorp is a large professional and technical services firm. DynCorp’s principal work in New Mexico is providing business services, which include aircraft maintenance and repair at military bases, and operations services provided at government-owned facilities. In 1996, DOD awarded prime contracts to DynCorp to provide maintenance and repair services to equipment and laboratory instruments, telecommunications services, and other services associated with operating a government-owned facility at White Sands Missile Range, among other things. DynCorp’s survey data was completed by staff at the corporate headquarters in Reston, Virginia. DynCorp’s responses were based on financial data for DynCorp and its subsidiaries that also operate in New Mexico (e.g., Aerotherm). EG&G’s principal DOD work in New Mexico is providing communications equipment; operating radar and navigation facilities at Holloman Air Force Base; and doing advanced research, development, testing and evaluation work. In 1996, DOD awarded prime contracts to EG&G to provide advanced development and exploratory research and development (including medical) services at Kirtland Air Force Base and to operate radar and navigation facilities at Holloman Air Force Base, among other things. EG&G’s survey data was completed by staff at the Albuquerque office and includes data only for EG&G Management Systems. Kit Pack Company is located in Las Cruces, south of Holloman Air Force Base near White Sands Missile Range. Kit Pack’s principal DOD work in New Mexico is providing aircraft spare parts and modification kits. In 1996, DOD awarded prime contracts to Kit Pack to provide aircraft hydraulics, vacuum and deicing system components, airframe structural components, and torque converters and speed changers, among other things. After it completed and returned the survey to us, Kit Pack officials informed us that it was currently operating under Chapter 11 bankruptcy due to the termination for default of an Army contract. Kit Pack had filed an appeal of the termination, which was pending when we completed our work. The company indicated that it has seen a severe reduction in the number of DOD contracts awarded since it filed for bankruptcy. Kit Pack staff in Las Cruces completed our survey. We were unable to obtain survey information from Lockheed Martin. Company officials indicated that they did not have the type of information we requested broken out by states or geographical locations. 
In a follow-up meeting, company officials provided us with information on their total expenditures to New Mexico suppliers, annual payroll for their employees in New Mexico and the number of employees in the state between 1992 and 1996. The information was developed by staff in Lockheed Martin’s Washington operations office. We could not use Lockheed Martin’s information because it was not broken out by specific federal agencies, nor could we determine whether the total expenditures, payroll, or employment were associated with government-funded work or whether they were part of the company’s commercial business. Over the course of several meetings and conversations with Lockheed Martin officials, we obtained detailed supplier expenditure information from the Lockheed Martin Consolidated Procurement Program which was broken out by specific Lockheed Martin business units. Company officials said that this would provide an indication of the type of business activity (e.g., DOD, DOE, NASA, and commercial) that the expenditures were made for. In addition, we were given information on corporate sales and payroll by staff in Lockheed Martin’s tax department. We discovered several discrepancies in the company’s financial information. When we discussed these with company officials, they indicated that the data provided by the Washington operations office were “less reliable” than other data. Company officials also indicated that their record-keeping had been challenged by the recent merger/acquisition activities (i.e., Lockheed and Martin Marietta in 1995 and the Loral acquisition in 1997). Lockheed Martin officials said that different companies had different information systems and that some information may have been lost during the recent merger. Our survey was not designed to specify or measure the exact amount of all DOD contract dollars that flow into New Mexico. Rather, its purpose was to reflect the nature of the flow of DOD prime and subcontract dollars to a sample of top New Mexico contractors and to compare these results to existing DOD data. Among the four contractors that completed the survey, none indicated that they could not provide reliable responses to the survey items. The most common limitation was the lack of information on FTEs and wages for subcontracted work. Specifically, contractors indicated the following limitations in their responses to us. Honeywell provided information on the dollar amount of the orders it received during the calendar year and estimates of subcontracted work and employees and wages associated with subcontracted work. Kit Pack did not have FTE or wage information on its subcontractors and indicated that it no longer had payroll records for its own staff for 1988, 1989, or 1991. EG&G did not have records for FTEs and wages associated with subcontracted work. DynCorp did not have information on its subcontractors prior to 1993. To report fiscal year information, DynCorp had to convert some company financial data that was not identified by fiscal years. We treated all survey data received from contractors as proprietary. Therefore, in discussing survey findings, contractor names are not used and data is aggregated to protect business-sensitive information. All dollars were adjusted for inflation and are constant 1996 dollars. All of the contractors surveyed were DOD prime contractors. Two of the four contractors we surveyed indicated that they were also DOD subcontractors. 
The total amount of DOD prime contract and subcontract awards has declined over the 9-year period. The totals reported for 1996 were the lowest of all the years. For the 9-year period of our survey, expenditures for DOD prime contracts ($1.5 billion) were roughly the same as for subcontracts ($1.4 billion). However, in 5 of the 9 years, the contractors received more subcontract than prime contract dollars (see fig. III.1). Figure III.1: DOD Contracts Awarded to the Top Four New Mexico Defense Contractors (1988-96) Between 1988 and 1996, the percent of prime contract dollars that remained in-state was consistently greater than 80 percent (see fig. III.2). The 9-year average was 83 percent. Figure III.2: Contract Dollars Received by the Top Four New Mexico Defense Contractors That Stayed In-State (1988-96) Although the average percent of prime contract dollars that remained in New Mexico was high, examination of specific contractor data indicates important exceptions. For two of the contractors, the survey results indicated that nearly 100 percent of the prime contract dollars they received remained in-state between 1988 and 1996. However, one contractor’s data shows that less than 50 percent of prime contract dollars received remained in-state each year between 1988 and 1996. Approximately 70 percent of the total prime contract awards received by another contractor remained in-state for all years (see fig. III.3). Figure III.3: Differences in Percent of Prime Contract Dollars That Remained In-State (1988-96) For the two contractors that were also DOD subcontractors, a slightly smaller percentage of their subcontract dollars remained in-state compared to the percentage of their prime contract dollars (see fig. III.4). On average, 75 percent of subcontract dollars remained in-state between 1988 and 1996. Figure III.4: Subcontract Dollars That Stayed In-State (1988-96) The contractors indicated that the majority of jobs supported by their DOD prime contracts remained in-state. On average, 73 percent of the jobs remained in-state during 1988-96. The lowest yearly percentage was 66 percent in 1989 and 1990, and the highest was 83 percent in 1996 (see fig. III.5). Figure III.5: DOD Prime Contract and Subcontract Jobs That Stayed In-State (1988-96) On average, 73 percent of the total wages for employees working on DOD prime contracts and subcontracts remained in-state between 1988 and 1996 (see fig. III.6). From 1988 to 1996 the percent of wages that remained in-state generally increased. Figure III.6: Wages for DOD Prime Contract and Subcontract Work That Stayed In-State (1988-96) We compared our survey results to DOD’s records of the total amount of contract awards received by the contractors between 1994 and 1996. DOD sources collect and report information only on prime contracts, while our survey collected information on DOD prime contracts and subcontracts. Thus, we expected that DOD’s records and the contractors’ would be different, as was revealed in the survey. Therefore, we compared DOD’s records of total prime contracts to our survey results on the amount of prime contract dollars received by the contractors that remained in New Mexico. However, to shed further light on and quantify, where possible, the limitations in existing DOD data, we also compared the amount of total contracts, defined as in-state prime contracts and subcontracts, to the DOD totals, defined as prime contracts (see fig. III.7).
The overall comparison between the contractors’ records and DOD’s records of total prime contract amounts shows that DOD records can both overstate and understate the total amount of prime contracts that actually end up in a state’s economy. In 1994, the contractors’ records show that $93.6 million in DOD prime contract work was done in New Mexico. On the other hand, DOD’s records indicate that the contractors received $144.9 million in prime contracts, representing a possible $51 million, or about a 54-percent overstatement. However, in 1995, the contractors’ records showed that $143.3 million in DOD prime contract work was done in the state, whereas DOD’s records show that the businesses received $117.2 million, representing a possible $26-million, or about an 18 percent understatement. As expected, a comparison of the contractors’ records of the total contracts (in-state prime contracts and in-state subcontracts) to the existing DOD records of total prime contracts shows that the totals reported by the contractors were consistently greater than the totals reported in DOD’s records.
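The comparison above can be reproduced from the reported figures. The Python sketch below uses the 1994 and 1995 amounts quoted in the text, in millions of constant 1996 dollars; expressing each difference as a percentage of the contractors' records is an assumption about how the percentages in the text were framed, and the results differ slightly from the reported figures because the inputs are rounded.

# Overstatement/understatement of DOD prime contract records relative to the
# contractors' records of in-state prime contract work (millions of dollars).
comparisons = {
    1994: {"contractor_records": 93.6, "dod_records": 144.9},
    1995: {"contractor_records": 143.3, "dod_records": 117.2},
}

for year, amounts in comparisons.items():
    difference = amounts["dod_records"] - amounts["contractor_records"]
    pct = difference / amounts["contractor_records"]
    direction = "overstatement" if difference > 0 else "understatement"
    print(f"{year}: ${abs(difference):.1f} million, about {abs(pct):.0%} {direction}")

With these rounded inputs the sketch yields about 55 percent for 1994 (reported as about 54 percent) and about 18 percent for 1995.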
Pursuant to a congressional request, GAO examined defense and other federal spending in the state of New Mexico, focusing on: (1) characteristics of New Mexico's economy and changes in it; (2) the amount of direct defense-related and nondefense-related federal spending in the state and the direct federal employment associated with both, over time; and (3) the extent to which available government data can provide reliable information on defense spending and employment. GAO noted that: (1) New Mexico is home to two Department of Energy (DOE) national laboratories and four Department of Defense (DOD) military installations, among other federal activities; (2) state officials indicate that New Mexico's economy is "heavily dependent" upon federal expenditures; (3) in 1996, New Mexico was fourth among states in the per capita distribution of federal dollars and first in return on federal tax dollars; (4) while parts of the state have relatively strong economies, in 1994 New Mexico's poverty rate was the second highest in the country and its per capita income was 48th in the country; (5) although defense-related spending has been declining, New Mexico's gross state product and total per capita income have been increasing, indicating that the economy is growing and that efforts to diversify the economy may be having a positive effect; (6) one can learn several things from the available federal government expenditure and employment data for New Mexico; (7) DOD and DOE expenditures have consistently represented the largest share of all federal expenditures for procurement and salaries and wages in New Mexico; (8) defense-related employment has also consistently represented the largest share of total federal employment in New Mexico, including retired federal workers; (9) DOD and DOE do not contribute equally on types of defense-related spending or defense-related employment, revealing relevant distinctions between the types of direct economic contributions made by these agencies; (10) DOE contributes most in federal procurement expenditures and private contractor employment; (11) DOD contributes most in federal salaries and wages and federal employment, namely active duty military and retired employees; (12) existing government data, however, contributes to only a partial understanding of the type of federal dollars that enter a state's economy and the employment supported by the expenditures; (13) GAO's research based on New Mexico shows that the data have limitations that severely restrict the ability to determine the total amount and distribution of federal funding and jobs in the state; (14) key limitations include: (a) reporting thresholds that exclude millions in procurement expenditures; (b) the reporting of the value of an obligation, rather than the money actually spent; (c) the absence of any comprehensive source of primary data that systematically identifies private sector employment associated with federal contracts; and (d) DOD's lack of data on subcontracts; and (15) since these data sources are not unique to New Mexico, these limitations would also apply to assessments of other states.
You are an expert at summarizing long articles. Proceed to summarize the following text: TAPP was established originally by section 232 of the Small Business Administration Reauthorization and Amendments Act of 1990 (P.L. 101-574). In October 1991, in section 609 of Public Law 102-140, Congress repealed the earlier authorization and replaced it with the current program. TAPP was intended from the start to be a pilot program; the law authorized funding for 4 years, not to exceed $5 million a year. In mid-1994, the Congress decided that it would not reauthorize TAPP beyond fiscal year 1995. TAPP was modeled after Minnesota Project Outreach, a state program that provided small businesses with access to computerized databases and technical experts. Services for Project Outreach were provided under contract by Teltech Resource Network Corporation (Teltech), a Minnesota-based, national supplier of technical and business knowledge. The Minnesota program was regarded as a success in providing user-friendly services to small businesses that would not otherwise have the means or the ability to obtain needed technical information. Its success provided the stimulus for the TAPP legislation. The law made three agencies responsible for administering TAPP. The Small Business Administration (SBA) was authorized to make grants to competing Small Business Development Centers (SBDC), which had to obtain matching contributions at least equal to the awards. SBA was to coordinate with the National Institute of Standards and Technology (NIST) and the National Technical Information Service in establishing and managing the program. According to NIST officials, only SBA and NIST took an active role in program administration because the National Technical Information Service is an agency whose primary role is to collect and disseminate scientific, technical, engineering, and business-related information generated by other federal agencies and foreign sources. In early 1991, NIST and SBA signed a memorandum of understanding that resulted in NIST's implementing TAPP on behalf of and in close cooperation with SBA. SBA administers TAPP through its Office of Small Business Development Centers, which is responsible for setting policies, developing new approaches, monitoring compliance, and improving operations for the SBDCs. NIST manages and monitors TAPP through its Manufacturing Extension Partnership (MEP), a network of organizations to help American manufacturers increase their competitiveness nationally and internationally through ongoing technological deployment. The SBDCs, which provide counseling and training to existing and prospective small businesses, were chosen as the local level through which TAPP services would be provided. As of July 1994, there were SBDCs and subcenters at 750 geographically dispersed locations nationwide, as well as in Puerto Rico and the Virgin Islands. Counselors at the SBDCs are knowledgeable in the needs of small businesses and are experienced in working with them. The first TAPP grants were made for fiscal year 1992 and went to SBDCs in Maryland, Missouri, Oregon, Pennsylvania, Texas, and Wisconsin. Oregon dropped out of TAPP after fiscal year 1993 when it was not able to obtain matching funds; however, it has continued to operate without federal funding on a reduced scale. The remaining five centers continued to receive TAPP funds through fiscal year 1995. As shown in appendix II, federal grants to the six TAPP centers for the 4 years of the program totaled $3,537,000.
While the centers have differed somewhat in the way they chose to deliver services, the basic model for each center is the same. First, the center offers its clients access to a variety of on-line databases. These databases cover technical areas such as product development, patents, and manufacturing processes as well as nontechnical areas, such as market research and vendor listings. Secondly, the center links the clients with experts who can provide specific assistance. Typically, services are provided for free or at a nominal charge and may be augmented by other SBDC programs and services. Appendixes IV through IX describe each of the current and former TAPP centers. In our first report on TAPP, we raised concerns about the evaluation methodology for measuring the program’s impact. Although NIST subsequently identified a strategy to address these concerns, this issue is now moot because the program will not be funded past fiscal year 1995. (See app. III.) In our first report, we noted that TAPP had started slowly and that some of the centers, while making progress, were not operating in accordance with the statements of work in their proposals. This is no longer the case. In the program’s fourth and final year, each of the five centers still in the program is fully operational. While the centers differ in some important respects, in many ways they have become more nearly alike in the types of services offered and the methods of delivering them. SBA and NIST have not evaluated the impact on small business productivity and innovation either nationwide or within the individual states where TAPP centers were located. According to the limited responses to client satisfaction surveys, however, the businesses that used TAPP services were pleased with the services they received. Also, TAPP center officials were pleased with the way their individual programs had developed and provided examples of projects that had been successful. At the time of our review, each of the TAPP centers planned to continue its program beyond fiscal year 1995. However, most officials within the centers were uncertain about how they would be organized, what services they would provide, or where they would obtain funding. Currently, the TAPP centers primarily serve clients with a need for new technology, many of whom are just getting started in business. Overall, the five TAPP centers still in the pilot program served approximately 1,840 clients in fiscal year 1994, ranging from 230 in Missouri to 445 in Wisconsin. According to Nexus Associates, a NIST consultant, 59 percent were manufacturers, 21 percent were service companies, 14 percent were wholesale and retail companies, and 7 percent represented other segments of the small business community. Forty percent of the clients had not yet established a business, and another 26 percent were involved in new ventures. While there were some “repeat” clients, 89 percent undertook only one project during the period. The five centers responded to 2,843 information requests during fiscal year 1994, ranging from 283 in Missouri to 847 in Pennsylvania. According to Nexus Associates and as shown in table 1, these projects were evenly divided between technical and nontechnical information, although there were differences among the centers. A more detailed breakdown of the services showed an emphasis on product or process information and market research. Database searches, rather than the use of technical experts, represent the primary type of service provided by the TAPP centers. 
As shown in table 2, for example, 65 percent of the projects in fiscal year 1994 were for literature searches. Only 9 percent of the projects were for expert and/or technical counseling. The impact TAPP has had on business productivity and innovation cannot be measured because there are no substantive data. Moreover, because NIST cancelled its plans for evaluating the program’s impact after funding was discontinued, no such determination will likely be made. NIST continues to collect data on client satisfaction; however, the surveys are of limited value because of the low response rate. For example, in fiscal year 1994 the response rate of the clients surveyed ranged from a low of 9 percent in Pennsylvania to a high of 46 percent in Wisconsin. According to an analysis by Nexus Associates, those clients that did respond to the satisfaction survey for fiscal year 1994 indicated a high degree of satisfaction with TAPP services. The vast majority of those responding ranked the services they received as “good” to “excellent” and would recommend TAPP to other companies. Similarly, more than 90 percent of the respondents said that their requests for assistance received prompt attention. More than 80 percent said that the representatives who assisted them possessed the necessary skills. The overwhelming majority of the clients rated as “good” to “excellent” the helpfulness of the representatives and the relevance, currentness, and conciseness of the information received. The estimated value of the services provided varied widely among the centers and their clients. The median value, according to the clients’ estimates in their survey responses, ranged from $101 to $150 among the centers; however, 19 percent of the clients responding to the survey placed a value of more than $500 on the services they received. Those clients valuing the services at more than $500 tended to (1) be new businesses, (2) focus on expert searches rather than vendor searches, and (3) request market research information rather than management or vendor information. Two-thirds of the clients responding said that they were unlikely to have been able to obtain the information they received without TAPP. However, the level of satisfaction depended on the type of information requested. For example, while the majority of companies receiving patent information believed they could have received the information elsewhere, the majority of companies receiving management or vendor information believed it was unlikely they could have found this information elsewhere. Officials at the five centers still participating in TAPP told us they were satisfied with the programs they had developed and believed that they were providing valuable services to their client businesses. While they could provide no statistics on the overall impact, they did provide examples of projects perceived as successful, such as the following: An environmental services company in Missouri feared it was infringing on an existing U.S. patent for monitoring gasoline contamination of groundwater around service stations and storage tanks. As part of an overall action plan, the TAPP center conducted a search of the technology that predated the patent. The company resolved the issue and was able to continue to market its services to test for leaks from storage tanks. TAPP center personnel also referred the company to other SBDC personnel who were able to assist it in preparing three Small Business Innovation Research project proposals to SBA. 
A Wisconsin manufacturer risked losing a major customer because the liquid crystal displays it was making were breaking too easily. Through a literature search by the TAPP center, the manufacturer identified a number of new databases and obtained information that it subsequently incorporated into its product improvement process. The company believes that the information helped it save an account worth approximately $2 million over a 2-year period. A Maryland software company specializing in adaptive network systems wanted to expand into markets beyond the airline industry it originally had targeted. The TAPP center performed a literature search for firms that were purchasing or producing financial yield predictive software. The company was then able to identify and begin to market its products to two financial services companies that had advertised in trade journals their need to obtain revenue management tools. According to TAPP center officials, there was a learning curve associated with developing their individual programs. They provided the following examples of some of the factors with which they had to deal: Technology must be “pulled by” rather than “pushed upon” the clients. Unlike large corporations, small business owners typically have limited budgets, time, and expertise. Technology is of little benefit to them in the abstract and must have practical applications that can be adapted to the marketplace. Thus, technology is best integrated when a center can provide assistance throughout the various stages of a product’s development or delivery. Promotion is essential because small business owners may not know that they need or can use the technology available. The centers must promote their services through such methods as advertisements in trade publications and seminars. A center’s services must be integrated into those of the SBDC. One of the challenges facing the TAPP centers has been internal promotion (i.e., getting other SBDC staff—whose focus has been toward business planning—to see the advantages of TAPP’s technical assistance services so that they can encourage small business owners to use them). Because officials at each of the five TAPP centers still in the program believed their services were a valuable addition to the types of assistance the SBDCs provide, they said they planned to continue them after federal funding ends in fiscal year 1995. Because they did not know whether or how they would replace the federal funds, however, they were not certain how their programs would be organized or whether they would be able to provide the same level of services. While federal funding for TAPP will be discontinued after fiscal year 1995, the interest in programs providing technical assistance to small businesses continues. Thus, it is possible that the Congress may reconsider the need for similar types of federal programs in the future. If so, the lessons learned under the pilot program could be useful. From analyzing 4 years of TAPP funding and operations, we believe the following questions need to be considered prior to funding any future program: What are the program’s specific objectives? Is a separate and distinct federal program necessary to achieve these objectives? How should the program be financed? While the authorizing legislation stated an ultimate goal for TAPP—increasing the innovativeness and competitiveness of small businesses through improved technology—it did not specify what level of increase was desired or how results could be measured. 
The law did say that the purpose of the program was “increasing access by small businesses to on-line databases that provide technical and business information, and access to technical experts, in a wide range of technologies...” However, it did not define these terms nor did it specify which, if any, segments of the small business community were to be targeted. From the beginning, NIST and the SBDCs differed on the objectives and scope of TAPP. As noted in our earlier report, NIST was concerned that the services provided had too much of a marketing, rather than a technical, orientation and that many TAPP clients were small, local, retail businesses rather than technical or manufacturing concerns. NIST officials had hoped that, while there was no such requirement in the law, eventually 50 percent of the information provided by TAPP centers would be technical in nature. Taking a broader view of technology in the context of TAPP, SBDC officials said that an underlying objective always must be the continued viability of the firms seeking assistance. These officials maintain that it is important not just to disseminate pure technology but also to encourage all businesses to take advantage of whatever technical information is available. This may mean using TAPP databases to obtain marketing information heretofore unavailable to them. The issue seems to have resolved itself within the current program. Projects during fiscal year 1994 were evenly divided between technical and nontechnical information, according to Nexus Associates. NIST officials said they were pleased with the progress the centers had made toward giving TAPP a more technical focus. TAPP was not a new idea; technology assistance programs for small businesses have been available for some time. For example, both the Missouri and Pennsylvania SBDCs already had limited programs that were similar to TAPP in place when they received TAPP grants. Other states, such as New Mexico and North Carolina, have developed “technical” SBDCs on their own to promote and enhance technology transfer. Minnesota’s Project Outreach, which was the model for TAPP, has never received federal funding. Teltech is a private company that has provided technical services under contract to other organizations—including Project Outreach and TAPP centers—on a fee-for-service basis. Generally, the SBDCs appear to agree that they should offer technical assistance to their clients and have begun to establish programs. In a 1991 survey of 56 state SBDC directors conducted by the Association of Small Business Development Centers, 42 directors (75 percent) said they were providing “client-assisted access to databases.” About 60 percent of the SBDCs were providing this service themselves, while the rest were referring their clients to some other organization on an informal or contractual relationship. Eighty-eight percent of the SBDC directors responding to the survey said they were assisting clients in identifying experts who could respond to technical questions. However, only 23 percent of the SBDCs were providing this service on their own; the remainder referred clients to other organizations on an informal or contractual relationship. The survey respondents also noted that they had made a long-term commitment to technical assistance programs. Thirty-three states or areas planned to expand their technology transfer and/or development services, including enhanced access to technical databases. 
Thirty-six states made capital available for research and development, new product development, and access to technology. Technology assistance is also being provided to small businesses under federally sponsored programs other than those administered by the SBDCs. One example is the Manufacturing Technology Centers (MTC) NIST helped establish as a part of its MEP network. MTCs are regionally located and managed centers for transferring manufacturing technology to small and midsized manufacturing companies. MTCs use a wide variety of technology sources, including commercial firms, federal research and development laboratories, universities, and other research-oriented organizations. MTCs differ from the current TAPP centers in that they are regional in nature, focus solely on pure technology, serve only manufacturers, and work with the same clients on an ongoing basis. However, an MTC can provide the same services to a manufacturing client that a TAPP center can provide. In fact, Minnesota’s Project Outreach, which was the model for TAPP, is now a part of an MTC in the state. Federal appropriations for the TAPP program over its 4 years totaled $3.5 million—far less than the $20 million authorized. As shown in appendix II, none of the centers received more than $200,000 in any one year. Actual budgets were larger, of course, because the law required matching funds. SBDC officials agreed with our observation that the TAPP funding allowed them to create and operate dedicated technology-assistance programs that might not have been possible otherwise. One advantage was that the funding covered the start-up costs of the centers. During the first 2 years of the program, there was a considerable learning curve as the centers established their programs, developed a service mix, and promoted themselves to potential users. Another advantage was that the funding allowed the centers to provide services at little or no cost to prospective clients. The SBDC officials believed that this gave the centers the capability to offer a wider range of services and to serve more businesses. The TAPP law envisioned technology-assistance centers within the SBDCs that eventually would be at least partially self-sustaining. For example, the law gave as one of the selection criteria “the ability of the applicant to continue providing technology access after the termination of this pilot program.” The law also encouraged the TAPP centers to try to obtain funds from other federal and nonfederal sources. In practice, most of the support came from the TAPP funding itself, the SBDCs, the states, or the educational institutions with which the centers were affiliated. One option for funding a technology-assistance program is for the program to charge businesses a fee for the services they use. This is one reason the Oregon center has been able to operate after TAPP funding ended. During the program’s first 2 years, the Oregon center received a total of $325,000 in TAPP funds plus matching state funds. Since the end of fiscal year 1993, however, the center has relied on donations and client fees to operate. Currently, clients are charged $30 an hour plus on-line expenses. According to Oregon center officials, clients pay an average of approximately $114 per search. During the TAPP years, client fees averaged about $10 per search. In 1994, fees totaled about $7,500, or 19 percent, of the Oregon center’s budget of $40,000. 
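The fee figures for the Oregon center reduce to simple arithmetic: the share of the budget covered by fees and the approximate number of paid searches those fees imply. A brief illustrative check (the derived search count is an approximation inferred from the rounded figures above, not a number from the report):

```python
# Illustrative arithmetic for the Oregon center's 1994 fee income, using the
# approximate figures cited above.
fees_collected = 7_500        # dollars (approximate)
annual_budget = 40_000        # dollars
avg_fee_per_search = 114      # dollars (approximate)

print(f"Fees covered about {fees_collected / annual_budget:.0%} of the budget")
print(f"Implying roughly {fees_collected / avg_fee_per_search:.0f} paid searches")
# about 19% of the budget; roughly 66 paid searches
```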
Its director believed that, in some ways, the center improved after it began to be self-supporting because clients took it more seriously and were more cautious about the services they requested when they had to pay for them. At the same time, the Oregon center has had to scale down its operations now that it no longer receives federal grants and matching state funds. While Minnesota's Project Outreach receives the bulk of its funding from state appropriations, it also charges a fee for services. For example, "client companies," which can access services directly, must pay an annual fee based on sales as well as a fee for certain services. An expert consultation, literature search, or vendor search costs a client company $35 per use. There is no annual fee for "public access users," who can obtain services through remote terminals across the state. However, there is a higher charge for services, such as $50 for a consultation, an interactive literature search, or a vendor search. In some cases, such as gaining access to certain information on the University of Minnesota's databases, there is no charge to either type of user. The five TAPP centers still receiving federal grants in fiscal year 1995 had not generated any significant revenues by charging fees for services. Generally, the services were either offered to clients for free or for a fee well below what they would have cost if purchased from a private vendor. This was intentional because the centers used their free and low-cost services to attract clients who might benefit from their technical assistance. While some centers were considering fee-for-service arrangements as one possibility for funding services after the end of TAPP funding, they had not yet finalized any plans. In its fourth and final year of funding, TAPP is fully operational in the five states still participating in the program. Each of the five states as well as Oregon—which dropped out of the program after fiscal year 1993—plans to continue on some level. However, the states are not certain how the centers will be organized, what services will be provided, or where funding will be obtained. NIST officials are no longer concerned that the TAPP centers are focusing on marketing rather than technical services. Data from fiscal year 1994 indicate that about half the services being provided were of a technical nature, which is the ratio NIST envisioned at the program's inception. Moreover, 59 percent of the users were manufacturing companies. Generally, both the users and the SBDCs were pleased with the services being provided and the results achieved. Because the Congress has decided not to extend TAPP funding past fiscal year 1995, we identified no issues that need to be addressed on the current program. If the Congress decides to fund a program similar to TAPP in the future, it may wish to consider some of the lessons learned, or issues that emerged during the pilot program. These include (1) adding more specificity to the objectives and goals of the program; (2) determining whether a separate and distinct federal program is needed and, if so, what type of organization is best suited to manage it; and (3) deciding how the program should be funded, including charging user fees for the services provided. A draft of this report was sent to both SBA and the Department of Commerce for comment. In its written comments, SBA generally concurred with the findings and conclusions in our draft report. (See app. X.)
Commerce, whose comments are included in appendix XI, said that the report (1) contained information which incorrectly characterized TAPP, MEP, and the role of NIST in implementing TAPP and (2) did not provide an adequate context from which to determine the lessons learned from TAPP and how those lessons fit into an overall concept of technical assistance. Specific issues related to Commerce’s two concerns are discussed below. Commerce disagreed first with our characterization of the emphasis NIST placed on the technical orientation of the TAPP centers. For example, Commerce disagreed with our use of the term “scientific information” in describing the types of services NIST wanted to emphasize under TAPP and asked that we use the broader description “technology and technical information.” Commerce also said that NIST officials had never set a 50-percent goal for such services but rather had sought a “balance” in technical and nontechnical services compared to marketing services. We agree with Commerce’s clarification that NIST wanted a technical, and not just scientific, orientation for TAPP and have revised our report accordingly. We disagree, however, that NIST did not set a 50-percent goal for such services, as NIST and TAPP center officials discussed this goal with us during our work on both the interim and current reports. Secondly, Commerce believed the report mischaracterized NIST’s evaluation efforts regarding TAPP. For example, Commerce disagreed that NIST had “cancelled” its evaluation plans, as we had noted in our report. Instead, Commerce asserted that NIST had revised its evaluation methodology. Commerce also said the report improperly characterized Nexus Associates as a NIST consultant on TAPP when Nexus actually was a subcontractor to the University of Houston’s SBDC. Commerce also believed that the report did not elaborate sufficiently on the problems associated with evaluating TAPP. Commerce pointed out that there are no models that could be used to establish a clear correlation between the information provided by a TAPP center and increased productivity and innovation as well as other positive economic indicators. According to Commerce, the key determinant is not the information provided but what is done with that information. Developing proper models would require follow-up over a period of years with clients who are willing to share continuing and potentially sensitive feedback on how the information is being used and what changes it has generated in the clients’ operations. Furthermore, Commerce said that we had previously agreed to fund and develop a survey that met our impact evaluation needs, as well as those of NIST and the TAPP centers. We disagree with Commerce’s assertion that NIST did not cancel its evaluation plans for TAPP. The discussion of this issue in our report focused on the evaluation of program impact. While NIST has continued to evaluate the program by collecting data from client surveys, we do not believe that these surveys address program impact. We have clarified this issue in our report. We also disagree that we mischaracterized the role of Nexus Associates. While Nexus was funded through the University of Houston’s SBDC, it performed analyses of programwide information, was referred to as a TAPP evaluation consultant by NIST officials, and presented its analyses to NIST. We agree with Commerce’s comments on the problems inherent in evaluating TAPP. 
We made this point in the interim report when we stated that “the data needed to evaluate the effectiveness of the program are not yet available and may not be available for some time.” We also stressed this point in November 1994 correspondence with the congressional committees when we agreed that the focus of this report should be on the lessons learned from TAPP. Contrary to Commerce’s comments, we did not agree to fund and develop the survey instrument. As a third concern, Commerce said that the report needed to provide a better context for how the lessons learned under TAPP fit into the overall concept of technical assistance. Commerce believed that the most important question that we raised in considering future needs is whether a separate and distinct federal program, such as TAPP, is necessary. Commerce said that the types of services provided by TAPP are not “stand-alone” services and that they must be considered within the broader context of services available under MEP. While we agree with Commerce on this point, such an analysis was beyond the scope of this report. Finally Commerce questioned the report’s characterization of MEP. Commerce noted that MEP supports American manufacturers nationally and internationally through ongoing technological deployment, not through technological development as stated in the report. Similarly, Commerce believed the report did not go far enough when it said that an MTC can provide the same types of services to manufacturers that a TAPP center could provide to SBDC clients. Commerce said that MEP’s manufacturing extension center organizations, of which the MTC is one type, actually can provide more such services. We agree with Commerce’s comments on the role of MEP and revised the report to say that MEP supports manufacturers through technological deployment. Also, we do not question that MEP may be able to provide more services to its clients than a TAPP center. We made no revisions to the report, however, as our point was to show that there are other organizations providing the same types of services as TAPP, rather than to compare the quality or quantity of the services provided. We conducted our work between August 1994 and June 1995 in accordance with generally accepted governmental auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretary of Commerce; the Administrator of SBA; and the Director, Office of Management and Budget. Major contributors to this report are listed in appendix XII. Please contact me at (202) 512-3841 if you or your staff have any questions. Public Law 102-140, enacted October 28, 1991, required GAO to issue two reports on the Pilot Technology Access Program (TAPP). The first, or interim, report was to discuss the program’s implementation and progress. We issued our first report on March 7, 1994. The second report was to determine the program’s effectiveness and impact on improving small business productivity and innovation. Prior to our beginning work on the second report, we learned that the Congress did not intend to fund TAPP beyond fiscal year 1995. Therefore, we met with the authorizing committees to determine what work was needed to meet the legislative mandate and to provide the Congress with information it might be able to use on similar programs in the future. We agreed to report on the experiences of and lessons learned by the TAPP centers during the pilot program. 
To carry out our objectives, we first met with the federal officials responsible for the management and the oversight of the program. These consisted of officials within (1) the Office of Small Business Development Centers (SBDC) in the Small Business Administration (SBA) and (2) the Manufacturing Extension Partnership (MEP) of the National Institute of Standards and Technology (NIST). We reviewed pertinent documents maintained by these agencies, including reports filed by the individual TAPP centers. We also reviewed materials prepared by a NIST contractor, Nexus Associates. We visited each of the five TAPP centers still in the program in fiscal years 1994 and 1995. These centers were located in SBDCs in Maryland, Missouri, Pennsylvania, Texas, and Wisconsin. We also visited the center in Oregon, which dropped out of TAPP after fiscal year 1993. At each location, we reviewed budgets, reports, and other materials and talked with key officials within the TAPP center and the SBDC. We also met with clients to obtain their perspectives on the TAPP services they had received. For comparison purposes, we visited Project Outreach in Minnesota, which was the model for TAPP; a technical SBDC in North Carolina; and a Manufacturing Technology Center in South Carolina. At each of these locations, we obtained an overview of the organization and services, met with key officials, and reviewed background documentation. We also talked with other persons who had background information on the technology needs of small businesses. These included the Association of Small Business Development Centers and two national associations that deal with small business issues. We asked both SBA and the Department of Commerce to provide comments on a draft of this report. SBA’s written comments are included in appendix X, and Commerce’s written comments are included in appendix XI. We incorporated their comments where appropriate. Also, we discussed the information included in the appendixes about each TAPP center with appropriate center officials. The law authorizing TAPP required GAO to issue two reports on the program. The first, or “interim,” report was to address the implementation and progress of the program. A “final” report was to evaluate the effectiveness of the program in improving small business productivity and innovation. On March 7, 1994, we issued our first report on TAPP entitled Federal Research: Interim Report on the Pilot Technology Access Program (GAO/RCED-94-75). In this report, we discussed the implementation of the six centers that had been established and concluded that it was too early to determine their impact on small businesses within their states. However, we did raise concerns about the evaluation methodology NIST had developed to measure such effects and the difficulties inherent in trying to link the information being provided with improving productivity. NIST had not attempted to develop an evaluation plan during the program’s first year, when the centers were in the process of getting established. In March 1993, during the second year, NIST asked the centers to conduct a postcard survey similar to one used by the Maryland center. This survey asked clients using TAPP services (1) if they had received the information they needed, (2) if they had used the information for making business decisions, (3) what type of information was most useful, (4) if they would use the program in the absence of a subsidy, and (5) what prices they would consider paying for TAPP services. 
However, this attempt at evaluation had little effect because (1) only 60 clients were surveyed in Maryland and only 47 responded; (2) only three other centers conducted surveys; and (3) the other surveys did not ask the same questions, making comparisons among the centers impossible. As a part of the fiscal year 1994 proposal process, NIST encouraged the centers to develop a standard client evaluation methodology. This would include three survey questionnaires of clients. The first would be a questionnaire on client satisfaction that would be distributed to clients immediately after a service was provided. The second questionnaire would ask about the impact of the service 6 months later. The third would ask clients how the service had affected the client's competitive position in the marketplace a year after receiving the service. In our first report, we raised questions about the reliability of the data that would be obtained through the use of these questionnaires. We said that the questions were not clear or precise, did not make a connection between program impact and increased productivity, and failed to ask basic questions regarding client satisfaction with the program. We concluded that we had little confidence the questionnaires in their current form could be used to measure a center's effectiveness, particularly considering the anticipated low response rate. In response to our first report, the Secretary of Commerce informed us in May 1994 that NIST planned to change its approach with the evaluation questionnaires. The changes would consist of (1) improving the initial client-satisfaction questionnaire; (2) eliminating the other two questionnaires to reduce the burden on TAPP clients; (3) replacing the two questionnaires that were dropped with a new survey instrument that better suited the requirements of GAO, NIST, and the TAPP centers; and (4) developing an analytic report of the data already being generated by the program. TAPP funds would be used to hire a consultant to develop the analytic report. After learning that TAPP was not going to be funded past fiscal year 1995, NIST officials decided against pursuing most of the evaluation plans they had set out. Instead, the TAPP centers were instructed to use only the initial client-satisfaction questionnaire. Also, NIST provided the University of Houston with funding for a contract with Nexus Associates, Inc., to develop an analytic report using data the program generated. Nexus Associates already has prepared a presentation using statistics from reports the centers submitted and the results of the client evaluation survey for fiscal year 1994. In addition, NIST plans to have Nexus Associates critique the other two questionnaires originally intended to provide NIST with information it could use to plan evaluations of future programs. The Maryland Technology Expert Network (TEN) is a part of the Manufacturing and Technology SBDC located at and affiliated with the University of Maryland in College Park. TEN offers small business clients both on-line and off-line services in the form of literature searches, intellectual property searches, expert consultations, and document delivery. These services are used to complement other services offered these same clients by the SBDC. While TEN has been a TAPP participant from the beginning, it has evolved over the years into its current configuration. For the first 3 years, services were provided by Teltech Resource Network Corporation (Teltech) under an exclusive contract.
This contract was not continued in fiscal year 1995 because SBDC officials believed they could provide the necessary services in-house at a lesser cost and because they were seeking ways to become self-sustaining after the end of TAPP funding. Instead, the SBDC has contracted with the University of Maryland’s College of Library and Information Services (CLIS), which provides essentially the same database services at a reduced cost. More than 90 databases in a variety of subjects are accessible through the university’s library system. The SBDC also has access to experts associated with the university as well as external contacts. TEN focuses on serving small manufacturing firms, technology companies, and technology-related service companies, such as systems integrators and environmental service companies. TEN informs potential clients of its services through (1) personal contact with SBDC clients; (2) newsletters of various trade organizations and state economic development agencies; (3) targeted mailings; and (4) training events, seminars, workshops, and conferences. TEN has two key personnel that are responsible for its operations. The SBDC State Director provides program oversight while other SBDC staff inform clients of TEN services through their own counseling activities. Clients can access the center through any one of 28 locations throughout the state. TEN personnel have developed the TEN Information System (TENIS), an automated management information system to gather and report evaluation data; process client-tracking statistics; and produce monthly reports on clients by access site, counselor, and date. TENIS is also used to control client invoice information to ensure timely collection of fees. TEN personnel are primarily intermediaries between the client and the database vendor. Upon receipt of a client’s request for a database search, the request is entered into TENIS and forwarded to the vendor. The vendor conducts the search and sends the results to TEN, which delivers them to the client. Search results are typically given in conjunction with business consulting services. Maryland was not among the original states selected for TAPP in fiscal year 1992, the program’s first year. Upon review, NIST and SBA determined that Maryland would be a good site for the program because of a large concentration of high-tech companies and several government research and development locations in the state. Maryland was added to the program at a reduced level of federal funding—$50,400 compared to $200,000 for each of the other centers. TEN subsequently received $50,000 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. TEN has received matching funds from the state, resulting in total state and federal funding of $887,754 over the life of the program. To supplement the funds available for its services, TEN has implemented a client fee structure. Initial searches are free, but the next four searches each require a $25 fee for remote literature, patent, and vendor searches and a $50 fee for expert consultations and literature searches. Clients are charged the market rate for the sixth and subsequent searches. As shown in table IV.1, TEN served many segments of the small business community during fiscal year 1994. The 336 clients served represent an increase of 65 percent over fiscal year 1993. The greatest areas of concentration were in the service and manufacturing segments, which accounted for 82 percent of the clients served. 
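The TEN fee schedule described above (the first search free, a flat per-search fee for the next four that depends on the type of service, and market rates from the sixth search on) can be written as a short rule. A minimal sketch under those assumptions; the function name and service labels are illustrative, and the market rate is simply passed through:

```python
# Illustrative encoding of the TEN client fee schedule described above.
# Searches 2 through 5 cost $50 for expert consultations and literature
# searches, and $25 for remote literature, patent, and vendor searches.

FIFTY_DOLLAR_SERVICES = {"expert consultation", "literature search"}

def ten_fee(search_number, service_type, market_rate):
    """Return the fee for a client's nth search (1-based)."""
    if search_number == 1:
        return 0.0                                    # initial search is free
    if search_number <= 5:
        return 50.0 if service_type in FIFTY_DOLLAR_SERVICES else 25.0
    return market_rate                                # sixth and later searches

print(ten_fee(1, "patent", market_rate=200.0))        # 0.0
print(ten_fee(3, "expert consultation", 200.0))       # 50.0
print(ten_fee(7, "vendor", 200.0))                    # 200.0
```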
As shown in table IV.2, TEN responded to a total of 627 requests for database information during fiscal year 1994, an increase of 84 percent over 1993. Forty-one percent of these requests were of a technical nature. TEN currently attempts to measure client satisfaction and program impact through a survey mailed to the client after a service has been provided. This survey requests information on the quality of customer service, the quality of information received, the accessibility of information outside of TEN, the dollar value of information received, and the type of information most critical to the client. The response rate for the fiscal year 1994 survey was 39 percent. Client responses were generally positive. In summary, users found the information from TEN to be very helpful, relevant, and current. Thirty-one percent rated the value of the information at $500 or more and 96 percent said they would recommend the services to others. TEN uses client interviews as another form of data collection. The interviews are conducted some months after a client's use of TEN to determine the client's valuation of the economic impact of TEN services. Although few interviews have been conducted to date, TEN plans to begin client interviews on a larger scale in the third quarter of 1995. SBDC officials were pleased with the performance of TEN and planned to continue the program after the termination of TAPP funding. By using services available through CLIS, TEN is transitioning to a state-sponsored program, providing services with in-state resources and some combination of state funding, user fees, and corporate sponsorships. The total amount budgeted for the fiscal year 1995 CLIS contract is $63,636. This figure includes $40,295 to cover such fixed costs as salaries, equipment, and on-line subscriptions; and $23,341 to cover such variable costs as supplies, telecommunications, expert consultations, and on-line searches. According to SBDC officials, the new arrangement will have limitations. First, CLIS does not have a well-established and extensive database of technical experts from which to pull resumes. Thus, while TEN can identify experts through CLIS, its database is not as extensive as Teltech's. With time, TEN hopes to develop its own database of experts. Second, interactive searches are not as accessible by staff in the field as they were with Teltech. Interactive searches are now only conducted through the Manufacturing and Technology SBDC in College Park and to a lesser extent in Baltimore. The Missouri Technology Access Program (MOTAP) is a part of the Missouri SBDC and is affiliated with the University of Missouri in Columbia, the University of Missouri in Rolla, and Central Missouri State University in Warrensburg. MOTAP offers small business clients both information services and technical assistance in the form of literature searches, patent searches, expert consultations, and document delivery. These services complement other services the SBDC offers these same clients. MOTAP is a coordinated effort among staff located at the three university campuses. The Missouri SBDC, located on the Columbia campus, houses the marketing component of MOTAP. The Technology Search Center in Rolla and the Center for Small Business Technology and Development in Warrensburg house the technical search capabilities. The Missouri SBDC State Director in Columbia provides management oversight for MOTAP. MOTAP targets the manufacturing community.
MOTAP informs potential clients of its services through (1) training events, (2) seminars aimed at the manufacturing community, (3) relationships with network partners who inform their clients about MOTAP, and (4) newsletters and targeted mailings. MOTAP also markets the program internally to SBDC counselors to inform them of its services. The Missouri SBDC offered its clients on-line database searches and access to technical experts prior to federal TAPP funding. With TAPP funding, the SBDC hired two additional persons—one to conduct marketing database searches and one to provide technical assistance. TAPP funds increased the capabilities of existing SBDC functions and added the capability to provide marketing assistance. Six people are involved in the MOTAP marketing information search function in Columbia. A marketing specialist devotes 75 percent of his time to MOTAP and is supported by two research associates who devote 33 and 25 percent of their time to the program, respectively. Three other persons handle programming and administrative functions. Nine people perform the technical support function in Rolla and Warrensburg. Included are a technical project manager and a technology transfer coordinator who devote 76 and 25 percent of their time to the program, respectively. The remainder of the staff includes university faculty, a consulting engineer, and administrative support personnel. Other SBDC staff also provide assistance by informing clients of MOTAP services through their own counseling activities. Clients may access MOTAP through any one of 12 regional SBDC locations, 17 university extension locations, or 2 special service centers. The methods by which MOTAP services are provided may vary depending on the circumstances. Information services range from single answers to specific questions to lengthy "information counseling" projects that provide clients with information on a broad topic or opportunity. Such projects can involve multiple database searches, extensive data processing, and compiling reports. Technical assistance also varies from one-time answers to in-depth analyses of processes or problems by technical experts, student teams, field engineers, etc. MOTAP staff at the three campus locations must coordinate their efforts to provide a complete package of marketing and technical services to their clients. For example, if the staff in Rolla performed database searches for market and patent information, this could lead to follow-on services provided by the staff in Warrensburg who provide assistance in developing prototypes, identifying manufacturing facilities, patenting advice, licensing contacts, and other technical services at no cost or on a cost-recovery basis. MOTAP has been a part of TAPP since it began in fiscal year 1992 and has received $700,400 over the life of the program. This includes $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. MOTAP has received matching funds from the state for each of these years, resulting in total state and federal funding of $1,419,130 over the life of the program. MOTAP also has collected a total of $24,242 in client fees. As shown in table V.1, MOTAP served many segments of the small business community during fiscal years 1993 and 1994. The 230 clients served represent a decrease of 9 percent from fiscal year 1993.
The greatest area of concentration was in the manufacturing segment, which accounted for 64 percent of the clients served in fiscal year 1994. As shown in table V.2, MOTAP processed a total of 283 information requests during fiscal year 1994, a decrease of 34 percent from fiscal year 1993. Fifty-five percent of these requests were of a technical nature. Although unsure why the number of clients served and requests answered declined in 1994 from the previous year, the state marketing specialist speculated that the floods Missouri experienced during July of 1993 reduced requests. Following the floods, many small businesses in Missouri may have been more concerned with repairing flood damage and related business slowdowns than with identifying new business opportunities. MOTAP uses several methods to measure the effectiveness of its services, including client surveys, seminar evaluations, and comments received from clients following visits to its business sites. MOTAP applies information received from these efforts to adapt its services, communications, and management practices. MOTAP sends each client a satisfaction survey the quarter following the client's MOTAP project. The survey asks questions concerning the quality of MOTAP services, the perceived value of its information, and the likelihood of obtaining similar information outside of MOTAP. The response rate for fiscal year 1994 was 29 percent. Client responses were generally positive. In summary, users found the information MOTAP provided to be helpful, current, concise, relevant, and of overall good quality. More than half of the respondents rated the financial value of the information higher than $150. Forty-three percent of the respondents, however, felt their chances were at least "somewhat likely" that they could have obtained the information outside of MOTAP. MOTAP experienced difficulties in evaluating the impact of its services because many respondents answered survey questions in a form that could not be tabulated. One reason is that respondents often provided descriptions of the ways they used the TAPP information but could not express its impact on their businesses in percentage or monetary terms. Another reason is that the typical response rate on MOTAP questionnaires was approximately 25 percent. According to MOTAP officials, a rate this low does not allow a projection of the total program impact with any statistical confidence. Third, respondents often confused information obtained through the MOTAP program with information obtained through other SBDC services—which is understandable because MOTAP services are primarily delivered through SBDC counselors. The Missouri SBDC is updating its survey techniques to minimize the problems with evaluating its services. For example, the Missouri SBDC is developing an exit interview for clients so that the interviewer may ask follow-up questions that will help interpret the responses. Although planning to offer its clients MOTAP services after federal funding ends in 1995, the Missouri SBDC is not sure how the services will be funded or provided. According to SBDC officials, on-line database searching and expert services have been an integral part of the package of services offered by the SBDC. The SBDC will most likely downsize the center and save only the most critical parts. The Pennsylvania Business Intelligence Access System (BIAS) is a part of the Pennsylvania SBDC network and is affiliated with the University of Pennsylvania in Philadelphia.
BIAS offers small business clients both on-line and off-line services in the form of literature searches, patent searches, expert consultations, and market analyses. These services are used to complement other services the SBDC offers these same clients. According to the Pennsylvania SBDC State Director, the primary emphasis of the BIAS program is education, also one of the main goals of TAPP. He said many of the BIAS presentations to clients are not sales presentations, but workshops with clear educational goals. In addition to providing on-line services, SBDC consultants explain and often demonstrate technology to clients. BIAS is implemented by the Ben Franklin Technology Center (BFTC), a small business incubator facility. The Pennsylvania SBDC contracted with the Business Information Center (BIC) of the BFTC to manage the BIAS program. The Pennsylvania SBDC State Director provides management oversight for BIAS. BIC is responsible for managing the research process and training both the SBDC consultants and the public. BIC also administers the contract with the database vendors—Telebase and Knowledge Express. Other vendors BIC can access include Batorlink, Internet, and Community of Science. These vendors provide access to more than 3,000 databases of business and technical information, including resumes of university experts from major research universities. BIAS is the only TAPP center that did not contract with Teltech for the first year of the program. Because BIAS has access to the Pennsylvania Technical Assistance Program (PENNTAP), a network of experts, it elected not to contract with Teltech. For the second year of the program, BIAS decided to experiment with Teltech to attract more of its clients to request expert searches. However, because demand for expert searches remained low, BIAS did not renew the Teltech contract for the third year. BIAS focuses on the manufacturing and technology sectors—particularly the advanced materials, biotechnology, and computer hardware and software development industries. BIAS also targets firms adversely affected by reductions in defense procurements, seventy percent of which are in manufacturing and technology-based industries. BIAS informs potential clients of its services through (1) personal contact with SBDC clients; (2) mailings and briefings to various trade organizations; (3) mailings to potential clients; (4) news media and on-line networks; and (5) seminars, workshops, and conferences attended by SBDC clients. Six months prior to federal TAPP funding, the BIC began providing on-line database searches to BFTC clients at a rate of $75 an hour plus expenses. TAPP funding enabled the SBDC to subscribe to services provided by the BIC and offer them to SBDC clients at a subsidized rate. BIAS charges its clients 70 percent of on-line expenses exceeding $75. Under the management of the SBDC assistant state director, two professional information specialists at BIC devote 50 percent of their time to the center. Other SBDC staff also provide assistance by informing other clients of BIAS services through their own consulting activities. BIAS can be accessed through any one of the 16 university-affiliated SBDCs or 70 community outreach offices. In contrast to other TAPP centers, Pennsylvania SBDC consultants are the main providers of BIAS services. After receiving training from the BIC’s senior information specialist, these consultants perform most of the database searches for SBDC clients. 
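The cost-sharing arrangement noted earlier in this appendix, under which BIAS clients pay 70 percent of on-line expenses exceeding $75, amounts to a one-line calculation. A small illustrative example (the dollar amounts in the loop are hypothetical, not figures from the report):

```python
# Illustrative application of the BIAS cost-sharing rule described above:
# the client pays 70 percent of any on-line expenses beyond the first $75.

def bias_client_charge(online_expenses):
    return 0.70 * max(online_expenses - 75.0, 0.0)

for expenses in (50.0, 75.0, 200.0):                  # hypothetical search costs
    print(f"${expenses:.2f} in on-line expenses -> client pays "
          f"${bias_client_charge(expenses):.2f}")
# $50.00 -> $0.00, $75.00 -> $0.00, $200.00 -> $87.50
```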
BIC information specialists support the SBDC consultants and provide assistance for particularly difficult search projects. According to SBDC officials, this arrangement makes the service more accessible to clients, expands the SBDC’s searching capacity, and strengthens the consultants’ database searching skills. Clients needing expert consultations are referred to PENNTAP, an in-state network of technical consultants. When using PENNTAP, clients are referred to technical experts by the PENNTAP regional staff person. These people identify the appropriate network expert and facilitate the consultation. Other experts can be identified using electronic databases. BIAS has been in TAPP from the beginning and has received $700,400 in federal funding. This included $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. BIAS has received matching funds from the state for each of these years. As shown in table VI.1, BIAS served many segments of the small business community during fiscal years 1993 and 1994. The 427 clients served represent an increase of 45 percent over fiscal year 1993. The greatest areas of concentration were in the manufacturing and service segments, which accounted for 70 percent of the clients served. As shown in table VI.2, BIAS responded to a total of 847 information requests during fiscal year 1994, an increase of 112 percent over fiscal year 1993. Only 18 percent of these information requests were of a technical nature. BIAS uses a brief mail survey to measure client satisfaction. The survey asks how BIAS information was used in the business, the financial value of the information, the likelihood of obtaining similar information outside of BIAS, and which type of information was most useful. Although the response rate for the fiscal year 1994 evaluations was nine percent, the clients’ responses were generally positive. In summary, clients found the information from BIAS to be concise and current and would recommend that other businesses contact BIAS. Forty-five percent valued the information at more than $100. However, 49 percent indicated their chances of obtaining similar information elsewhere was at least “somewhat likely.” Focus groups were also used to obtain input from clients and consultants concerning needs for on-line information. The information gained during the focus group sessions is used to inform BIAS staff of how to tailor the program to meet the needs of both clients and consultants. The SBDC plans to offer its clients BIAS services after federal TAPP funding ends in 1995. According to SBDC officials, BIAS services will be further incorporated into the SBDC’s basic operations while continuing to use BIC for many BIAS functions. SBDC officials believe that their arrangement with the BIC has been effective and will need only minor modifications in the future. Sources of funding being investigated include the state, other federal sources, and the private sector. The Texas Technology Access Program (TAP/Texas) is a part of the Texas Product Development Center (TPDC), a specialty center of the University of Houston SBDC. TAP/Texas offers small business clients both on-line and off-line services in the form of literature searches, patent searches, expert searches, and document delivery. TAP/Texas is managed by the Director of the TPDC with general oversight from the SBDC Director of the Houston Region. 
The TPDC and the SBDC are two of five functional areas under the University of Houston Institute for Enterprise Excellence. The other three functional areas are the Texas Manufacturing Assistance Center Gulf Coast, the Texas Information Procurement Service, and the International Trade Center. These five functions coordinate efforts to provide a full range of consulting services to small business clients. Clients of any of the five functional areas have access through TAP/Texas to more than one thousand databases through vendors like Knowledge Express, Dialog, Teltech, and Lexis/Nexis. Special in-state database resources are also available. These include the Mid-Continent Technology Transfer Center at Texas A&M University, TEXAS-ONE/Texas Marketplace, and the Texas Innovation Network System. These sources offer access to databases of the National Aeronautics and Space Administration (NASA) and federal laboratories, electronic bulletin boards containing directories of Texas companies, and access to technical experts and research facilities in Texas. TAP/Texas targets small manufacturers and technology-oriented service companies throughout Texas. TAP/Texas informs potential clients of its services through (1) personal contact with clients; (2) direct mail to targeted industries and trade associations; (3) participation in trade shows and conferences, including demonstrations of on-line capabilities; and (4) classroom workshops. The TPDC Director and one consultant at the TPDC work full time in the program while four additional staff provide support on a part-time basis. SBDC staff also provide assistance by informing clients of TAP/Texas services through counseling. TAP/Texas can be accessed through any one of 56 SBDC locations across the state. The methods by which TAP/Texas services are provided may vary depending on the situation. For example, the information specialist may conduct database searches independently after receiving a search request or interactively with the client guiding the search. Depending on the information requirements and time frames, the SBDC consultant and client may access databases directly from a remote location without the assistance of the information specialist. TAP/Texas has been a part of TAPP since it began in fiscal year 1992 and has received federal funds totaling $720,500 over the life of the program. This includes TAPP funding of $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. TAP/Texas also has received additional funds from the state, resulting in a total state and federal funding of $1,618,813 over the life of the program. To supplement funds available for on-line searches, TAP/Texas implemented a client fee structure in fiscal year 1994. Initial searches are free, but additional searches require a client co-payment. Fees collected for 114 co-payment searches total $2,744. As shown in table VII.1, TAP/Texas served many segments of the small business community during fiscal years 1993 and 1994. The 402 clients served in fiscal year 1994 represent an increase of 76 percent over the previous fiscal year. The greatest areas of concentration were in the manufacturing and service segments, which accounted for 63 percent of the clients served in fiscal year 1994. As shown in table VII.2, TAP/Texas responded to a total of 445 information requests during fiscal year 1994, an increase of 83 percent over the previous fiscal year. 
Thirty-three percent of these information requests were of a technical nature. To measure client satisfaction, TAP/Texas uses a brief mail survey, which is distributed to clients immediately after the first data search is provided. The survey asks clients to evaluate the quality of customer service, the quality of data received from the searches, the accessibility of data outside of TAP/Texas, the value of the data received, and the type of data most critical for their needs. A follow-up letter is sent to nonrespondents after 30 days to increase the response rate. The response rate for the fiscal year 1994 client surveys was 32 percent. Client responses were generally positive. In summary, clients have found the information provided by TAP/Texas to be helpful, relevant, and of overall good quality. Fifty percent of the clients valued the information provided at more than $100. Forty-three percent of the respondents, however, felt it was at least “somewhat likely” that they would have obtained the information elsewhere. Focus groups are also used to obtain input from clients concerning on-line information needs. The information gained during the focus group sessions is used to inform TAP/Texas staff of how to tailor the services to meet the needs of both clients and consultants. The TPDC plans to offer its clients TAP/Texas services after federal funding ends in 1995, although officials are not sure how the program will be funded or what level of services will be available. On-line database searching is, and has been, an integral part of the package of services offered by the Institute for Enterprise Excellence. Depending on the future level of funding, however, the TPDC may have to reduce or even discontinue technology access services. The Wisconsin Technology Access Program (WisTAP) is a part of the Wisconsin SBDC and is affiliated with the University of Wisconsin. WisTAP helps small manufacturers and technology companies solve both technical and business management problems through technical counseling, on-line literature searches, and patent searches. These services are used to complement business management services offered to these same clients by the SBDC. WisTAP is a decentralized program implemented through ten SBDCs located across the state. The central office in Whitewater coordinates the efforts of the other SBDCs while also providing counseling, assisting with the development of marketing plans, coordinating all remote literature searches, monitoring the activity level for each center, and offering support or shifting resources as needed. The WisTAP central office is staffed by a half-time Director and a half-time research specialist. The Wisconsin SBDC State Director provides management oversight for WisTAP. WisTAP targets small manufacturers and technology-based businesses. WisTAP has developed “marketing partners,” including various trade associations, state agencies, and regional and national technology transfer organizations, to leverage the marketing dollars available. Marketing partners provide mailing lists, underwrite mailings and promotional events, and assist with publications. WisTAP uses information provided by the marketing partners to support targeted marketing efforts. For example, the Wisconsin Manufacturers and Commerce Association provided each SBDC with a database of its members. This database of over 8,500 manufacturers can be sorted by geographic area, type of company, and number of employees. 
The SBDC offices are able to use this information to reach small manufacturers in their area. The Wisconsin SBDC did not offer its clients technical counseling and on-line database searches prior to federal TAPP funding. WisTAP has added a new dimension to the SBDC by allowing it to broaden its focus to include technology access issues. Counselors at ten SBDC offices across the state and the Wisconsin Innovation Service Center are the primary deliverers of WisTAP services. Rather than locate database experts in a central location, WisTAP attempts to train all SBDC counselors at the various sites on database access. This organizational structure was developed in late 1993 to encourage “one stop” service delivery for WisTAP clients. Because WisTAP services are delivered through an SBDC counselor, clients may obtain the more traditional SBDC services (e.g., market analysis and management planning) in conjunction with technology access services. Teltech was the primary vendor for on-line services and access to technical experts during the first year of the program. Although WisTAP has been generally satisfied with the services offered by Teltech, the relative cost of its services has prompted WisTAP to identify alternative sources of information. Teltech is now a complement to WisTAP services rather than its primary provider. WisTAP has collaborative arrangements with a variety of sources of technical assistance and vendors. Examples include University-Industry Relations and Wisconsin Techsearch at the University of Wisconsin-Madison and the Office of Industrial Research and Technology Transfer at the University of Wisconsin-Milwaukee. These sources, among others, provide access to technical counseling by university faculty, database search and document delivery services, and other consulting services. Like the Wisconsin SBDC, WisTAP does not charge fees for its services. The Wisconsin SBDC does charge fees for training; however, none of these fees are credited to the WisTAP account. WisTAP has been a part of TAPP since it began in fiscal year 1992 and has received $700,400 in federal funds over the life of the program. This includes $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. WisTAP has received matching funds from the University of Wisconsin-Extension for each of these years, resulting in a total state and federal funding of $1,411,100 over the life of the program. As shown in table VIII.1, WisTAP served many segments of the small business community during fiscal years 1993 and 1994. The 445 clients served in fiscal year 1994 represent an increase of 16 percent over fiscal year 1993. The greatest area of concentration was in the manufacturing segment, which accounted for 71 percent of the clients served in fiscal year 1994. As shown in table VIII.2, WisTAP processed a total of 641 information requests during fiscal year 1994, a decrease of 39 percent from fiscal year 1993. Seventy-three percent of these requests were of a technical nature. WisTAP attributes the decline in information requests to two factors. First, WisTAP changed its reporting practices from 1993 to 1994. The 1993 figures represent projects. A solution to a project may require several database interactions, thus having an inflationary effect on the 1993 figures. Second, a database vendor offered unlimited and free usage for the first quarter of fiscal year 1993. 
According to WisTAP officials, WisTAP increased its use of the service for its clients during this period. WisTAP uses a client satisfaction survey to measure the effectiveness of its services. Each quarter, WisTAP mails the survey to clients that received services during the previous quarter. The survey asks questions concerning the quality of the services, the perceived value of the information, and the likelihood of obtaining similar information elsewhere. The response rate for the fiscal year 1994 client satisfaction survey was 46 percent. Eighty-eight percent of the respondents rated the overall quality of the information provided as good to excellent. Sixty-four percent rated their ability to access the information without WisTAP as somewhat unlikely to extremely unlikely. Sixty-two percent rated the financial value of the information received at more than $100. The Wisconsin SBDC plans to offer its clients WisTAP services after federal funding ends in 1995; however, the level of service will probably be cut in half. To prepare for the end of federal funding for TAPP, the Wisconsin SBDC has been focusing on developing relationships with new and existing network partners. For example, WisTAP has developed relationships with the staff of several University of Wisconsin technical and engineering departments. SBDC officials hope that, as more network partners gain experience working with small businesses, technical information will be accessible independently of WisTAP. The Oregon SBDC participated in TAPP during fiscal years 1992 and 1993. Through a contract agreement with the Oregon Innovation Center (OIC), the SBDC offered small business clients both on-line and off-line services in the form of literature searches, patent searches, expert consultations, and document location. Because of the loss of matching state funds for fiscal year 1994, the Oregon SBDC dropped out of TAPP. The OIC, however, has continued to provide TAPP-like services in the absence of state and federal financial support. The current program is managed and operated by the OIC, which assists businesses in accessing technical information. The OIC continues to offer TAPP-like services to its own clients and clients referred to it by the SBDCs, government agencies, and industry associations. The OIC serves primarily small manufacturers and technology-oriented service companies. OIC services are not limited to Oregon businesses; however, the majority of OIC clients are located in Oregon. When part of TAPP, the OIC informed potential clients of its services through SBDC marketing efforts, including seminars, pamphlets, and media publications. Now that the OIC is no longer directly affiliated with the SBDC, all marketing efforts have been eliminated because of funding constraints. The OIC relies entirely on word-of-mouth to attract new clients. One information specialist at the OIC devotes three-fourths of their time to the program. Staff of the SBDCs, state economic development agencies, and industry associations also assist by informing clients of OIC services through their own counseling activities. Because the OIC no longer participates in TAPP, it receives fewer referrals from the SBDCs. However, the clients that contact the OIC are more likely to represent technology-oriented industries, according to OIC officials. The OIC provides a range of business services, including the development of marketing plans and information research. 
OIC clients have access to hundreds of on-line and off-line databases, including Dialog, Data-Star, CompuServe, Orbit, NASA, and the Federal Register. At the beginning of the program, OIC also provided access to Teltech. However, because of high costs and low demand for access to Teltech experts, the OIC did not renew Teltech’s contract in July 1993. The OIC serves its clients primarily through remote database searches. Upon receipt of a request, the information specialist conducts the search and sends the results to the client. OIC staff rarely meet face-to-face with the client. Nearly all services are provided via telephone, facsimile machine, or computer. According to an OIC information specialist, the OIC has also developed the ability to conduct real-time, screen-to-screen searching. Also, client access is offered through a menu-driven bulletin board system. The OIC received $325,000 in federal funding during the 2 years it was in the program. This included $200,000 in fiscal year 1992 and $125,000 in fiscal year 1993. The OIC also received state matching funds for each of these years, resulting in a total state and federal funding of $650,000 over the life of the program. The OIC has not received any state or federal funding since the end of fiscal year 1993. In the spring of 1996, the OIC will occupy a new facility to be constructed as a joint project with the Central Oregon Community College. This project will be funded by the OIC’s state economic development appropriation that was committed in 1992. The OIC currently relies on donations and client fees to operate. According to OIC officials, client fees averaged $114 per search during 1994. During the TAPP years, clients were charged only about $10 per search, although the total cost of the searches averaged $161. As shown in table IX.1, the OIC’s client base was dominated by manufacturing and service concerns in fiscal years 1993 and 1994. In 1994, service and manufacturing businesses accounted for 73 percent of the clients served overall. Because of increased client fees and the elimination of marketing outreach efforts, the number of clients served declined sharply—from 191 to 33—between 1993 and 1994. As shown in table IX.2, the OIC responded to a total of 99 information requests during fiscal year 1994—the first year in which the OIC did not participate in TAPP. This figure represents a decrease of 79 percent from fiscal year 1993. Twenty-three percent of these projects were of a technical nature. During fiscal year 1993, the OIC conducted three focus group sessions in various locations to determine the informational needs of small businesses. Questions were asked to determine what types of information were the most difficult for small businesses to obtain, what sources small businesses typically used to obtain information, and what improvements they would suggest to provide them with business information. A recurring response from the participants was that marketing information was a primary concern and difficult to obtain. The OIC used the focus group results to gain a better understanding of the information needs of businesses. The OIC plans to continue providing TAPP-like services on a cost-recovery basis as it has been since the end of fiscal year 1993. The OIC hopes to supplement its budget through corporate donations. MCI Telecommunications Corporation, for example, recently donated $10,000 to the OIC. OIC officials said that a self-sufficient program has some advantages. 
One of these is that because a client makes a larger investment, it is more serious about its request for assistance. Also, the OIC has been able to provide services beyond the small business community, which has both expanded services and generated more funds. John P. Hunt, Jr., Assistant Director; Robin M. Nazzaro, Assistant Director; Frankie L. Fulton, Evaluator-in-Charge; Paul W. Rhodes, Senior Evaluator; Kenneth A. Davis, Evaluator; Richard P. Cheston, Adviser
Pursuant to a legislative requirement, GAO reviewed the Pilot Technology Access Program (TAPP), focusing on the: (1) program's effectiveness and impact on improving small business productivity and innovation; and (2) experiences and the lessons learned by the TAPP centers during the pilot program. GAO found that: (1) Congress has decided not to fund TAPP beyond fiscal year (FY) 1995; (2) one TAPP center has operated independently on a reduced scale since FY 1993 and the remaining five centers plan to continue operations beyond FY 1995, but they are not sure of their organization, services, and funding; (3) the five centers serviced 1,840 businesses in FY 1994, of which 59 percent were manufacturers and 66 percent were businesses just getting started; (4) TAPP services included technical and nontechnical information, and technical, patent, and marketing assistance; (5) although the program's impact could not be determined, TAPP clients were generally satisfied with the centers' operations and services; (6) center officials were generally pleased with their programs' development and believed that certain individual projects produced favorable results; and (7) lessons learned from TAPP that should be considered in designing future programs include adding more specificity to program goals and objectives, determining whether a separate and distinct federal program is necessary, determining the organizational type best suited to manage such a program, and deciding program funding options.
You are an expert at summarizing long articles. Proceed to summarize the following text: Since 1796, the federal government has had a role in developing and funding surface transportation infrastructure such as roads and canals to promote the nation’s economic vitality and improve the quality of life for its citizens. In 1956, Congress substantially broadened the federal role in road construction by establishing the Highway Trust Fund (HTF), a dedicated source of federal revenue, to finance a national network of standardized highways, known as the Interstate Highway System. This system, financed and built in partnership with state and local government over 50 years, has become central to transportation in the United States. Currently, most federal surface transportation programs funded by the HTF span four major areas of federal investment: highway infrastructure, transit infrastructure and operations, highway safety, and motor carrier safety. Federal surface transportation funds are distributed either by a formula or on a discretionary basis through several individual grant programs. These grant programs are organized by mode and administered by four of the Department of Transportation’s (DOT) operating administrations—the Federal Highway Administration (FHWA), the Federal Transit Administration (FTA), the National Highway Traffic Safety Administration (NHTSA), and the Federal Motor Carrier Safety Administration (FMCSA). The modal administrations work in partnership with the states and other grant recipients to administer federal surface transportation programs. For example, the federal government currently provides financial assistance, policy direction, technical expertise, and some oversight, while state and local governments are ultimately responsible for executing transportation programs by matching and distributing federal funds and by planning, selecting, and supervising infrastructure projects and safety programs while complying with federal requirements. Appendix II provides further information on the current and historical operation of these federal surface transportation programs. Additionally, the federal government provides financial assistance for other surface transportation programs such as intercity passenger rail, which has received over $30 billion of federal support since its inception in 1971. However, this program is financed and operated separately from other surface transportation programs, and an in-depth discussion of federal intercity passenger rail assistance is not included in this report. Increases over the past 10 years in transportation spending at all levels of government have improved the physical condition of highways and transit facilities to some extent, but congestion has worsened and safety gains have leveled off. According to the most recent DOT data, between 1997 and 2004 total highway spending per year by federal, state, and local governments grew by 22.7 percent in constant dollars. During this time, DOT reported some overall improvements in physical condition for road systems and bridges. For example, the percentage of vehicle miles traveled per year on “good” pavement conditions increased from 39.4 percent to 44.2 percent, and the percentage of deficient bridges fell from 29.6 percent in 1998 to 26.7 percent in 2004. At the same time, incidents such as the Minneapolis bridge collapse in August 2007 indicate that significant challenges remain. Furthermore, despite increases in investment levels and some improvements in physical condition, operational performance has declined. 
For example, during the same period the average daily duration of travel in congested conditions increased from 6.2 hours to 6.6 hours, and the extent and severity of congestion across urbanized areas also grew. Transportation safety has improved considerably over the past 40 years, and although motor vehicle and large truck fatality rates have generally continued to fall modestly since the mid-1990s, the improvements yielding the greatest safety benefits (e.g., vehicle crashworthiness requirements and increases in safety belt use) have already occurred, making future progress more difficult. Furthermore, demand on transportation facilities nationwide has grown considerably since our transportation systems were built and is projected to increase in the coming decades as population, income levels, and economic activity continue to rise. According to the Transportation Research Board, an expected population growth of 100 million people could double the demand for passenger travel by 2040. Similarly, freight traffic is expected to climb by 92 percent from 2002 to 2035. These trends have the potential to substantially deepen the strain on the existing system, increasing congestion and decreasing the reliability of our transportation network—with potentially severe consequences ranging from the economic impact of wasted time and fuel to the environmental and health concerns associated with increased fuel emissions. Moreover, at the current fuel tax rate, revenues to support the HTF may not be sufficient to sustain it. Currently, trust fund receipts are growing and will continue to grow with increased traffic. However, the purchasing power of the dollar has declined with inflation, and the federal motor fuel tax rate has not increased since 1993. In addition, more fuel-efficient and alternative-fuel vehicles are using less taxable motor fuel per mile driven. Recent legislation has authorized spending that is expected to outstrip the growth in trust fund receipts. According to a recent estimate from the Congressional Budget Office (CBO), the remaining balance in the Highway Account of the Highway Trust Fund will be exhausted in 2009, and in fiscal year 2009 projected highway spending will exceed revenue by $4 to $5 billion. In January 2008, the National Surface Transportation Policy and Revenue Study Commission released a report with several recommendations to place the trust fund on a sustainable path, as well as reform the current structure of the nation’s surface transportation programs. The recommendations include significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current program more performance- and outcome-based and mode-neutral, among other things. To finance the additional investment, the Commission recommended raising the current federal fuel tax rate by 25 to 40 cents per gallon on an incremental basis equivalent to an increase of 5 to 8 cents per gallon per year for 5 years. It also said that states would have to raise revenue from a combination of higher fuel taxes and other sources. In addition to raising the fuel tax, the Commission recommended a number of other user-based fees such as tolling, congestion pricing, and freight fees to provide additional revenue for transportation improvements. Three members of the Commission disagreed with some of the findings and recommendations of the Commission report. 
For example, the minority view disagreed with the Commission’s recommendations on expanding the federal role and increasing the federal fuel tax, among others. Rather, the minority view proposed sustaining fuel taxes at the current levels, refocusing federal investment on two areas of national interest, and providing the states with greater regulatory flexibility, incentives, and the analytical tools to allow adoption of market-based reforms on their highway systems. We have ongoing work assessing the Commission’s proposal and other reauthorization proposals and will be issuing a report in 2008. Although most surface transportation funds are still directed to highway infrastructure, the federal role in surface transportation has broadened over the past 50 years to incorporate goals beyond highway construction, and federal surface transportation programs have grown in number and complexity. The resulting conglomeration of program structures reflects a variety of federal approaches for setting priorities, distributing federal funds, and sharing oversight responsibility with state and local partners for surface transportation programs. The HTF was established in 1956 to provide federal funding for Interstate highway construction and other infrastructure improvements based on the “user-pay principle”— that is, users of transportation systems should pay for the systems’ construction through highway user fees such as taxes on motor fuels, tires, and trucks. However, since 1956, the federal role in surface transportation has expanded beyond funding Interstate construction and highway infrastructure to include grant programs that address other transportation, societal, and environmental goals. For example, although most HTF expenditures continue to support highway infrastructure improvements (see fig. 1), Congress established new federal grants for highway safety and transit during the 1960s and added a motor carrier safety grant program during the 1980s. Furthermore, Congress has since expanded the initial basic grant programs in each of these areas to incorporate a variety of different goals. For example, the highway program has expanded to include additional programs to fund air quality improvements, Interstate maintenance, and safety-related construction improvements (see fig. 2). Federal transit assistance expanded from a single grant program that funded capital projects to multiple programs that provide general capital and operating assistance for urban and rural areas, as well as numerous specialized grants with goals ranging from supporting transit service for the elderly, persons with disabilities, and low-income workers to promoting the use of alternative fuels (see fig. 3). Federal safety assistance has also expanded from funding general state highway and motor carrier safety programs and enforcement activities to additionally funding many specialized grants to address specific issues. For example, federal highway safety assistance currently includes several grant programs to address specific accident factors (e.g., alcohol-impaired driving) and safety data gaps (see fig. 4). Similarly, the number of federal motor carrier assistance programs has increased to include several grants for improving data collection, supporting commercial driver’s license programs and funding border enforcement activities (see fig. 5). 
Consequently, federal funds currently support a wide variety of goals and modes beyond the initial federal focus on highway infrastructure, ranging from broad support for transit in urban areas, to targeted grants to increase seat-belt usage. Furthermore, Congress has also expanded the scope of federal safety goals to include specific legislative changes at the state level. For example, in accepting certain federal-aid highway infrastructure funds, states must enact certain laws to improve highway safety or face penalties in the form of either withholdings or transfers in their federal grants. Over the past 30 years, penalty or incentive provisions have been used to encourage states to enact laws that establish a minimum drinking age of 21 years, a maximum blood alcohol level of 0.08 to determine impaired driving ability, and mandatory seat belt usage, among others (see fig. 4), with transfer or withholding penalties as high as 10 percent of a state’s designated highway infrastructure funds. While most states have chosen to adopt laws that comply with many of these provisions, some remain subject to certain penalties. For example, as of January 2008, 11 states are penalized for not enacting an open container law and 11 are penalized for not enacting a repeat offender law. As federal goals have broadened, Congress has added new federal procedural requirements for infrastructure projects and programs and agencies have issued more complex rules to address these additional federal goals. For example, Congress established cooperative urban transportation planning as a matter of national interest and passed legislation in 1962 requiring all construction projects to be part of a continuing, comprehensive, and cooperative planning process between state and local governments. In another example, grant recipients may be required to conduct environmental assessments for many federally funded transportation projects to comply with the federal environmental goals established by the National Environmental Policy Act of 1969 (NEPA). Other federal requirements may include compliance with the Americans with Disabilities Act, nondiscrimination clauses in the Civil Rights Act of 1964, labor standards mandated by the Davis-Bacon Act, and Buy America procurement provisions, among others. Although behavior-oriented safety programs and activities are generally not subject to construction-related requirements, Congress has required that agencies address additional federal goals in safety-related rulemaking processes. For example, to address national environmental objectives, Congress expanded NHTSA’s regulatory scope in highway safety to include establishing regulations for corporate average fuel economy standards, in addition to issuing rules in areas such as tire-safety standards and occupant-protection devices (e.g., seat belts). Similarly, to address other areas of national concern, Congress has broadened FMCSA’s regulatory authority in motor carrier safety to include household goods movement, medical requirements for motor carrier operators, and greater oversight of border and international safety. 
Furthermore, when establishing federal standards in these areas, regulatory agencies such as NHTSA and FMCSA may be subject to increasingly rigorous requirements for analysis and justification associated with a wide range of federal legislation and executive orders including NEPA, Executive Order 12866 requiring cost-benefit analysis for proposed rules, Executive Order 13211 requiring consideration of the effects of government regulation on energy, and the Unfunded Mandates Reform Act of 1995, among others. Program expansion over the past 50 years has created a variety of grant structures and established different federal approaches for setting priorities and distributing federal funds across surface transportation programs. These approaches, which range from formula grants to dedicated spending provisions, give state and local governments varying degrees of discretion in allocating federal funds. As in the past, most surface transportation programs are jointly administered by the federal government in partnership with state or local governments, but in recent years the federal government has increasingly delegated oversight responsibility to state and local governments. Federal approaches for setting priorities and distributing funds currently range from giving state and local governments broad discretion in allocating highway infrastructure funds to directly targeting specific federal goals through the use of incentive grants and penalty provisions in safety programs. In 1956, federal surface transportation funds were distributed to the states through four formula grant programs that provided federal construction aid for certain eligible highway categories (e.g., Interstate, primary, and secondary highways and urban extensions). The states, in turn, matched and distributed funds at their discretion, within each program’s eligibility requirements. Within the highway program, this federal-state partnership has changed in response to considerable increases in state and local authority and flexibility since 1956. Largely because of revisions to federal highway programs in the 1990s, state and local governments currently have greater discretion to allocate the majority of their federal highway funds according to state and local priorities. For example, core highway programs such as the Surface Transportation Program and the National Highway System program have broader goals and project eligibility requirements than earlier highway infrastructure grant programs. Although funds continue to be distributed by formula to the states for individual programs based on measures of highway use or the extent of a state’s highway network, or other factors, as figure 6 demonstrates, six core highway programs permit the states to transfer up to 50 percent of their apportioned funds, with certain restrictions, to other eligible highway programs. Furthermore, although the process for calculating the distributions is complex for some programs, the end result of most highway program formulas is heavily influenced by minimum apportionment and “equity” requirements. For fiscal year 2008, each state’s share of formula funds will be at least 92 percent of its relative revenue contributions to the Highway Account of the Highway Trust Fund. According to FHWA estimates, the equity requirements will provide approximately $9 billion in highway funds to the states in addition to the amount distributed by formula through the individual grant programs. 
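The 92-percent guarantee just described can be illustrated with a simplified sketch. The state labels, dollar amounts, and revenue shares below are hypothetical, and the calculation is deliberately simplified; the actual equity provisions involve additional floors and iterative adjustments that are not modeled here.

```python
def equity_topups(formula_funds, contribution_shares, guarantee=0.92):
    """Simplified sketch of a relative rate-of-return guarantee.

    formula_funds: dollars apportioned to each state by the individual
        program formulas (here in $ millions).
    contribution_shares: each state's share (0-1) of Highway Account
        revenue contributions.

    For illustration only: the actual equity-bonus calculation is iterative
    and includes other floors and guarantees; here each state is simply
    topped up so its share of the original formula total is at least
    `guarantee` times its contribution share.
    """
    total = sum(formula_funds.values())
    return {
        state: max(0.0, round(guarantee * contribution_shares[state] * total - funds, 2))
        for state, funds in formula_funds.items()
    }


# Hypothetical three-state example; shares and dollar figures are invented.
formula = {"A": 500.0, "B": 300.0, "C": 200.0}   # $ millions from program formulas
shares = {"A": 0.60, "B": 0.25, "C": 0.15}       # shares of Highway Account revenue
print(equity_topups(formula, shares))
# State A contributed 60 percent of revenue but drew only 50 percent of the
# formula funds, so it is topped up by 0.92 * 0.60 * 1,000 - 500 = $52 million.
```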
Over $2 billion of these additional funds will have the same broad eligibility requirements and transfer provisions as the Surface Transportation Program. Moreover, flexible funding provisions within highway and transit programs allow certain infrastructure funds to be used interchangeably for highway or transit projects. Major transit infrastructure grants currently range from broad formula grants that provide capital and operating assistance, such as the Block Grants Program (Urbanized Area Formula Grants), to targeted discretionary grants for new transit systems, such as New Starts and Small Starts, that require applicants to compete for funding based on statutorily defined criteria. For example, projects must compete for New Starts funds on the basis of cost-effectiveness, potential mobility improvements, environmental benefits, and economic development effects, among other factors. Additionally, smaller formula grants direct funds to general goals such as supporting transit services for special populations like elderly, disabled, and low-income persons. Unlike most surface transportation funding, which is distributed through the states, most transit assistance is distributed directly to local agencies, since transit assistance was originally focused on urban areas. Current major highway and motor carrier safety grants include formula grants that provide general assistance for state highway safety programs and for improving motor carrier safety and enforcement activities, such as Highway Safety Programs (402) and Motor Carrier Safety Assistance Program (MCSAP) Grants. They also include targeted discretionary grants such as Occupant Protection Incentive Grants and Border Enforcement Grants. Additionally, they include penalty provisions, such as Open Container Requirements (154) and Minimum Penalties for Repeat Offenders for Driving While Intoxicated or Driving Under the Influence (164), designed to address specific safety areas of national interest. Unlike formula-based funding, some of the discretionary grants, such as the Safety Belt Performance Grants, directly promote national priorities by providing financial incentives for meeting specific performance or safety activity criteria (e.g., enforcement, outreach). Additionally, penalty provisions such as those associated with Open Container laws and MCSAP Grants promote federal priorities by either transferring or withholding state highway infrastructure funds from states that do not comply with certain federal provisions. For example, in 2007, penalty provisions transferred over $217 million of federal highway infrastructure assistance to highway safety programs in the 19 states and Puerto Rico that were penalized for failure to enact either open container or repeat offender laws. Finally, Congress provides congressionally directed spending for surface transportation through specific provisions in legislation or committee reports. While estimates of the precise number and value of these congressional directives vary, observers agree that they have grown dramatically. For instance, the Transportation Research Board found that congressional directives have grown from 11 projects in the 1982 reauthorization act to over 5,000 projects in the 2005 reauthorization act. Most federal surface transportation programs continue to be jointly administered by the federal government and state or local governments, but the federal government has increasingly delegated oversight responsibility to state and local governments. 
This trend is most pronounced for highway infrastructure programs; however, it has also occurred in federal transit and safety programs. For example, when Interstate construction began, the federal government fully oversaw all federally funded construction projects, including approving design plans, specifications, and estimates, and periodically inspecting construction progress. In 1973, Congress authorized DOT to delegate oversight responsibility to states for compliance with certain federal requirements for non-Interstate projects. During the 1990s, Congress further expanded this authority to allow states and FHWA to cooperatively determine the appropriate level of oversight for federally funded projects, including some Interstate projects. Currently, based on a stewardship agreement with each state, FHWA exercises full oversight over a limited number of Federal-aid Highway projects, constituting a relatively limited amount of highway mileage. States are required to oversee all Federal-aid Highway projects that are not on the National Highway System, which constitutes a large majority of the road mileage receiving federal funds, and states oversee design and construction phases of other projects based on an agreement between FHWA and the state. Full federal oversight for transit projects is limited to major capital projects that cost over $100 million, and grant recipients are allowed to self-certify their compliance with certain federal laws and regulations for other projects. Although state and local grant recipients have considerable oversight authority, FHWA and FTA both periodically review the recipients’ program management processes to ensure compliance with federal laws and regulations. State and local government responsibilities for overseeing transportation planning processes have also grown in recent decades. Although such responsibilities predate federal transportation assistance programs, since 1962, the federal government has made compliance with numerous planning and project selection requirements a condition for receiving federal assistance. During the 1970s, federal requirements grew in range and complexity and, in some cases, specified how state and local governments should conduct planning activities. However, since the 1980s, state and local governments have had greater flexibility to fulfill federal planning requirements. For example, in 1983, urban transportation planning regulations were revised to reduce the level of direct federal involvement in state and local planning processes, and state and local agencies were allowed to self-certify their compliance with federal planning requirements. Similarly, although the federal government identified specific environmental and economic factors to be considered in the planning process as part of the surface transportation program legislation enacted in 1991 and subsequently amended in 1998, these requirements give state and local governments considerable discretion in selecting analytical tools to evaluate projects and make investment decisions based on their communities’ needs and priorities. The states have also been given greater oversight responsibility for safety programs as federal agencies have shifted from direct program oversight to performance-based oversight of state safety goals. For example, since 1998, NHTSA has not approved state highway safety plans or projects, but instead focuses on a state’s progress in achieving the goals it set for itself in its annual safety performance plan. 
Under this arrangement, a state must provide an annual report that outlines the state’s progress towards meeting its goals and performance measures and the contribution of funded projects toward meeting its goals. If a state does not meet its established safety goals, NHTSA and the state work cooperatively to create a safety improvement plan. FMCSA uses a similar approach to oversee state motor carrier safety activities. Starting in 1997, the states were required to identify motor carrier safety problems based on safety data analysis, target their grant activities to address these issues, and report on their progress toward the national goal of reducing truck crashes, injuries, and fatalities. Much as FHWA and FTA do for their grant programs, both NHTSA and FMCSA periodically review state management processes for compliance with federal laws and regulations. Many federal surface transportation programs do not effectively address identified transportation challenges such as growing congestion. While program goals are numerous, they are sometimes conflicting and often unclear—which contributes to a corresponding lack of clarity in the federal role. The largest highway, transit, and safety grant programs distribute funds through formulas that are typically not linked to performance and, in many cases, have only an indirect relationship to needs. Mechanisms generally do not link programs to the federal objectives they are intended to address, in part due to the wide discretion granted to states and localities in using most federal funds. Furthermore, surface transportation programs often do not employ the best tools and approaches available, such as rigorous economic analysis for project selection and a mode-neutral approach to planning and investment. The federal role in surface transportation is unclear, in part because program goals are often unclear. In some cases, stated goals may be contradictory or may come into direct conflict. For example, it may not be possible to improve air quality while spurring economic development with new highway construction. With the proliferation of goals and programs discussed in the previous section of this report, the federal role varies from funding improvements in specific types of infrastructure (such as the National Highway System) to aiming at specific outcomes (such as reducing highway fatalities). At a recent expert panel on transportation policy convened by the Comptroller General, experts cited the lack of focus of the federal role in transportation as a problem, and some stakeholders have also made similar criticisms. In some policy areas, the federal role is limited despite consensus on goals. For example, freight movement is widely viewed as a top priority, yet no clear federal role has been established in freight policy. DOT’s draft Framework for a National Freight Policy, issued in 2006, is a step toward clarifying a federal role and strategy, but it lacks specific targets and strategies and criteria for achieving them. Current approaches to planning and financing transportation infrastructure do not effectively address freight transportation issues—few programs are directly aimed at freight movement, and funding is based on individual modes, but freight moves across many modes. 
Similarly, despite statutes and regulations that identify an intermodal approach that provides connections across modes as a goal of federal transportation policy, there is currently only one federal program specifically designed for intermodal infrastructure, and all the funds available for the program are congressionally designated for specific projects. The federal government also lacks a defined role in or mechanism for aiding projects that span multiple jurisdictions. The discretion and differing priorities of individual states and localities can make it difficult to coordinate large projects that involve more than one state or local sponsor. There have been some successful multijurisdictional transportation initiatives, such as the FAST Corridor across several metropolitan areas in Washington State, but a lack of established political or administrative mechanisms for cooperation, combined with the large degree of state and local autonomy in transportation decision-making, is an obstacle to such “megaprojects.” At a hearing of the National Surface Transportation Policy and Revenue Study Commission in New York City, an expert on the regional economy cited the Tappan Zee Bridge in New York State as an example of the obstacles such projects can face. Neighboring Connecticut wants the bridge’s capacity expanded, but there is currently no established mechanism that allows Connecticut to help move the project forward. In testimony for the Commission, stakeholders such as the U.S. Chamber of Commerce and the American Association of Port Authorities cited fostering interjurisdictional coordination as a key federal role, and AASHTO has also highlighted the need for improved multijurisdictional coordination mechanisms in its reports on the future of federal transportation policy. At times, DOT has undertaken new activities without assessing the rationale for a federal role. For example, the agency made short sea shipping of freight a priority, but did not first examine the effect of federal involvement on the industry or identify obstacles to success and potential mitigating actions. Without a consistent approach to identifying the rationale for a federal role, DOT is limited in its ability to evaluate potential investments and determine whether short sea shipping—or another available measure—is the most effective means of enhancing freight mobility. Most federal surface transportation programs lack links between funding and performance. Federal funding for transportation has increased significantly in recent years, but because spending is not explicitly linked to performance, it is difficult to assess the impact of these increases on the achievement of key goals. During this period of funding increases, the physical condition of the highway system has improved, but the system’s overall performance has decreased, according to available measures of congestion. DOT has established goals under the Government Performance and Results Act (GPRA) of 1993 that set specific benchmarks for performance outcomes such as congestion and highway fatalities. However, these performance measures are not well-reflected in individual grant programs because disbursements are seldom linked to outcomes— most highway funds are apportioned without relationship to the performance of the recipients. The largest transit and safety programs also lack links to performance. States and localities receive the same disbursement regardless of their performance at, for example, reducing congestion or managing project costs. 
As a result, the incentive to improve return on investment—the public benefits gained from public resources expended—is reduced. Safety and some transit grants are more directly linked to goals than highway infrastructure programs, and several incorporate performance measures. Whereas highway infrastructure programs tend to focus on improving specific types of facilities, such as bridges, highway safety programs and, to a lesser extent, transit programs are more often designed to achieve specific objectives. For instance, the goal of the Job Access and Reverse Commute transit program is to make jobs more accessible for welfare recipients and other low-income individuals. Likewise, under the Section 402 State and Community Highway Safety Grant Program, funds must be used to further the goal of reducing highway fatalities. To some extent, transit and safety programs also have a more direct link to needs because their formulas do not incorporate equity adjustments that seek to return funds to their source. Furthermore, several highway safety and motor carrier safety grants make use of performance measures and incentives. For example, under the Motor Carrier Safety Assistance Program, some funds are set aside for incentive grants that are awarded using five state performance indicators that include, among others, large truck-involved vehicle fatality rates, data sharing, and commercial driver’s license verification. Most highway transportation programs lack links to need as well as performance. As discussed above, most grant funds are instead distributed according to set formulas that typically have an indirect relation to need. As a result, grant disbursements for these programs not only fail to reflect performance, but they may also not reflect need. Some of the formula criteria, such as population, are indirect measures of need, but the equity bonus and minimum apportionment criteria are not related to need, and exert a strong influence on formula outcomes. Certain programs, such as the Highway Bridge Replacement and Rehabilitation Program, which bases disbursements on the cost of needed repairs, use more direct measures. In general, however, the link between needs and federal highway funding is weak. Besides lacking links between funding and performance, federal surface transportation programs generally lack mechanisms to tie state actions to program goals. DOT does not have direct control over the vast majority of activities that it funds; instead, states and localities have wide discretion in selecting projects to fund with federal grants. Federal law calls the federal-aid highway program a “federally-assisted state program,” and specifies that grant funds “shall in no way infringe on the sovereign rights of the States to determine which projects shall be federally financed.” In addition, states have broad flexibility in using more than half of federal highway funds as a result of a combination of programs with wide eligibility (such as the Surface Transportation Program) and the ability to transfer some funds between highway programs. Furthermore, “flex funding” provisions allow transfers between eligible highway and transit programs; between 1992 and 2006, states used this authority to transfer $12 billion from highway to transit programs. While these provisions give states the discretion to pursue their own priorities, the provisions may impede the targeting of federal funds toward specific national objectives. 
Federal rules for transferring funds between highway programs are so flexible that the distinctions between individual programs have little meaning. To some extent, the Federal-aid Highway program functions as a cash-transfer, general-purpose grant program, not as a tool for pursuing a cohesive national transportation policy. Transit and safety grants, in contrast, are more linked to goals because they do not allow transfers among programs to the same degree. Safety grants are linked to goals because states must use data on safety measures to create performance plans that structure their safety investments, yet states are still able to set their own goals, develop their own programs, and select their own projects. Performance measures are also used in allocating funding in several highway safety grant programs, providing an even more direct link to goals. In some areas, federal surface transportation programs do not use the best tools and approaches available. Rigorous economic analysis, applied in benefit-cost studies, is a key tool for targeting investments, but does not drive transportation decision-making. While such analysis is sometimes used, we have previously reported that it is generally only a small factor in a given investment decision. Furthermore, statutory requirements of the planning and project selection processes—such as public participation procedures or NEPA requirements that may be difficult to translate into economic terms—can interfere with the use of benefit-cost analysis. Decision makers often also see other factors as more important. In a survey of state DOTs that we conducted in 2004 as part of that same study, 34 said that political support and public opinion are factors of great or very great importance in the decision to recommend a highway project, while 8 said that the ratio of benefits to costs was a factor of great or very great importance. Economic analysis was more common for transit projects, largely because of the requirements of the competitive New Starts grant program, which uses a cost-effectiveness measure. However, the New Starts program constitutes only 18 percent of transit funding authorizations under the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). There are also few formal evaluations of the outcomes of federally funded projects. As a result, policymakers miss a chance to learn more about the efficacy of different approaches and projects. Such evaluations are especially important because highway and transit projects often have higher costs and lower usage than estimated beforehand. New Starts is also the only transportation grant program that requires before-and-after studies of outcomes. The modal basis of transportation funding also limits opportunities to invest scarce resources as efficiently as possible. Instead of being linked to desired outcomes, such as mobility improvements, funds are “stovepiped” by transportation mode. Although, as discussed above, states and localities have great flexibility in how they use their funds, this modal structure can still discourage investments based on an intermodal approach and cross-modal comparisons. Reflecting the separate federal transportation funding programs, many state and local DOTs are organized into several operating administrations with responsibilities for particular modes. 
Because different operating administrations oversee and manage separate funding programs, these programs often have differing timelines, criteria, and matching fund requirements, which can make it difficult for public planners to pursue the goal—stated in law and DOT policy—of an intermodal approach to transportation needs. For example, a recent project at the Port of Tacoma (Washington) involved widening a road and relocating rail tracks to improve freight movement on both modes, but it was delayed because highway funding was available while rail funding was not. Moreover, despite the wide funding flexibility within the highway program and between the highway and transit programs, many funds are dedicated on a modal basis, and state and local decision makers may choose projects based on the mode eligible for federal funding. Experts on the Comptroller General’s recent transportation policy panel cited modal stovepiping as a problem with the current federal structure, saying that it inhibits consideration of a range of transportation options. State officials have also criticized stovepiping, both in AASHTO policy statements and individually. For instance, a state transportation official told a hearing of the National Surface Transportation Policy and Revenue Study Commission that modal flexibility should be increased to allow states to select the best project to address a given goal. The federal government is not equipped to implement a performance-based approach to transportation funding in many areas because it lacks comprehensive data. Data on outcomes—ideally covering all projects and parts of the national transportation network, as well as all modes—would be needed in order to consider performance in funding decisions. Presently, data on key performance and outcome indicators are often absent or flawed. For example, DOT does not have a central source of data on congestion—the available data are stovepiped by mode—and some congestion information for freight rail is inaccessible because it is proprietary and controlled by railroad companies. Likewise, FTA does not possess reliable and complete data on transit safety. A partial exception is highway safety, for which NHTSA and FMCSA have data on a variety of outcomes, such as traffic fatalities. NHTSA employs this information to help states set priorities, FMCSA uses it to target enforcement activities, and both agencies use it to monitor states’ progress toward achieving their goals and to award incentive grants. However, the safety data that states collect are not always timely, complete, and consistent. For example, a review of selected states found that some of the information in their databases was several years old. Tools to make better use of existing infrastructure have not been deployed to their full potential, in part because their implementation is inhibited by the current structure of federal programs. Research has shown that a variety of congestion management tools, such as Intelligent Transportation Systems (ITS) and congestion pricing, are effective ways of increasing or better utilizing capacity. Although such tools are increasingly employed by states and localities, their adoption has not been as extensive as it could be given their potential to decrease congestion. One factor contributing to this slow implementation is the lack of a link between funding and performance in current federal programs—projects with a lower return on investment may be funded instead of congestion management tools such as ITS.
Furthermore, DOT’s measures of effects fall short of capturing the impact of ITS on congestion, making it more difficult for decision makers to assess the relative worth of alternative solutions. State autonomy also contributes to the slowed rollout of these tools. Even though federal funding is available to encourage investment in ITS, states often opt for investments in more visible projects that meet public demands, such as capacity expansion. Federal investment in transportation may lead to the substitution of federal spending for state and local spending. One strategy that Congress has used to meet the goals of the Federal-aid Highway program has been to increase federal investment. However, not all of the increased federal investment has increased the total investment in highways, in part because Congress cannot prevent states and localities from using some of their own highway funds for other purposes when they receive additional federal funds. We reported, on the basis of our own modeling and a review of other empirical studies, that increased federal highway grants influence states and localities to substitute federal funds for funds they otherwise would have spent on highways. Specifically, we studied the period from 1983 through 2000 and our model suggests that over the entire time period, states substituted about 50 cents of every dollar increase in federal highway grants for funds they would have spent on highways from their own resources. For the latter part of that period, 1992 through 2000, we estimated a substitution rate of about 60 cents for every dollar increase in federal aid. These results were consistent with other study findings and indicate that substitution is reducing the impact of federal investment. Federal grant programs have generally not employed the best tools and approaches to reduce this potential for substitution—such as maintenance of effort requirements and higher nonfederal matching requirements, which are discussed in the next section of this report. One reason for the high rate of substitution for the Federal-aid Highway program is that states typically spend more than the amount required to meet federal matching requirements—generally 20 percent. Thus, states can reduce their own highway spending and still obtain increased federal funds. Finally, congressionally directed spending may not be an ideal means of allocating federal grant funds. Some argue that Members of Congress are good judges of investment needs in their districts, and some congressional directives are requested by states. However, officials from FHWA and FTA have stated that congressional directives sometimes displace their priority transportation projects by providing funds for projects that would not have been chosen in a competitive selection process. For example, FHWA officials stated that some congressional directives listed in the Projects of National and Regional Significance program would not have qualified for funding in a merit-based selection process. Officials from three state departments of transportation also noted that inflexibilities in the use of congressionally directed funds limit the states’ ability to implement projects and efficiently use transportation funds by, for example, providing funding for projects that are not yet ready for implementation or providing insufficient funds to complete particular projects.
However, an official from one state department of transportation noted that although congressional directives can create administrative challenges, they often represent funding that the state may not have otherwise received. The solvency of the federal surface transportation program is at risk because expenditures now exceed revenues for the Highway Trust Fund, and projections indicate that the balance of the Highway Trust Fund will soon be exhausted. According to the Congressional Budget Office, the Highway Account will face a shortfall in 2009, the Transit Account in 2012. The rate of expenditures has affected the trust fund’s fiscal sustainability. As a result of the Transportation Equity Act for the 21st Century (TEA-21), Highway Trust Fund spending rose 40 percent from 1999 to 2003 and averaged $36.3 billion in contract authority per year, and the upward trend in expenditures continued under SAFETEA-LU, which provided an average of $57.2 billion in contract authority per year. Congress also established a revenue-aligned budget authority (RABA) mechanism in TEA-21 to help assure that the Highway Trust Fund would be used to fund projects instead of accumulating large balances. When revenues into the Highway Trust Fund are higher than forecast, RABA ensures that additional funds are apportioned to the states. The RABA provisions were written so that the adjustments could work in either direction—going up when the trust fund had greater revenues than projected and down when revenues did not meet projected levels. However, when the possibility of a downward adjustment occurred in fiscal year 2003 as a result of lower-than-projected trust fund revenues, Congress chose to maintain spending at the fiscal year 2002 level. If the RABA approach is kept in the future, allowing downward adjustments could help with the overall sustainability of the fund. While expenditures from the trust fund have grown, revenues into the fund have not kept pace. The current 18.4 cents per gallon fuel tax has been in place since 1993, and the buying power of the fixed cents-per-gallon amount has since been eroded by inflation. The reallocation to the Highway Trust Fund of 4.3 cents of federal fuel tax previously dedicated to deficit reduction provided an influx of funds beginning in 1997. However, this influx has been insufficient to sustain current funding levels. In addition, if changes are not made in policy to compensate for both the increased use of alternative fuels that are not currently taxed and increased fuel economy, fuel tax revenues, which still account for the majority of federal transportation financing, may further erode in the future. A sound reexamination can productively begin with the identification of and debate on underlying principles. Through our prior work on reexamining the base of government, our analysis of existing programs and other prior reports, we identified a number of principles that could help drive reexamination of federal surface transportation programs and an assessment of options for restructuring the federal surface transportation program. The appropriateness of these options will depend on the underlying federal interest and the relative potential of the options to develop sustainable strategies addressing complex national transportation challenges. These principles are as follows: Create well-defined goals based on identified areas of federal interest. Establish and clearly define the federal role in achieving each goal.
Incorporate performance and accountability for results into funding decisions. Employ best tools and approaches to emphasize return on investment. Ensure fiscal sustainability. Determining the federal interest involves examining the relevance and relative priority of existing programs in light of 21st century challenges and identifying emerging areas of national importance. For instance, increases in passenger and freight travel have led to growing congestion, and this strain on the transportation system is expected to grow with population increases, technology changes, and the globalization of the economy. Furthermore, experts have suggested that federal transportation policy should recognize emerging national and global imperatives such as reducing the nation’s dependence on foreign fuel sources and minimizing the impact of the transportation system on global climate change. Given these and other challenges, it is important to assess the continued relevance of established federal programs and to determine whether the current areas of federal involvement are still areas of national interest. Key to such an assessment is how narrowly or broadly the federal interest in the nation’s transportation system should be defined and whether the federal interest is greater in certain areas of national priority: Should federal spending and programs be more focused on specific national interests such as interstate freight mobility or on broad corridor development? Is there a federal interest in local issues such as urban congestion? If so, are there more distinct ways in which federal transportation spending and programs could address local issues that would enhance inherent local incentives and choices? To what extent should federal transportation policy address social concerns such as mobility for disadvantaged persons and transportation safety? If environmental stewardship is part of the federal interest, how might federal transportation policy better integrate national long-term goals related to energy independence and climate change? The proliferation of federal surface transportation programs has, over time, resulted in an amalgam of policy interests that may not accurately reflect current national concerns and priorities. Although policymakers have attempted to clarify federal transportation policy in the past and an FHWA Task Force has called for focusing federal involvement on activities that clearly promote national objectives, current policy statements continue to cover a wide spectrum of broadly defined federal interests ranging from promoting global competitiveness to improving citizens’ quality of life. While these federal programs, activities, and funding flows reflect the interests of various constituencies, they are not as a whole aligned with a strategic, coherent, and well-defined national interest. In short, the overarching federal interest has blurred. Once the federal interest has been refocused and more clearly defined, policymakers will have a foundation for allocating scarce federal resources according to the level of national interest. With the federal interest in surface transportation clearly defined, policymakers can clarify the goals for federal involvement. The more specific, measurable, achievable, and outcome-based the goals are, the better the foundation will be for allocating resources and optimizing results. 
Even though some federal transportation safety programs are linked to measurable outcome-based goals, such as achieving a specific rate of safety-belt use to reduce traffic fatalities, the formula funding for general improvements to transit facilities or highway systems is generally provided without reference to achieving specific outcomes for federal involvement. For example, the guidelines for state and local recipients’ use of the largest highway and transit formula grant funds, such as the Surface Transportation Program or Block Grant Program (Urbanized Area Formula Grants), are based on broad project eligibility criteria. These criteria involve the type of highway or type of work (e.g., transit capital investment versus operating assistance) rather than the achievement of clearly defined and measurable outcomes. Furthermore, although DOT has already established some outcome measures as part of its strategic planning process, its agencywide goals and outcomes cover a vast array of activities and are generally not directly linked to project selection or funding decisions for most highway funding and the largest transit and safety programs. Without specific and measurable outcomes for federal involvement, policymakers will have difficulty determining whether certain programs are achieving desired results. After identifying the federal interest and federal goals, policymakers can clearly define the federal government’s role in working toward each goal and define that role in relation to the roles of other levels of government and other stakeholders. This would involve an examination of state and local government roles, as well as of the federal role. Following such an examination, the current relationship between the federal and other levels of government could change. For example, in the federal-aid highway program, the current “partnership” between the federal government and the states is based on an explicit recognition of state sovereignty in the conduct of the program, and the states have considerable flexibility in moving funds within this program. By contrast, highway safety programs operate under a grantor-grantee relationship, and for transit the grantees are largely local units of government, although the role of states has grown. An examination of these programs could change these relationships, since different federal goals may require different degrees and types of federal involvement. Where the federal interest is greatest, the federal government may play a more direct role in setting priorities and allocating resources, as well as fund a higher share of program costs. Conversely, where the federal interest is less evident, state and local governments could assume more responsibility. Functions that other entities may perform better than the federal government could be turned back to the states or other levels of government. Given the already substantial roles states and localities play in the construction and operation of transportation facilities, there may be areas that no longer call for federal involvement and for which funding could be reassessed. Notably, we have reported that the modal focus of federal programs can distort the investment and decision-making of other levels of government, and that a streamlining of federal goals and priorities could better align programs with desired outcomes. Turning functions back to the states has many other implications. For example, states would likely have to raise additional revenues to support the increased responsibilities.
While states might be freer to allocate funds internally without modally stovepiped federal funding categories, some states could face legal funding restrictions. For example, some states prohibit the use of highway funds for transit purposes, so if a transit program were returned to the states, alternative taxes would have to be raised or the laws would have to be changed. Until a program or function is actually turned back to the states or localities, it is uncertain how these other levels of government will perform. For example, if highway safety programs were turned back to the states, it is not known whether states would continue to target the same issues that they currently choose to address under federally-funded programs or would emphasize different issues. Likewise, if a program that targets a specific area such as urban transit systems is turned back to the states, there is no assurance that the states would continue to fund this area. Turning programs back to the states would have far-reaching consequences, as discussed in appendix III. Observers have argued that certain issues, such as urban mobility, are essentially metropolitan in character and therefore should be addressed by metropolitan regions, rather than by states or cities. In addition, regional organizations can promote collaborative decision-making and advance regional coordination by creating a forum for stakeholders, addressing problems of mutual concern, and engaging in information and resource sharing. Metropolitan Planning Organizations (MPOs) currently perform this function for surface transportation. While MPOs do receive some federal funding for operations, they are not regional governments and generally do not execute projects. Addressing these regional problems remains difficult in the absence of more powerful regional governmental bodies. The development of more powerful regional entities could create new opportunities to address regional transportation problems. Once federal goals and the federal role in surface transportation have been clarified, significant opportunities exist to incorporate performance and accountability mechanisms into federal programs. Tracking specific outcomes that are clearly linked to program goals could provide a strong foundation for holding grant recipients responsible for achieving federal objectives and measuring overall program performance. In particular, substituting specific performance measures for the federal procedural requirements that have increased over the past 50 years could help to shift federal involvement in transportation from the current process-oriented approach to a more outcome-oriented approach. Furthermore, shifting from process-oriented structures such as mode-based grant programs to performance-based programs could improve project selection by removing barriers to funding intermodal projects and giving grantees greater flexibility to select projects based on the project’s ability to achieve results. Directly linking outcome-based goals to programs based on clearly defined federal interests would also help to clarify federal surface transportation policy and create a foundation for a transparent and results-based relationship between the federal government and other transportation stakeholders. Accountability mechanisms can be incorporated into grant structures in a variety of ways.
For example, grant guidelines can establish uniform outcome measures for evaluating grantees’ progress toward specific goals, and grant disbursements can depend in part on the grantees’ performance instead of set formulas. Thus, if reducing congestion were an established federal goal, outcome measures for congestion such as travel time reliability could be incorporated into infrastructure grants to hold states and localities responsible for meeting specific performance targets. Similarly, if increasing freight movement were an established federal goal, performance targets for freight throughput and travel time in key corridors could be built into grant programs. Performance targets could either be determined at the national level or, where appropriate, in partnership with grantees—much as DOT has established state performance goals for highway safety and motor carrier safety assistance. Incentive grants or penalty provisions in transportation grants can also create clear links between performance and funding and help hold grantees accountable for achieving desired results. For example, the current highway and motor carrier safety incentive grants and penalty provisions can be used to increase or withhold federal grant funds based on the policy measures that states enact and the safety outcomes they achieve. Depending on the federal interest and established goals, these types of provisions could also be used in federal infrastructure grants. In addition, a competitive selection process can help hold recipients accountable for results. For example, DOT’s competitive selection process for the New Starts and Small Starts transit programs requires projects to meet a set of established criteria and mandates post-construction evaluations to assess project results. To better ensure that other discretionary grant programs are aligned with federal interests and achieve clearly defined federal transportation goals, Congress could establish specific project selection criteria for those programs and require that they use a competitive project selection process. For instance, key freight projects of national importance could be selected through such a competitive process that would identify those investments that are most crucial to national freight flows. DOT also recently selected metropolitan areas for Urban Partnership Agreements, which are not tied to a single grant program but do provide recipients with financial resources, regulatory flexibility, and dedicated technical support in exchange for their adoption of aggressive congestion-reduction strategies. When a national competition is not feasible, Congress could require a competitive selection process at the state or local level, such as those required for the Job Access and Reverse Commute Program. This program, however, lacks the statutorily defined selection criteria used to select projects for the New Starts and Small Starts programs. The effectiveness of any overall federal program design can be increased by promoting and facilitating the use of the best tools and approaches. Within broader federal program structures that fit the principles we discuss in this report, a number of specific tools and approaches can be used to improve results and return on investment, which is increasingly necessary to meet transportation challenges as federal resources become even more constrained.
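To make the accountability mechanisms described above concrete, the sketch below shows one way a portion of federal funding could be tied to grantees’ performance against agreed targets rather than distributed by a set formula. It is a hypothetical illustration only; the state names, the reliability measure, the targets, and the proportional weighting are invented and do not correspond to any existing DOT or statutory formula.

```python
# Hypothetical sketch: distributing an incentive pool in proportion to each
# grantee's progress toward an agreed performance target (for example, a
# travel time reliability measure), rather than by a set formula.
# All names, targets, and dollar amounts are invented for illustration.

def allocate_incentive_pool(pool_dollars, performance):
    """Split the pool among grantees in proportion to the share of their
    performance target each achieved, capped at 100 percent."""
    scores = {grantee: min(achieved / target, 1.0)
              for grantee, (achieved, target) in performance.items()}
    total = sum(scores.values())
    if total == 0:
        return {grantee: 0.0 for grantee in scores}
    return {grantee: pool_dollars * score / total
            for grantee, score in scores.items()}

# Invented data: (measured reliability, target reliability) for three states.
performance = {
    "State A": (0.92, 0.95),
    "State B": (0.80, 0.95),
    "State C": (0.97, 0.95),  # exceeded its target; capped at 1.0
}

for grantee, amount in allocate_incentive_pool(100_000_000, performance).items():
    print(f"{grantee}: ${amount:,.0f}")
```

An actual program would layer statutory criteria, data quality checks, and negotiated targets on top of this kind of arithmetic, much as the highway and motor carrier safety incentive grants described above do.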
We and others have identified a range of leading practices, discussed below; however, their suitability varies depending on the level of federal involvement or control that policymakers desire for a given area of policy. Rigorous economic analysis is recognized by experts as a useful tool for evaluating and comparing potential transportation projects. Benefit-cost analysis gives transportation decision makers a way to identify projects with the greatest net benefits and compare alternatives for individual projects. By translating benefits and costs into quantitative comparisons to the maximum extent feasible, these analyses provide a concrete way to link transportation investments to program goals. However, in order for benefit-cost analysis to be effective, it must be a key factor in project selection decisions and not seen simply as a requirement to be fulfilled. A complementary type of tool is outcome evaluation, which is already required for New Starts transit projects. Such evaluations would be useful in identifying leading practices and understanding project performance, especially since the available information indicates that the costs of highway and transit projects are often higher than originally anticipated. It should be recognized, however, that benefit-cost comparisons and other analyses do not necessarily identify the federal interest—many local benefits from transportation investments are not net benefits in national terms. For example, economic development may provide financial benefits locally, but nationally the result may be largely a redistribution of resources rather than a net increase. Accordingly, in emphasizing return on federal investment, the relationship of investments to national goals must be considered along with locally-based calculations of benefit and cost. Because current programs are generally based on specific modes, it is difficult to plan and fund intermodal links and projects that involve more than one mode, despite a consensus among experts and DOT itself that an intermodal approach is needed. A number of strategies could be used to move toward an intermodal approach. For example, policy could be changed to allow a single stream of funding to pay for all aspects of a corridor-based project—even if the improvements include such diverse measures as highway expansion, transit expansion, and congestion management. DOT recently created competitive Urban Partnership Agreements, which award grants for initiatives that address congestion through congestion pricing, transit, telecommuting, and ITS elements. Finally, decision makers cannot make full use of cross-modal project comparisons, such as those developed through benefit-cost analysis, if funding streams remain stovepiped. Better management of existing capacity is another strategy that has proved successful, primarily on highways; it is useful because of the growing cost and, in some cases, the impracticality of building additional capacity. We have reported that implementing ITS technology can improve system performance. Congestion pricing of highways, where toll rates change according to demand, is another such leading practice. From an economic perspective, congested highways are generally “underpriced.” Although the social cost of using a roadway is much higher at peak usage times, this higher cost is usually not reflected in what drivers pay.
When toll rates increase with demand, some drivers respond to higher peak-period prices by changing the mode or time of their travel for trips that are flexible. This tool can increase the speed of traffic and has the potential to increase capacity as well—an evaluation of the variably priced lanes of State Route 91 in Orange County, California, showed that although the priced lanes represent only 33 percent of the capacity of State Route 91, they carry an average of 40 percent of the traffic during peak travel times. Although the Value Pricing Pilot Program encourages the use of this tool, tolling is prohibited on most Interstate highways by statute. Broader support in policy could increase the adoption of congestion pricing, improving the efficiency and performance of the system. Public-private partnerships are another tool that may benefit public sponsors by bringing private-sector financing and efficiencies to transportation investments, among other potential advantages. Specifically, private investors can help public agencies improve the performance of existing facilities, and in some cases build new facilities without directly investing public funds. At the same time, such partnerships also present potential costs and trade-offs, but the public sector can take steps to protect the public interest. For example, when evaluating the public interest of public-private partnerships, the public sector can employ qualitative public interest tests and criteria, as well as quantitative tests such as Value for Money and Public Sector Comparators, which are used to evaluate if entering into a project as a public-private partnership is the best procurement option available. Such formal assessments of public interest are used routinely in other countries, such as Australia and the United Kingdom, but use of systematic, formal processes and approaches to the identification and assessment of public interest issues has been more limited in the United States. Since public interest criteria and assessment tools generally mandate that certain aspects of the public interest are considered in public-private partnerships, if these criteria and tools are not used, then aspects of public interest might be overlooked. Although these techniques have limitations, they are able to inform public decision making—for instance, the Harris County, Texas, toll authority conducted an analysis similar to a public-sector comparator, and the results helped inform the authority’s decision not to pursue a public-private approach. Tools can also be used in designing grants to help increase the impact of federal funds. One such tool is maintenance of effort requirements, under which state or local grantees must maintain their own level of funding in order to receive federal funds. Maintenance of effort requirements could discourage states from substituting federal support for funds they themselves would otherwise have spent. However, our past work has shown that maintenance of effort requirements should be indexed to inflation and program growth in order to be effective. Matching requirements are another grant design tool that can be adjusted to increase the impact of federal programs. The allowable federal share covers a substantial portion of project costs—often 80 percent—in many transportation programs, especially for highways. Increasing the state share can help induce recipients to commit additional resources.
For example, NHTSA’s Occupant Protection grant program provides 75 percent federal funding the first year, but reduces the federal share to 25 percent in the fifth and sixth years to shift the primary financing responsibility to the states. Data collection is a key tool to give policymakers information on how the transportation system is functioning. Data on the system and its individual facilities and modes are useful in their own right for decision making, but are also essential to enable other effective approaches, such as linking grant disbursements to grantees’ performance. As discussed previously, DOT does not have complete data in some crucial areas; the effective use of data in safety programs, despite problems, demonstrates the potential of more comprehensive data gathering to improve evaluations and induce improved performance in the surface transportation system. A restructured federal program could increase the application of these and other leading tools and approaches by providing incentives for or requiring their use in certain circumstances. For example, in competitive discretionary grant programs, the application of specific tools and approaches could be considered in evaluating proposals, just as the use of incentives or penalties could be considered in noncompetitive grant programs. The Motor Carrier Safety Assistance Program already employs this approach—one factor considered in awarding incentive funds is whether states provide commercial motor vehicle safety data for the national database. The use of certain tools and approaches could also simply be required in order to receive federal funds under relevant transportation grant programs. However, if federal programs were restructured to be based on performance and outcomes, states would have more incentive to implement such tools and approaches on their own. Under such a scenario, an appropriate federal role could be to facilitate their identification and dissemination. Transportation financing, and the Highway Trust Fund in particular, faces an imbalance of revenues and expenditures and other threats to its long-term sustainability. In considering sustainable sources of funds for transportation infrastructure, the user-pay principle is often cited as an appropriate pricing mechanism. While fuel taxes do reflect usage, they are not an exact user-pay mechanism and they do not convey to drivers the full costs of their use of the road. These taxes are not tied to the time when drivers actually use the road or which road they use. Taxes and fees should also be equitably assigned and reflect the different costs imposed by different users. The trucking industry pays taxes and fees for the highway infrastructure it uses, but its payments generally do not cover the costs it imposes on highways, thereby giving the industry a competitive price advantage over railroads, which use infrastructure that they own and operate. An alternative to fuel taxes would be to introduce mileage charges on vehicles—Oregon is pilot testing the technology to implement this approach. Finally, the use of congestion pricing to reflect the much greater cost of traveling congested highways at peak times will help optimize investment by providing market cues to policymakers.
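The revenue mechanics behind the user-pay discussion above can be illustrated with simple arithmetic. The sketch below compares the revenue generated per vehicle mile by the 18.4 cents-per-gallon federal gasoline tax at several fuel economies with a flat per-mile charge; the fuel economy values and the 1 cent-per-mile rate are assumptions chosen for illustration, not figures from our analysis.

```python
# Illustrative arithmetic: revenue per vehicle mile from a per-gallon fuel tax
# falls as fuel economy improves, while a per-mile charge does not.
# The 18.4 cents-per-gallon rate is the federal gasoline tax cited above;
# the fuel economy values and the 1.0 cent-per-mile charge are assumptions.

FUEL_TAX_CENTS_PER_GALLON = 18.4
MILEAGE_CHARGE_CENTS_PER_MILE = 1.0  # hypothetical flat per-mile charge

def fuel_tax_cents_per_mile(miles_per_gallon):
    """Revenue per mile implied by the per-gallon tax at a given fuel economy."""
    return FUEL_TAX_CENTS_PER_GALLON / miles_per_gallon

for mpg in (20, 30, 40):  # assumed average fleet fuel economies
    print(f"{mpg} mpg: {fuel_tax_cents_per_mile(mpg):.2f} cents/mile from the fuel tax "
          f"vs. {MILEAGE_CHARGE_CENTS_PER_MILE:.2f} cents/mile from a mileage charge")
```

Because the per-gallon tax yields less revenue per mile as fuel economy improves, while a mileage charge does not, this arithmetic is one reason mileage-based approaches such as the Oregon pilot are being examined.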
Concerns about funding adequacy have led state and local governments to search for alternative revenue approaches, including alternative financing vehicles at the federal level, such as grant anticipation revenue vehicles, grant anticipation notes, state infrastructure banks, and federal loans. These vehicles can accelerate the construction of projects, leverage federal assistance, and provide greater flexibility and more funding techniques. However, they are also different forms of debt financing. This debt ultimately must be repaid with interest, either by highway users—through tolls, fuel taxes, licensing or vehicle fees—or by the general population through increases in general fund taxes or reductions in other government services. Highway public-private partnerships show promise as an alternative, where appropriate, to help meet growing and costly transportation demands. Highway public-private partnerships have resulted in advantages, from the perspective of state and local governments, such as constructing new infrastructure without using public funding and obtaining funds by extracting value from existing facilities for reinvestment in public transportation and other public programs. However, there is no “free” money in public-private partnerships. Highway financing through public-private partnerships also is largely a new source of borrowed funds that must be repaid to private investors by road users, over what could be a period of several generations. Finally, the sustainability of transportation financing should also be seen in the context of broader fiscal challenges. In a time of growing structural deficits, constrained state and local budgets, and looming Social Security and Medicare spending commitments, the resources available for discretionary programs will be more limited. The federal role in transportation funding must be reexamined to ensure that it is sustainable in this new fiscal reality. The long-term pressures on the Highway Trust Fund and the governmentwide problem of fiscal imbalance highlight the need for a more efficient, redesigned program based on the principles we have identified. The sustainability of surface transportation programs depends not only on the level of federal funding, but also on the allocation of funds to projects that provide the best return on investment and address national transportation priorities. Using the tools and approaches for improving transportation programs that we have discussed could also help surface transportation programs become more fiscally sustainable and more directly address national transportation priorities. The National Surface Transportation Policy and Revenue Study Commission (National Commission) issued its final report in January 2008. The report recommended significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current program more performance-based and mode-neutral, among other things. However, several commissioners offered a dissenting view on some of the Commission’s recommendations, notably the level of investment, size of the federal role, and the revenue sources recommended. The divergent views of the commission members indicate that while there is a degree of consensus on the need to reexamine federal surface transportation programs, there is not yet a consensus on the form a restructured surface transportation program should take.
The principles that we discussed for examining restructuring options are a sound basis on which this discussion can take place. These principles do not prescribe a specific approach to restructuring, but they do provide key attributes that will help ensure that a restructured surface transportation program addresses current challenges. The current federal approach to addressing the nation’s surface transportation problems is not working well. Despite large increases in expenditures in real terms for transportation, the investment has not resulted in a commensurate improvement in the performance of the nation’s surface transportation system, as congestion continues to grow, and looming problems from the anticipated growth in travel demand are not being adequately addressed. The current collection of flexible but disparate programs and grants that characterizes the existing approach is the result of a patchwork evolution of programs over time, not a result of a specific rationale or plan. This argues for a fundamental reexamination of the federal approach to surface transportation problems. In cases where there is a significant national interest, maintaining strong federal financial support and a more direct federal involvement in the program may be needed. In other cases, functions may best be carried out by other levels of government or not at all. There may also be instances where federal financial support is desirable but a more results-oriented approach is appropriate. In addition, it is important to recognize that depending on the transportation issue and the desired goals, different options and approaches may best fit different problems. Reforming the current approach to transportation problems will take time, but a vision and strategy are needed to begin the process of transforming to a set of policies and programs that effectively address the nation’s transportation needs and priorities. The current system evolved over many years and involves different modes, infrastructure and safety issues, and extends widely into the operations of state and local governments. Given the proliferation of programs and goals previously discussed, refocusing federal programs is needed to address the shortfalls of the current approach. Focusing federal programs around a clear federal interest is key. Well-defined goals based on identified areas of federal interest would establish what federal participation in surface transportation is designed to accomplish. A clearly defined federal role in achieving these goals would give policymakers the ability to direct federal resources proportionately to the level of national interest. Once this is accomplished, a basis exists to reexamine the current patchwork of programs, test their continued relevance and relative priority, potentially devolve programs and policies that are outdated or ineffective, and modernize those programs and policies that remain relevant. Once those areas of federal interest are known, tying federal funds to performance and having mechanisms to test whether goals are met would help create incentives for state and local governments to improve their performance and the performance of the transportation system. Both incentive programs and sanctions are possible models for better tying performance to outcomes. Having more federal programs operate on a competitive basis, with projects selected based on potential benefits, could also help tie federal funds to performance.
There also is a need to improve the use of analytical tools in the selection and evaluation of the performance of projects. Better use of tools such as benefit-cost analysis and using return on investment as a criterion for the selection of individual projects can help identify the best projects. Specifically, the use of a return on investment framework will help to emphasize that federal financial commitments to transportation infrastructure projects are, in fact, long-term capital investments designed to achieve tangible results in a transparent fashion. Finally, a fundamental problem exists in the fiscal sustainability of surface transportation programs as a result of the impending shortfall in the Highway Trust Fund. The trust fund is the primary source of federal support to state and local governments across highways, transit, and surface transportation safety programs. This fiscal crisis is fundamentally based on the balance of revenues and expenditures in the fund, and thus either reduced expenditures, increased revenues, or a combination of the two is now needed to bring the fund back into balance. Finally, given the scope of needed transformation, the shifts in policies and programs may need to be done incrementally or on a pilot basis to gain practical lessons for a coherent, sustainable, and effective national program and financing structure to best serve the nation for the 21st century. To improve the effectiveness of the federal investment in surface transportation, meet the nation’s transportation needs, and ensure a sustainable commitment to transportation infrastructure, Congress should consider reexamining and refocusing surface transportation programs to be responsive to these principles so that they: have well-defined goals with direct links to an identified federal interest, institute processes to make grantees more accountable by establishing more performance-based links between funding and program outcomes, institute tools and approaches that emphasize the return on the federal investment, and address the current imbalance between federal surface transportation revenues and spending. We provided copies of a draft of this report to DOT for its review and comment. In an email on February 22, 2008, DOT noted that surface transportation programs could benefit from restructured approaches that apply data-driven, performance-oriented criteria to enable the nation to better focus its resources on key surface transportation issues. DOT officials generally agreed with the information in this report, and they provided technical clarifications, which we incorporated as appropriate. We will send copies of this report to interested congressional committees and the Secretary of Transportation. Copies will also be available to others upon request and at no cost on GAO’s Website at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834, or heckerj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
We were asked to (1) provide an historical overview of the federal role in surface transportation and the goals and structures of federal surface transportation programs funded by the Highway Trust Fund, (2) summarize conclusions from our prior work on the structure and performance of these and other federal programs, and (3) identify principles to help assess options for focusing the future federal role and the structure of federal surface transportation programs. We focused our work on programs funded by the Highway Trust Fund (HTF) because it is the primary vehicle for federal financing of surface transportation, receiving nearly all federal fuel tax revenue; it is also a focus of most proposals to reform the current federal role. We examined the Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), Federal Transit Administration (FTA), and National Highway Traffic Safety Administration (NHTSA) as part of this study; we did not look at two other DOT agencies that receive HTF funds, the Research and Innovative Technology Administration (RITA) and the Federal Railroad Administration (FRA). RITA was excluded because it focuses on federal research, in contrast to our focus on federal-state programs; FRA was excluded because the portion of HTF funds that it receives is so small that it cannot be compared to the other operating agencies. To provide an historical overview of the federal role in surface transportation and the goals and structures of federal surface transportation programs, we drew information from statutes, especially transportation authorization laws; regulations; budget documents; agency reports; and literature on transportation policy by outside experts. We interviewed officials in DOT’s modal administrations, including FHWA, FMCSA, FTA, and NHTSA, in order to help clarify agency goals, roles, and structures. We also interviewed representatives of stakeholder groups such as the American Association of State Highway and Transportation Officials (AASHTO) and the American Public Transit Association (APTA). To describe conclusions that we and others have drawn about the current structure and performance of these federal programs, we reviewed relevant GAO reports on specific transportation programs, as well as reports that looked at broader issues of performance measurement, oversight, grant design, and other related issues. We also reviewed reports, policy statements, and other materials from stakeholder groups and other organizations. Additionally, we reviewed materials from hearings held by the National Surface Transportation Policy and Revenue Study Commission. Finally, we sought the views of transportation experts, including the 22 who participated in a forum convened by the Comptroller General in May 2007, which included public officials, private-sector executives, researchers, and others. To review policy options for addressing the federal role, we identified options from previous proposals, both those originating in Congress and in presidential administrations, as well as those presented by stakeholder groups such as AASHTO. We also reviewed options discussed in previous GAO reports, as well as testimony and other materials generated by the National Surface Transportation Policy and Revenue Study Commission, which the Congress also tasked to examine the federal approach to surface transportation programs.
In addition, to complement our appendix III discussion of the implications of turning over responsibility for surface transportation to the states, we analyzed the potential fiscal impact of turning over most elements of the federal transportation program to the states. We obtained DOT data on state grant disbursements and calculated total federal grant receipts for each state and the District of Columbia. We limited our analysis to grant programs funded by the HTF, because the federal fuel taxes that would be eliminated or sharply reduced under this scenario are deposited almost exclusively in the HTF. We also omitted discretionary grants because they are a small portion of federal transportation grants and often vary significantly from year to year in a given state. Separately, we obtained state fuel consumption data from DOT. In order to calculate the extent to which individual states would have to raise their fuel taxes to maintain the same level of spending if federal grants were eliminated, we divided the total grant receipts (as described above) for each state by the number of gallons of highway fuel used in that state in the prior year. This calculation yielded the per-gallon increase in state taxes that would be needed to maintain spending, assuming it would be implemented evenly across all types of fuel. Because diesel and gasoline are taxed at different federal rates, and represent different shares of total usage in each state, we used a weighted average to calculate the current effective per-gallon federal fuel tax rate in each state. We then expressed the per-gallon tax rate results in terms of change from the current federal tax rate. Where we had not previously assessed the reliability of the source data, we conducted a limited data reliability analysis and found the data suitable for the purpose of this analysis. We conducted this performance audit between April 2007 and February 2008 in accordance with Generally Accepted Government Auditing Standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence that provides a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Federal assistance for highway infrastructure is distributed through several grant programs, known collectively as the Federal-Aid Highway Program. Both Congress and DOT have established multiple broad policy goals for the Federal-Aid Highway Program, which provides financial and technical assistance to states to construct, preserve, and improve eligible federal-aid highways. The program’s current goals include safety, efficiency, mobility, congestion relief, interstate and international commerce, national security, economic growth, environmental stewardship, and sustaining the nation’s quality of life. The Federal-Aid Highway Program currently consists of seven core formula grant programs and several smaller formula and discretionary grant programs. The majority of Highway Trust Fund revenues are distributed through the core formula grant programs to the states for a variety of purposes, including road construction and improvements, Interstate highway and bridge repair, air pollution mitigation, highway safety, and equity considerations. 
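The per-gallon calculation described in the scope and methodology above lends itself to a short illustration. The sketch below reproduces only the arithmetic of that calculation; the grant receipts, fuel use, and gasoline and diesel shares shown are invented placeholders rather than the DOT data used in our analysis, and the 18.4 and 24.4 cents-per-gallon figures are the current federal gasoline and diesel tax rates.

```python
# Illustrative sketch of the per-gallon calculation described in the scope and
# methodology above: the state fuel tax increase needed to replace federal
# grant receipts if HTF-funded programs were turned back to the states.
# The grant and fuel figures below are invented placeholders, not DOT data.

FEDERAL_GAS_TAX_CENTS = 18.4     # current federal gasoline tax, cents per gallon
FEDERAL_DIESEL_TAX_CENTS = 24.4  # current federal diesel tax, cents per gallon

def replacement_increase_cents(grant_receipts_dollars, gallons_used):
    """Per-gallon increase (cents) needed to replace federal grant receipts,
    assuming the increase is applied evenly across all fuel types."""
    return 100.0 * grant_receipts_dollars / gallons_used

def effective_federal_rate_cents(gasoline_share, diesel_share):
    """Weighted-average federal fuel tax rate for a state, given its shares of
    gasoline and diesel consumption."""
    return (gasoline_share * FEDERAL_GAS_TAX_CENTS
            + diesel_share * FEDERAL_DIESEL_TAX_CENTS)

# Invented example: $1.2 billion in formula grant receipts, 6.0 billion gallons
# of highway fuel used in the prior year, 80 percent of it gasoline.
increase = replacement_increase_cents(1.2e9, 6.0e9)
current = effective_federal_rate_cents(0.80, 0.20)
print(f"Required state increase: {increase:.1f} cents per gallon "
      f"(current effective federal rate: {current:.1f} cents per gallon)")
```

In the analysis itself, the equivalent calculation was performed for each state and the District of Columbia using the DOT grant disbursement and fuel consumption data described above.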
Broad flexibility provisions allow states to transfer funds between core highway programs and to the Federal Transit Administration (FTA) for eligible transit projects. Highway Trust Fund revenues are also distributed through the smaller formula and discretionary grant programs, which cover a wide range of projects, including border infrastructure, recreational trails, and safe routes to schools. Congress has also designated funds for specific projects. For example, according to the Transportation Research Board, SAFETEA-LU—the most recent reauthorization legislation—contained over 5,000 dedicated spending provisions. The Federal-Aid Highway Program is administered through a federal-state partnership. The federal government, through FHWA, provides financial assistance, policy direction, technical expertise, and some oversight. FHWA headquarters provides leadership, oversight, and policy direction for the agency, FHWA state division offices deliver the bulk of the program’s technical expertise and oversight functions, and five FHWA regional service resource centers provide guidance, training, and additional technical expertise to the division offices. In turn, state and local governments execute the programs by matching and distributing federal funds; planning, selecting, and supervising projects; and complying with federal requirements. Currently, based on stewardship agreements with each state, FHWA exercises full oversight on a limited number of federal-aid projects. States are required to oversee all federal-aid highway projects that are not on the National Highway System, and states oversee design and construction phases of other projects based on an agreement between FHWA and the state. FHWA also reviews state management and planning processes. Many state and local government processes are driven by federal requirements, including not only highway-specific requirements for transportation planning and maintenance, but also environmental review requirements and labor standards that are the result of separate federal legislation designed to address social and environmental goals. Since its reauthorization under the Federal-Aid Highway Act of 1956, the Federal-Aid Highway Program has grown in size, scope, and complexity as federal goals for the program have expanded. In 1956, the primary focus of the Federal-Aid Highway Program was to help states finance and construct the Interstate Highway System to meet the nation’s needs for efficient travel, economic development, and national defense. The Federal-aid Highway Program made funds available to states for road construction and improvements through four formula programs—one program for each of four eligible road categories—with a particular focus on the Interstate system. Yet the Federal-Aid Highway Program has also served as a mechanism to achieve other societal goals. For example, the 1956 Act requires that states adhere to federal wage and labor standards for any state construction project using federal-aid funds. In successive reauthorizations of the program, Congress has increased program requirements to achieve other societal goals such as civil rights, environmental protection, urban planning, and economic development. Besides increasing compliance requirements, Congress has authorized new grant programs to achieve expanded program objectives. For example, Congress authorized new core grant programs to address Interstate highway maintenance, environmental goals, and safety.
In response to controversy over the distribution of highway funds between states that pay more in federal taxes and fees than they receive in federal-aid (donor states) and states that receive more in federal-aid than they contribute (donee states), Congress established and strengthened equity programs that guarantee states a minimum relative return on their payments into the Highway Account of HTF. Additionally, Congress has further expanded the program’s scope by authorizing highway funds for additional purposes and uses, such as highway beautification, historic preservation, and bicycle trails. The federal-state partnership has evolved as programs have changed to give states and localities greater funding flexibility. For example, in 1991, when Interstate construction was nearly complete, Congress restructured the Federal-aid Highway Program to promote a more efficient and flexible distribution of funds. Specifically, under the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA), Congress substantially increased flexibility by consolidating road-category grant programs, creating a surface transportation block grant, and establishing broad flexible fund transfer provisions between highway programs and transit—a structure that remains today. At the same time, Congress altered the established federal-state partnership by increasing the authority of metropolitan planning organizations—local governmental planning bodies—in federally mandated planning processes. The federal-state partnership has further evolved as Congress has delegated federal oversight responsibilities to state and local governments, but has assumed a greater role in project selection. When Interstate construction began, the federal government provided direct oversight during the construction and maintenance phases of projects and ensured that the states complied with federal requirements. By 1973, states could self-certify compliance with most federal grant requirements, and during the 1990s, Congress further expanded this authority to allow states and FHWA to cooperatively determine the appropriate level of oversight for federally funded projects, including some Interstate projects. While reducing the federal role in oversight, Congress has increased its role in project selection—traditionally a state and local responsibility—through congressional directives. For example, according to the Transportation Research Board, there were over 5,000 directives in the latest reauthorization from 2005, up from 1,850 in 1998 and 11 in 1982. As the Federal-Aid Highway Program has grown in size and complexity, so too has the federal administrative structure, although some shifting or consolidation of responsibilities has occurred. Before FHWA was created in 1967, its predecessor, the Bureau of Public Roads, established a decentralized administrative structure and a field office in each state, reflecting the close partnership between the federal government and the states. Moreover, as the number of Federal-Aid Highway Program requirements and the scope of the program increased, the agency, which initially had an engineering focus, hired a wide range of specialists, including economists, landscape architects, planners, historians, ecologists, safety experts, civil rights experts, and others. When DOT was formed in 1967, new motor carrier and traffic and vehicle safety functions were assigned to FHWA.
These functions have since shifted to NHTSA and FMCSA, although FHWA continues to collaborate on these issues and retains responsibility for highway infrastructure-related safety projects and programs. In 1998, FHWA consolidated its organization by eliminating its nine regional offices and establishing regional service resource centers, as well as devolving responsibility for state projects and programs entirely to the FHWA division offices in each state. For fiscal year 2009, FHWA requested funding for 2,861 full-time-equivalent staff divided among headquarters, five regional service resource centers, and 55 division offices. Both Congress and DOT have established multiple broad policy goals for FTA, which provides financial and technical assistance to local and state public agencies to build, maintain, and operate mass transportation systems. FTA's current statutory goals include (1) promoting the development of efficient and coordinated urban transportation systems that maximize mobility, support economic development, and reduce environmental and energy consumption impacts, and (2) providing mobility for vulnerable populations in both urban and rural areas. DOT's six strategic goals also apply to FTA: safety, congestion mitigation, global connectivity, environmental stewardship, security and preparedness, and organizational excellence. Currently, FTA divides its major capital and operating assistance programs into two categories: formula and bus grants, which are funded entirely from HTF's Mass Transit Account, and capital investment grants, which are financed using general revenue. The formula and bus grants provide capital and operating assistance to transit agencies and states through a combination of seven relatively large and five smaller formula and discretionary grants. Under these grants, the federal government generally provides 80 percent of the funding and the locality provides 20 percent, with certain exceptions. The capital investment grants provide discretionary capital assistance for the construction of new fixed-guideway and corridor systems and extensions of existing systems. Funds for new fixed-guideway systems are distributed through the New Starts and Small Starts grant programs and are awarded to individual projects through a competitive selection process. Although the statutory federal match for the New Starts and Small Starts programs is 80 percent, agency officials stated the actual federal match is closer to 50 percent due to high levels of state and local investment and the competitive selection process that favors projects that require a lower federal match. FTA also provides financial support for research and planning activities. Funds for research are allocated on a discretionary basis out of the General Fund, and planning funds are taken from the Mass Transit Account of the Highway Trust Fund and distributed to states by formula. In addition to the funding they obtain through these programs, states may transfer a portion of certain highway program funds to FTA for eligible transit expenses. According to the most recent DOT data, in 2004, 28.1 percent of the funding for transit was system-generated through fares or other charges, and the remaining funds came from local (34.6 percent), state (19.7 percent), and federal (17.6 percent) sources. Approximately 75 percent of federal transit assistance is directed to capital investments, and the remainder is directed to other eligible expenses such as operating expenses.
In contrast to federal highway infrastructure programs, which are administered through a federal-state partnership, federal transit programs are generally administered through a federal-local partnership, although rural programs are administered at the state level. The federal government, through FTA headquarters and 10 FTA regional offices, provides financial assistance, establishes requirements, performs oversight, and conducts research. Grant recipients such as local transit agencies are responsible for matching federal funds and for planning, selecting, and executing projects while complying with federal requirements. The degree of federal oversight varies across programs and among grant recipients. Currently, full federal oversight is limited to major capital projects that cost over $100 million, and local and state grant recipients are allowed to self-certify their compliance with certain federal laws and regulations. For example, FTA conducts periodic reviews of program management processes for recipients of Block Grants Program (Urbanized Area Formula Grants) funds and provides direct project management oversight for recipients of New Starts funding. In addition, FTA conducts discretionary reviews of grantees' compliance with requirements in other areas such as financial management or civil rights and uses a rating system to determine the level of oversight needed for each grantee. FTA employees work with external contractors to conduct project management and program management process reviews. For fiscal year 2009, FTA requested funding for 526 full-time-equivalent staff, divided among its 10 regional offices and headquarters. From the modern transit program's inception as part of the Urban Mass Transportation Act of 1964 (UMTA), Congress justified federal funding for mass transportation capital improvements as a means to address pressing urban problems such as urban decay, traffic congestion, and poor development planning. Federal capital assistance was distributed to local governments on a discretionary basis to help urban areas improve and expand urban mass transportation systems. Congress also established federal transit programs to achieve other societal goals. For example, UMTA required grant recipients to provide labor protections for transit employees and relocation assistance for individuals displaced by transit projects. Later federal legislation increased grant requirements to achieve other societal goals such as civil rights, environmental protection, and economic development. In addition to increasing compliance requirements, Congress has authorized new grant programs and broadened program eligibility requirements to promote expanding objectives. For example, federal transit assistance expanded during the 1970s to include grant programs designed to meet social and transportation-related goals such as improving mobility in rural areas and making public transportation more accessible for the elderly and the disabled. More recently, Congress has further broadened the scope of programs to include making transportation to jobs more accessible for welfare recipients and low-income individuals and providing transit service within public parks and lands.
Although federal transit funding was initially provided on a discretionary basis from the General Fund of the Treasury, many of the newer programs make funds available through formulas, and highway user fees have replaced general revenues as the major source of transit assistance since the creation of the Mass Transit Account of the Highway Trust Fund in 1983. In addition, Congress has broadened the scope of federal transit assistance to include operating expenses and capital maintenance as well as capital expenses. For example, concerns about growing operating deficits among transit agencies led Congress to authorize the use of federal funds for transit operating expenses in 1974. Although federal support for operating expenses in urbanized areas has since declined, operating assistance is still available for areas with a population of less than 200,000. The federal-local relationship in transit has evolved as Congress has expanded federal involvement in transit and increased state and local government authority and flexibility in using federal funds. For example, in 1978, Congress expanded federal transit assistance to rural areas and made state governments responsible for receiving and distributing these funds. According to agency officials, states previously played a limited role in transit projects because the federal government worked directly with urban areas and transit agencies. In 1991, Congress increased local authority by expanding the role of metropolitan planning organizations in project selection and transportation planning. At the same time, Congress substantially increased state and local authority to transfer funds between highway and transit programs. The combination of additional transfer authority and the gradual shift toward apportioning funds through formulas rather than individual project awards has increased flexibility for both state and local transit grant recipients. In addition, state and local government oversight responsibilities have increased for federal transit grants, much as they have for federal highway infrastructure grants, with self-certification procedures for compliance with federal laws and regulations, and additional federal compliance requirements such as those for environmental review. Federal highway safety and motor carrier safety assistance programs are separately administered by NHTSA and FMCSA. The primary statutory policy goals of these programs are directed to reducing accidents, and the bulk of NHTSA’s and FMCSA’s financial support and research, education, rulemaking, and enforcement activities fall under DOT’s strategic goal of improving safety. Although FHWA and FTA exercise rulemaking authority in the administration of their programs, rulemaking and enforcement are primary tools that NHTSA and FMCSA use to reduce accidents and their associated damages. Highway safety and motor carrier safety grant programs are similarly organized. Both use a basic formula grant to provide funding to states for safety programs, enforcement activities, and related expenditures, coupled with several targeted discretionary grants. Currently, almost 40 percent of authorized federal highway safety assistance is distributed by formula to states through the State and Community Highway Safety Grant Program (Section 402), which supports a wide range of highway safety initiatives at the state and local level. 
This basic program is augmented by several smaller discretionary grant programs that mostly target funds to improve safety through the use of measures such as seat belts and child safety restraints, among others. Most of these discretionary grants provide states with financial incentives for meeting specific performance or safety activity criteria. For example, to be eligible for Alcohol-Impaired Driving Countermeasures Incentive grants, most states must either have a low alcohol fatality rate or meet programmatic criteria for enforcement, outreach, and other related activities. In addition to discretionary grants, Congress has authorized highway safety provisions that penalize states that do not comply with certain federal provisions by transferring or withholding their highway infrastructure funds. These penalty provisions can provide a substantial amount of additional funding for state safety activities. For example, in 2007, penalty provisions transferred over $217 million of federal highway infrastructure assistance to highway safety programs in the 19 states and Puerto Rico that were penalized for failure to meet federal criteria for either open container requirements or minimum penalties for repeat offenders for driving while intoxicated or under the influence. The majority of federal motor carrier safety funds are distributed by formula to states through the Motor Carrier Safety Assistance Program (MCSAP), which provides financial assistance to states for the enforcement of federal motor carrier safety and hazardous materials regulations. In addition, several smaller discretionary programs are targeted to achieve specific goals such as data system improvements and border enforcement, among others. Some of these grants require states to maintain a level of funding for eligible motor carrier safety activities to reduce the potential for federal funds to replace state financial support. Finally, FMCSA sets aside MCSAP funds to support high-priority areas such as audits of new motor carrier operations. Unlike the highway safety grants, most of these discretionary programs do not have statutorily defined performance or outcome-related eligibility criteria, and funds are allocated at the agency's discretion. States that do not comply with federal commercial driver licensing requirements may have up to 5 percent of their annual highway construction funds withheld in the first fiscal year and 10 percent in the second fiscal year of violation. However, these withheld funds, unlike the funds withheld or transferred under some highway safety penalty provisions, are not available to the penalized states for motor carrier safety activities. Like highway infrastructure grants, most federal highway safety and motor carrier safety grants are jointly administered through a federal-state partnership. Through NHTSA and FMCSA, the federal government provides funds, establishes and enforces regulations, collects and analyzes data, performs oversight, conducts research, performs educational outreach, and provides technical assistance. In turn, states provide matching funds, develop and execute safety and enforcement plans and programs, distribute funds to other governmental partners, collect and analyze data, and comply with federal grant and reporting requirements. Both NHTSA and FMCSA use a performance-based approach to grant oversight.
Each agency reviews state safety plans, which establish specific performance goals, and then monitors states' progress toward achieving their goals. Because these efforts rely on the accuracy and completeness of state safety data, both NHTSA and FMCSA emphasize state data collection and analysis in the administration of their grant programs. In addition to their annual safety performance reviews, NHTSA and FMCSA conduct periodic management and compliance reviews of grant recipients. NHTSA and FMCSA also each have a substantial regulatory role. NHTSA establishes and enforces safety standards for passenger vehicles in areas such as tire safety, occupant protection devices, and crashworthiness, and also issues fuel economy standards. FMCSA establishes and enforces standards for motor carrier vehicles and operations, hazardous materials, household goods movement, commercial vehicle operator medical requirements, and international motor carrier safety. NHTSA conducts testing, inspection, analysis, and investigations to identify noncompliance with vehicle safety standards, and if necessary, initiates a product recall. FMCSA conducts compliance reviews of motor carriers' operations at their places of business as well as roadside inspections of drivers and vehicles, and can assess a variety of penalties for noncompliance, including fines and cessation orders. Both NHTSA and FMCSA rely on data to target their enforcement activities. NHTSA and FMCSA use different organizational structures to administer their grant programs. NHTSA has both a headquarters office and 10 regional offices. Headquarters staff develop policy and programs and provide technical assistance to regional staff. Regional staff review and approve state safety plans and provide technical assistance. According to agency officials, since NHTSA does not provide the same level of technical assistance as FHWA, a regional rather than a state division structure is appropriate to NHTSA's needs. For fiscal year 2009, NHTSA requested funding for 635 full-time-equivalent staff divided among its headquarters and regional offices. Similar to FHWA, FMCSA has a field structure of four regional service centers and 52 division offices. Headquarters staff establish and communicate agency priorities, issue policy guidance, and carry out financial management activities. Regional service centers act as an intermediary between headquarters and division offices by clarifying policy and organizing training and goal-setting meetings for MCSAP grants. Division offices have primary responsibility for overseeing state motor carrier safety programs and work closely with the states to develop commercial vehicle safety plans. These offices also monitor state progress and grant expenditures. For fiscal year 2009, FMCSA requested funding for 1,119 full-time-equivalent staff divided among its headquarters and field offices. In broad terms, both federal highway safety and motor carrier safety programs have followed a similar path since their inception. Both federal highway safety and motor carrier safety activities were components of the federal highway program before separate modal agencies were established within DOT. Both state-assistance programs began as a single basic formula grant that was then expanded to include smaller targeted discretionary grants.
Additionally, Congress has given states greater flexibility to set their own priorities within the parameters of national safety goals, and both NHTSA and FMCSA have adopted a performance-based approach to grant oversight. Although broader environmental and social goals have had less of an impact on federal safety grant programs, the scope and administrative complexity of highway safety and motor carrier safety regulatory functions have expanded to incorporate these goals. Because of growing concerns about vehicle safety and traffic accidents, the National Traffic and Motor Vehicle Safety Act and Highway Safety Act established highway safety as a separate grant program and regulatory function in 1966. Two major grants provided federal highway safety assistance in 1966: the State and Community Highway Safety (Section 402) grants and Highway Safety Research and Development (Section 403) grants. Section 402 grants distributed federal assistance to states by formula to support the creation of state highway safety programs and the implementation of countermeasures to address behavioral factors in accidents. State safety programs were required to meet several uniform federal standards to be eligible for funding and avoid withholding penalties. Section 403 grants provided discretionary federal funding for research, training, technical assistance, and demonstration projects. Although originally administered by the Department of Commerce, federal highway safety grants and regulatory authority were transferred to FHWA upon its creation in 1967. In 1970, FHWA's National Highway Safety Bureau became a separate agency within DOT and was renamed the National Highway Traffic Safety Administration. Since 1966, Congress has increased state and local government authority and flexibility to set and fund safety priorities by removing some federal grant requirements and restrictions, and by relying more on incentive-based discretionary grants to achieve national safety goals. For example, the uniform federal standards first established in 1966 for state highway safety programs funded by Section 402 grants became guidelines in 1987, and in 1998, Congress shifted federal oversight from direct oversight of state safety programs to selective oversight of state safety goals based on state performance. Additionally, Congress has removed dedicated spending restrictions on Section 402 funds and replaced some of them with separate incentive grant programs. For example, provisions that required a percentage of Section 402 funds to be dedicated to 55 mph speed limit enforcement, school bus safety, child safety restraints, and seat belt use have been discontinued. Some of the priorities addressed by these spending restrictions have become separate incentive programs designed to reward state performance and activities in these areas rather than limit the availability of Section 402 funds. However, in certain priority areas, Congress has provided additional incentives for state compliance by authorizing penalty provisions to withhold or transfer state highway infrastructure funds for failure to meet specific safety criteria. Unlike federal highway and transit infrastructure grants, NHTSA's grants have not been as directly affected by emerging national social and environmental goals, although Congress has incorporated these goals into NHTSA's regulatory processes.
States must comply with several broad federal requirements, such as nondiscrimination policies, to receive federal safety funds. However, these requirements have not increased the administrative complexity of highway safety grants to the same extent as infrastructure grants because most safety activities funded through NHTSA do not require construction. For example, state safety activities such as enforcement of traffic laws and accident data collection are generally not subject to construction-related requirements such as environmental assessments and construction contract labor standards, which apply to highway and transit infrastructure programs. Similarly, Congress has added only one targeted highway safety grant program to specifically address a social goal unrelated to safety—the reduction of racial profiling in law enforcement—and one grant provision requiring states to ensure accessibility for disabled persons on all new roadside curbs. In contrast, federal social and environmental goals have had a greater impact on NHTSA's regulatory processes. For example, in response to the energy crisis during the 1970s, Congress gave NHTSA authority to set corporate average fuel economy standards. Furthermore, the agency's rulemaking process is subject to executive orders and regulations designed to meet legislatively established social and environmental goals, such as those related to NEPA, the Paperwork Reduction Act, energy effects, and unfunded mandates. Before FMCSA was established as a separate modal administration within DOT in 1999, federal motor carrier safety functions were administered by both the former Interstate Commerce Commission and FHWA. Until 1982, the federal government regulated motor carrier safety but did not provide financial assistance to states for enforcement. The Surface Transportation Assistance Act of 1982 authorized the Secretary of Transportation to make grants to the states for the development or implementation of state programs to enforce federal and state commercial motor vehicle regulations. This authorization became the foundation for the basic MCSAP grant. Since 1982, Congress has expanded the number and scope of motor carrier grant programs and requirements to meet emerging areas of concern, including border enforcement, vehicle and driver information systems, commercial driver license oversight, and safety data collection. Congress has also set aside grant funds for purposes such as high-priority areas and new entrant audits. Additionally, grant eligibility requirements have increased. For example, state enforcement plans must meet 24 criteria to be eligible for a basic MCSAP grant today, compared with 7 criteria when the program started in 1982. Although grant requirements have increased, Congress has given states some flexibility to set enforcement priorities by restructuring the programs to become performance-based and allowing states to tailor their activities to meet their particular circumstances, provided these activities work toward national goals. Additionally, FMCSA follows a performance-based approach to grant oversight. Like highway safety grant programs, motor carrier safety grant programs have undergone fewer structural and administrative changes in response to emerging national social and environmental concerns than have federal highway and transit infrastructure grant programs.
Although states must adhere to broad requirements to receive federal funds, some of these requirements, such as those calling for environmental assessments, are not relevant for safety activities that do not involve construction. Furthermore, Congress has not added any specific grant programs or grant requirements exclusive to motor carrier safety assistance that directly address other social and environmental goals. FMCSA's regulatory and enforcement scope has expanded considerably over time. Much of this expansion is related directly to safety, but Congress has also incorporated other policy goals into FMCSA's regulatory functions. For example, hazardous materials transport, commercial driver licensing programs, and operator medical requirements have become additional areas of FMCSA regulation and enforcement that directly relate to safety. However, Congress has also given FMCSA regulatory authority for consumer protection in interstate household goods movement, which does not specifically address reducing motor carrier-related fatalities. Additionally, FMCSA's rulemaking process is subject to executive orders and regulations designed to meet legislatively established social and environmental goals. A fundamental reexamination of surface transportation programs begins with identifying issues in which there is a strong federal interest and determining what the federal goals related to those issues should be. Once the federal interest and goals have been identified, the federal role in relation to state and local governments can be clearly defined. For issues in which there is a strong federal interest, ongoing federal financial support and direct federal involvement could help meet federal goals. But for issues in which there is little or no federal interest, programs and activities may better be devolved to other levels of government or to other parties. In some cases, it may be appropriate to “turn back” activities and programs to state and local governments if they are best suited to perform them. Many surface transportation programs are funded from a dedicated source—the Highway Trust Fund. Devolving federal responsibility for programs could entail simultaneously relinquishing the federal revenue base, in this case, revenues that go into the Highway Trust Fund. A turnback of federal programs, responsibilities, and funding would have many implications and would require careful decisions to be made at the federal, state, and local levels. These implications and decisions include the following: At the federal level, it would need to be determined (1) what functions would remain and (2) how federal agencies would be structured and staffed to deliver those programs. In deciding what functions would remain, the extent of federal interest in the activity, compared with the extent of state or local interest, should be considered. Furthermore, in deciding how to staff and deliver programs, the responsibilities of agencies with a large field presence, such as FHWA and FMCSA, would have to be determined. At all levels of government, it would need to be determined how to handle a variety of other federal requirements that are tied to federal funds, such as the requirements for state highway safety programs related to impaired driving and state and metropolitan planning roles.
At the federal level, Congress would have to decide whether to keep the requirements, and if so, how to ensure that they are met without federal funds to provide as incentives or withhold as sanctions. If the effect of a turnback is to relinquish requirements, then states and localities would have to decide what kind of planning and other requirements they want to have and how to implement them. At the state and local levels, it would need to be determined (1) whether to replace revenues with state taxes and (2) what type of programs to finance. Deciding whether to replace federal revenues with state taxes may be difficult because states also face fiscal challenges and replacing revenues would have different effects on different states. For example, if states decided to raise fuel taxes, some states could simply replace the current federal tax with an equivalent state tax, but other states might have to levy additional state taxes at a much higher level than the current federal tax. States would also have the option of using other revenue sources, such as vehicle registration fees or expanded use of tolling. With states deciding what types of programs to continue, there is no way to predict which federal programs would be replaced with equivalent state programs. Finally, while states may gain flexibility in how they deliver projects, in some cases states could actually lose some flexibility they currently have using federal funds—for example, the flexibility to move funds between highway and transit programs. The functions that would remain at the federal level would be determined by the level of federal interest. Some functions are financed from the Highway Trust Fund but exist because of broader commitments. For example, the federal government owns land managed by agencies such as the Bureau of Land Management, the Bureau of Indian Affairs, and the Forest Service. The responsibility for funding and overseeing construction of roads on these lands rests within DOT, specifically within FHWA's federal lands division. It is unlikely that the federal government would assign the responsibilities to construct roads on federal lands to state or local governments. Thus, the decision may be whether, in a restructured federal program, to continue to finance this responsibility from federal gas taxes or shift responsibility to the managing agency, but not whether the responsibility would be turned over to another level of government. In another area, the federal government takes a defined role in response to disasters, as exemplified in the Robert T. Stafford Disaster Relief and Emergency Assistance Act. Similarly, the Emergency Relief program provides funds to states and other federal agencies for the repair or reconstruction of federal-aid highways that have been damaged or destroyed by natural disasters or catastrophic failures. This is a long-established federal function, and Congress has provided funds for the emergency repair of roads since at least 1928. Given the ongoing federal commitment to respond to disasters, it is likely that emergency relief would remain a federal function. Devolving other programs would depend on how the federal interest and the federal role were defined. For example, maintaining systems such as Interstate highways or the National Highway System could be designated as part of the national interest. The effect of various turnback scenarios on DOT modal agencies would depend on how expansively the federal role is defined.
For example, FHWA in fiscal year 2008 had about 1,400 personnel in field offices, or about half of its total staff. FHWA maintains a division office in each state that provides oversight of state programs and projects as defined in a stewardship agreement between the state and the division office. The division offices may provide project-level oversight in some cases or delegate that responsibility to the state. Division offices also review state DOTs' programs and processes to ensure that states have adequate controls in place to effectively manage federally assisted projects. Thus, if a substantial portion of federal highway programs is turned back to the states, the greatest effect might be felt at the division office level, as the oversight activities of these offices might largely be considered for elimination. However, certain functions and offices could remain, such as the Office of Federal Lands Highways, which provides funding and oversight for highways on federal lands and, counting both headquarters and field staff, constitutes about one-fourth of all FHWA staff. Other functions, such as the Emergency Relief program or environmental oversight, might remain and require a field office presence of some type. A reduced or eliminated division office structure might be warranted, or residual functions might suggest a regional structure. Even under an extensive turnback scenario, FHWA might retain a technical support function, along with its five existing resource center locations. Effects on other DOT agencies of a general turnback of transportation grants would vary and would hinge on what activities the agencies would continue to perform. For example, assuming FMCSA's inspection activities continue, the significant number of field staff required to perform those functions would remain. If NHTSA's safety grants to the states for purposes such as reducing impaired driving or increasing seat belt use were turned back, the functions of NHTSA field staff would need to be reviewed, as these staff would no longer be needed for grant oversight. However, NHTSA could still retain its regulatory and research responsibilities, such as those related to fuel economy standards, automotive recalls, and crash testing, among others, and might need to retain the staff who perform those functions. In some programs, federal funding is contingent on actions taken by states. In the highway safety area, the federal government has applied both incentives and sanctions based on state actions. In the past, these strategies have been used to encourage states to enact laws that establish a minimum drinking age of 21 and a maximum blood alcohol level of 0.08 for determining impaired driving. In addition, Safety Belt Performance Grants promote national priorities by providing financial incentives for meeting certain specific performance or safety activity criteria. Penalty provisions such as those associated with Open Container laws and Motor Carrier Safety Assistance Program grants promote federal priorities by transferring or withholding a state's federal funds if it does not comply. If such programs were turned back to the states and if these incentive and sanction programs were eliminated, there would not appear to be a substitute basis for the federal government to influence state actions. Extensive state and metropolitan planning requirements could be affected by a turnback of the highway program.
Federal laws and requirements specify an overall approach for transportation planning that states and regional organizations must follow in order to receive federal funds. This approach includes involving numerous stakeholders, identifying state and regional goals, developing long- and short-range state and metropolitan planning documents, and ensuring that a wide range of transportation planning factors are considered in the process. Without this structure, it is not clear what form planning processes might take at the state level, or what role, if any, the federal government would have in relation to planning activities. At the local level, metropolitan planning organizations (MPOs) came into being largely as a result of federal planning requirements, and MPO activities are in part funded through the current federal-aid program. In general, the role MPOs would play after a turnback of the federal program is unclear and would need to be redefined. The status of existing planning requirements and the amount of federal funding for MPOs, if any, would have to be determined. If the effect of a turnback is to relinquish requirements, then states and localities would have to decide what kind of planning and other requirements they want to have and how to establish those requirements as a matter of policy. In addition, a turnback of federal surface transportation programs would necessitate a review of which federal requirements still apply. As a condition of receiving federal funds, states must adhere to federal regulations such as those covering contracting practices. For example, under the current highway program, states must comply with the provisions of the Disadvantaged Business Enterprise Program, which requires that a certain percentage of contracts be awarded to socially or economically disadvantaged firms such as minority and women-owned businesses. Yet another area requiring review would be the applicability of federal environmental requirements. Federal laws not predicated on the receipt of federal funds would still apply, and in some cases states have environmental regulations requiring their own environmental process. States would have to decide whether to replace revenues with state taxes. This decision would have different impacts on different states because some states contribute more in taxes than they get back in program funds and vice versa. In the highway context, these states are referred to as donor and donee states. However, a turnback might require states to replace Highway Trust Fund revenues for transit programs and safety grants as well as highways. For some states, replacing federal revenues with state taxes sufficient to continue funding existing federal programs would result in a net decrease in fuel taxes, while for others it would result in a net increase—in some cases a substantial one. This raises questions about whether surface transportation programs would continue at the same funding level under a turnback because states face their own long-term fiscal challenges, and the fiscal capacity of states varies. Other factors could affect outcomes at the state level. For example, there is no way to reliably predict the extent to which “tax competition” between states—efforts to keep taxes lower as a way of attracting business—would occur. We considered the implications of a relatively complete turnback of federal grant programs, including highway, transit, and safety grants.
In the following example, almost all federal surface transportation programs funded through the Highway Trust Fund would be turned back to the states, with the exception of the Federal Lands and Emergency Relief programs. In order to provide a consistent basis for comparison, we assumed that states would substantially continue current programs and activities that now receive federal funding, and that states would raise their fuel taxes to provide the additional revenues needed to cover the cost of these programs and activities. However, if a turnback of the federal program were to actually occur, the outcome would almost certainly differ from these results, because states would not necessarily elect to replace all current federal programs or finance the same programs and activities from their own resources. Furthermore, states might not elect to replace federal revenue with state fuel taxes, since they have options for raising revenue other than fuel taxes. For example, a state might choose to raise vehicle registration fees or increase the use of tolling. The illustrative analysis of this turnback scenario showed that 27 states could achieve the same funding level as they currently receive through federal transportation grants with taxes lower than the existing federal tax, while 23 states and the District of Columbia would require taxes higher than the existing federal tax, or other revenue sources, to achieve full replacement value. Table 1 lists the net change in per-gallon fuel taxes that would occur if the federal fuel tax were eliminated and states replaced Highway Trust Fund grants with their own fuel taxes. States in table 1 with a negative value would need to raise state taxes less than the current federal tax level, and states with a positive value would need to raise state taxes more than the current federal tax level, or obtain other revenue sources. Although table 1 shows that a similar number of states would likely require net increases and net decreases, the range is much wider among states that would require a net increase. While some states, such as Virginia and Arizona, would likely end up with modest net decreases in fuel taxes of up to 6 cents per gallon under this scenario, nine states and the District of Columbia would face increases of more than twice that—Mississippi and Alaska, for example, would require comparatively extreme net increases of more than 30 cents per gallon, and the District of Columbia over $1 per gallon. These results reflect a cumulative effect of many factors, such as the “donor-donee” distinctions between states, equity and minimum apportionment adjustments from the Highway Trust Fund, the various allocations made to states for safety, and allocations to states and localities for transit programs. In general, states would have great flexibility in how they use funds under a turnback approach. States would have greater flexibility to develop their own programs and approaches without being limited to the current federal program categories, and would have greater discretion to define and fund projects that best suit their needs. In addition, there would be no congressionally directed spending. To the extent that federal programs affect the targeting of funds, states might shift funds to different projects. However, the current federal-aid program already gives states great discretion in setting priorities and selecting projects. In contrast, the current federal program may provide some states with flexibility they otherwise would not have.
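To make the replacement arithmetic described above concrete, the following is a minimal sketch and not part of the GAO analysis: it assumes a state's replacement rate is simply its annual Highway Trust Fund grants divided by the taxable gallons of motor fuel sold in the state, with the net change measured against the current federal gasoline tax; the two states and all dollar and gallon figures are hypothetical placeholders.

```python
# Illustrative sketch only -- not from the GAO analysis. Estimates the state
# fuel tax rate needed to replace federal Highway Trust Fund (HTF) grants if
# the federal fuel tax were eliminated. All state figures are hypothetical.

FEDERAL_GAS_TAX = 0.184  # federal gasoline tax, dollars per gallon

# Hypothetical states: annual HTF grants received (dollars) and taxable
# gallons of motor fuel sold in the state.
states = {
    "State A (donor-like)": {"htf_grants": 1.2e9, "gallons": 8.0e9},
    "State B (donee-like)": {"htf_grants": 0.9e9, "gallons": 2.5e9},
}

for name, data in states.items():
    # Per-gallon state tax needed to raise the same revenue the state now
    # receives in federal grants.
    replacement_rate = data["htf_grants"] / data["gallons"]
    # Net change relative to the federal tax drivers would no longer pay.
    net_change = replacement_rate - FEDERAL_GAS_TAX
    print(f"{name}: replacement tax {replacement_rate * 100:.1f} cents/gal, "
          f"net change {net_change * 100:+.1f} cents/gal")
```

In practice, as noted above, the actual net changes reflect many additional factors—equity and minimum apportionment adjustments and separate safety and transit allocations among them—so this calculation is only a simplified illustration of why some states would see decreases and others increases.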
For example, some federal highway programs provide that funds may be transferred (flexed) between highway and transit programs. However, under a turnback of surface transportation programs, this flexibility could be lost in some states. For example, some states have constitutional provisions that require all fuel taxes to be spent solely on roads, thus making transit and safety programs ineligible barring constitutional change. Such states would have to revise certain laws and constitutional provisions or develop alternative sources of revenue in order to replace federal funds. In addition to the individual named above, other key contributors to this report were Steve Cohen, Assistant Director; Lauren Calhoun; Robert Ciszewski; Jay Cherlow; Elizabeth Eisenstadt; Teague Lyons; Josh Ormond; and Lisa Van Arsdale. The following are GAO products pertinent to the issues discussed in this report. Other products may be found at GAO’s Web site at www.gao.gov. Surface Transportation: Preliminary Observations on Efforts to Restructure Current Program. GAO-08-478T. Washington, D.C.: February 6, 2008. Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008. Highlights of a Forum: Transforming Transportation Policy for the 21st Century. GAO-07-1210SP. Washington, D.C.: September 19, 2007. Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: Aug. 6, 2007. Motor Carrier Safety: Preliminary Information on the Federal Motor Carrier Safety Administration’s Efforts to Identify and Follow Up with High-Risk Carriers. GAO-07-1074T. Washington, D.C.: July 11, 2007. Intermodal Transportation: DOT Could Take Further Actions to Address Intermodal Barriers. GAO-07-718. Washington, D.C.: June 20, 2007. Intercity Passenger Rail: National Policy and Strategies Needed to Maximize Public Benefits from Federal Expenditures. GAO-07-15. Washington, D.C.: November 13, 2006. Freight Railroads: Industry Health Has Improved, but Concerns about Competition and Capacity Should Be Addressed. GAO-07-94. Washington, D.C.: October 6, 2006. Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006. Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005. Rail Transit: Additional Federal Leadership Would Enhance FTA’s State Safety Oversight Program. GAO-06-821. Washington, D.C.: July 26, 2006. Intermodal Transportation: Potential Strategies Would Redefine Federal Role in Developing Airport Intermodal Capabilities. GAO-05-727. Washington, D.C.: July 26, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February, 2005. Homeland Security: Effective Regional Coordination Can Enhance Emergency Preparedness. GAO-04-1009. Washington, D.C.: September 15, 2004. Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003. Surface and Maritime Transportation: Developing Strategies for Enhancing Mobility: A National Challenge. GAO-02-775. Washington, D.C.: August 30, 2002. Highway Infrastructure: Interstate Physical Conditions Have Improved, but Congestion and Other Pressures Continue. GAO-02-571. Washington, D.C.: May 31, 2002. 
Highway Public-Private Partnerships: More Rigorous Up-front Analysis Could Better Secure Potential Benefits and Protect the Public Interest. GAO-08-44. Washington, D.C.: February 8, 2008. Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials. GAO-08-198. Washington, D.C.: January 8, 2008. A Call For Stewardship: Enhancing the Federal Government’s Ability to Address Key Fiscal and Other 21st Century Challenges. GAO-08-93SP. Washington, D.C.: December 2007. Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007. Surface Transportation: Strategies Are Available for Making Existing Road Infrastructure Perform Better. GAO-07-920. Washington, D.C.: July 26, 2007. Motor Carrier Safety: A Statistical Approach Will Better Identify Commercial Carriers That Pose High Crash Risks Than Does the Current Federal Approach. GAO-07-585. Washington, D.C.: June 11, 2007. Public Transportation: Preliminary Analysis of Changes to and Trends in FTA’s New Starts and Small Starts Programs. GAO-07-812T. Washington, D.C.: May 10, 2007. Older Driver Safety: Knowledge Sharing Should Help States Prepare for Increase in Older Driver Population. GAO-07-413. Washington, D.C.: April 11, 2007. Older Driver Safety: Survey of States on Their Implementation of Federal Highway Administration Recommendations and Guidelines, an E-Supplement. GAO-07-517SP. Washington, D.C.: April 11, 2007. Performance and Accountability: Transportation Challenges Facing Congress and the Department of Transportation. GAO-07-545T. Washington, D.C.: March 6, 2007. Transportation-Disadvantaged Populations: Actions Needed to Clarify Responsibilities and Increase Preparedness for Evacuations. GAO-07-44. Washington, D.C.: December 22, 2006. Federal Transit Administration: Progress Made in Implementing Changes to the Job Access Program, but Evaluation and Oversight Processes Need Improvement. GAO-07-43. Washington, D.C.: November 17, 2006. Truck Safety: Share the Road Safely Pilot Initiative Showed Promise, but the Program’s Future Success Is Uncertain. GAO-06-916. Washington, D.C.: September 8, 2006. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Intermodal Transportation: Challenges to and Potential Strategies for Developing Improved Intermodal Capabilities. GAO-06-855T. Washington, D.C.: June 15, 2006. Federal Motor Carrier Safety Administration: Education and Outreach Programs Target Safety and Consumer Issues, but Gaps in Planning and Evaluation Remain. GAO-06-103. Washington, D.C.: December 19, 2005. Large Truck Safety: Federal Enforcement Efforts Have Been Stronger Since 2000, but Oversight of State Grants Needs Improvement. GAO-06-156. Washington, D.C.: December 15, 2005. Highway Safety: Further Opportunities Exist to Improve Data on Crashes Involving Commercial Motor Vehicles. GAO-06-102. Washington, D.C.: November 18, 2005. Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations. GAO-06-52. Washington, D.C.: November 2, 2005. Highway Congestion: Intelligent Transportation Systems Promise for Managing Congestion Falls Short, and DOT Could Better Facilitate Their Strategic Use. GAO-05-943. Washington, D.C.: September 14, 2005.
Highlights of an Expert Panel: The Benefits and Costs of Highway and Transit Investments. GAO-05-423SP. Washington, D.C.: May 6, 2005. Federal-Aid Highways: FHWA Needs a Comprehensive Approach to Improving Project Oversight. GAO-05-173. Washington, D.C.: January 31, 2005. Highway and Transit Investments: Options for Improving Information on Projects’ Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005. Highway Safety: Improved Monitoring and Oversight of Traffic Safety Data Program Are Needed. GAO-05-24. Washington, D.C.: November 4, 2004. Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004. Highway Safety: Better Guidance Could Improve Oversight of State Highway Safety Programs. GAO-03-474. Washington, D.C.: April 21, 2003. Executive Guide: Leading Practices in Capital Decision Making. GAO/AIMD-99-32. Washington, D.C.: December 1998. Congressional Directives: Selected Agencies’ Processes for Responding to Funding Instructions. GAO-08-209. Washington, D.C.: January 31, 2008. Highway and Transit Investments: Flexible Funding Supports State and Local Transportation Priorities and Multimodal Planning. GAO-07-772. Washington, D.C.: July 26, 2007. State and Local Governments: Persistent Fiscal Challenges Will Likely Emerge within the Next Decade. GAO-07-1080SP. Washington, D.C.: July 18, 2007. Highway Emergency Relief: Reexamination Needed to Address Fiscal Imbalance and Long-Term Sustainability. GAO-07-245. Washington, D.C.: February 23, 2007. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Highway Finance: States’ Expanding Use of Tolling Illustrates Diverse Challenges and Strategies. GAO-06-554. Washington, D.C.: June 28, 2006. Highway Trust Fund: Overview of Highway Trust Fund Estimates. GAO-06-572T. Washington, D.C.: April 4, 2006. Federal-Aid Highways: Trends, Effect on State Spending, and Options for Future Program Design. GAO-04-802. Washington, D.C.: August 31, 2004. U.S. Infrastructure: Funding Trends and Federal Agencies’ Investment Estimates. GAO-01-986T. Washington, D.C.: July 23, 2001. Federal Budget: Choosing Public Investment Programs. GAO/AIMD-93-25. Washington, D.C.: July 23, 1993.
Surface transportation programs need to be reexamined in the context of the nation's current unsustainable fiscal path. Surface transportation programs are particularly ready for review as the Highway Trust Fund faces a fiscal imbalance at a time when both congestion and travel demand are growing. As you requested, this report (1) provides an overview of the federal role in surface transportation and the goals and structures of federal programs, (2) summarizes GAO's conclusions about the structure and performance of these programs, and (3) provides principles to assess options for focusing future surface transportation programs. GAO's study is based on prior GAO reports, stakeholder reports and interviews, Department of Transportation documents, and the views of transportation experts. Since federal financing for the interstate system was established in 1956, the federal role in surface transportation has expanded to include broader goals, more programs, and a variety of program structures. To incorporate additional transportation, environmental and societal goals, federal surface transportation programs have grown in number and complexity. While some of these goals have been incorporated as new grant programs in areas such as transit, highway safety, and motor carrier safety, others have been incorporated as additional procedural requirements for receiving federal aid. Broad program goals, eligibility requirements, and transfer provisions give states and local governments substantial discretion for allocating most highway infrastructure funds. For transit and safety programs, broad basic grant programs are augmented by programs that either require a competitive selection process or use financial incentives to directly target federal funds toward specific goals or safety activities. Many current programs are not effective at addressing key transportation challenges such as increasing congestion and freight demand. They generally do not meet these challenges because federal goals and roles are unclear, many programs lack links to needs or performance, and the programs often do not employ the best tools and approaches. The goals of current programs are numerous and sometimes conflicting. Furthermore, states' ability to transfer highway infrastructure funds among different programs is so flexible that some program distinctions have little meaning. Moreover, programs often do not employ the best tools and approaches; rigorous economic analysis is not a driving factor in most project selection decisions and tools to make better use of existing infrastructure have not been deployed to their full potential. Modally-stovepiped funding can impede efficient planning and project selection and, according to state officials, congressionally directed spending may limit the states' ability to implement projects and efficiently use transportation funds. A number of principles can help guide the assessment of options for transforming federal surface transportation programs. These principles include: (1) ensuring goals are well defined and focused on the federal interest, (2) ensuring the federal role in achieving each goal is clearly defined, (3) ensuring accountability for results by entities receiving federal funds, (4) employing the best tools and approaches to emphasize return on targeted federal investment, and (5) ensuring fiscal sustainability. 
With the sustainability and performance issues of current programs, it is an opportune time for Congress to more clearly define the federal role in transportation and improve progress toward specific, nationally-defined outcomes. Given the scope of needed transformation, it may be necessary to shift policies and programs incrementally or on a pilot basis to gain practical lessons for a coherent, sustainable, and effective national program and financing structure to best serve the nation for the 21st century.
You are an expert at summarizing long articles. Proceed to summarize the following text: Long-term care includes many types of services needed when a person has a physical or mental disability. Individuals needing long-term care have varying degrees of difficulty in performing some activities of daily living without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may also have trouble with instrumental activities of daily living, which include such tasks as preparing food, housekeeping, and handling finances. They may have a mental impairment, such as Alzheimer’s disease, that necessitates supervision to avoid harming themselves or others or assistance with tasks such as taking medications. Although a chronic physical or mental disability may occur at any age, the older an individual becomes, the more likely a disability will develop or worsen. According to the 1999 National Long-Term Care Survey, approximately 7 million elderly had some sort of disability in 1999, including about 1 million needing assistance with at least five activities of daily living. Assistance takes place in many forms and settings, including institutional care in nursing homes or assisted living facilities, home care services, and unpaid care from family members or other informal caregivers. In 1994, approximately 64 percent of all elderly with a disability relied exclusively on unpaid care from family or other informal caregivers; even among elderly with difficulty with five activities of daily living, about 41 percent relied entirely on unpaid care. Nationally, spending from all public and private sources for long-term care for all ages totaled about $137 billion in 2000, accounting for nearly 12 percent of all health care expenditures. Over 60 percent of expenditures for long-term care services are paid for by public programs, primarily Medicaid and Medicare. Individuals finance almost one-fourth of these expenditures out-of-pocket and, less often, private insurers pay for long-term care. Moreover, these expenditures do not include the extensive reliance on unpaid long-term care provided by family members and other informal caregivers. Figure 1 shows the major sources financing these expenditures. Medicaid, the joint federal-state health-financing program for low-income individuals, continues to be the largest funding source for long-term care. Medicaid provides coverage to poor persons and to many individuals who have become nearly impoverished by “spending down” their assets to cover the high costs of their long-term care. For example, many elderly persons become eligible for Medicaid as a result of depleting their assets to pay for nursing home care that Medicare does not cover. In 2000, Medicaid paid 45 percent (about $62 billion) of total long-term care expenditures. States share responsibility with the federal government for Medicaid, paying on average approximately 43 percent of total Medicaid costs. Eligibility for Medicaid-covered long-term care services varies widely among states. Spending also varies across states—for example, in fiscal year 2000, Medicaid per capita long-term care expenditures ranged from $73 per year in Nevada to $680 per year in New York. For the national average in recent years, about 53 to 60 percent of Medicaid long-term care spending has gone toward the elderly. In 2000, nursing home expenditures dominated Medicaid long-term care expenditures, accounting for 57 percent of its long-term care spending.
Home care expenditures make up a growing share of Medicaid long-term care spending as many states use the flexibility available within the Medicaid program to provide long-term care services in home- and community-based settings. Expenditures for Medicaid home- and community-based services grew ten-fold from 1990 to 2000—from $1.2 billion to $12.0 billion. Other significant long-term care financing sources include the following:
Individuals’ out-of-pocket payments, the second largest payer of long-term care services, accounted for 23 percent (about $31 billion) of total expenditures in 2000. The vast majority (80 percent) of these payments were used for nursing home care.
Medicare spending accounted for 14 percent (about $19 billion) of total long-term care expenditures in 2000. While Medicare primarily covers acute care, it also pays for limited stays in post-acute skilled nursing care facilities and home health care.
Private insurance, which includes both traditional health insurance and long-term care insurance, accounted for 11 percent (about $15 billion) of long-term care expenditures in 2000. Less than 10 percent of the elderly and an even lower percentage of the near elderly (those aged 55 to 64) have purchased long-term care insurance, although the number of individuals purchasing long-term care insurance increased during the 1990s.
Before focusing on the increased burden that long-term care will place on federal and state budgets, it is important to look at the broader budgetary context. As we look ahead, we face an unprecedented demographic challenge with the aging of the baby boom generation. As the share of the population 65 and over climbs, federal spending on the elderly will absorb a larger and ultimately unsustainable share of the federal budget and economic resources. Federal spending for Medicare, Medicaid, and Social Security is expected to surge—nearly doubling by 2035—as people live longer and spend more time in retirement. In addition, advances in medical technology are likely to keep pushing up the cost of health care. Moreover, the baby boomers will be followed by relatively fewer workers to support them in retirement, prompting a relatively smaller employment base from which to finance these higher costs. Under the 2001 Medicare trustees’ intermediate estimates, Medicare will more than double as a share of gross domestic product (GDP) between 2000 and 2035 (from 2.2 percent to 5.0 percent) and reach 8.5 percent of GDP in 2075. The federal share of Medicaid as a percent of GDP will grow from today’s 1.3 percent to 3.2 percent in 2035 and reach 6.0 percent in 2075. Under the Social Security trustees’ intermediate estimates, Social Security spending will grow as a share of GDP from 4.2 percent to 6.6 percent between 2000 and 2035, reaching 6.7 percent in 2075. (See fig. 2.) Combined, in 2075 a full one-fifth of GDP will be devoted to federal spending for these three programs alone. To move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. Our long-term budget simulations serve to illustrate the increasing constraints on federal budgetary flexibility that will be driven by entitlement spending growth. Assume, for example, that last year’s tax reductions are made permanent, revenue remains constant thereafter as a share of GDP, and discretionary spending keeps pace with the economy.
Under these conditions, spending for net interest, Social Security, Medicare, and Medicaid would consume nearly three-quarters of federal revenue by 2030. This will leave little room for other federal priorities, including defense and education. By 2050, total federal revenue would be insufficient to fund entitlement spending and interest payments. (See fig. 3.) Beginning about 2010, the share of the population that is age 65 or older will begin to climb, with profound implications for our society, our economy, and the financial condition of these entitlement programs. In particular, both Social Security and the Hospital Insurance portion of Medicare are largely financed as pay-as-you-go systems in which current workers’ payroll taxes pay current retirees’ benefits. Therefore, these programs are directly affected by the relative size of populations of covered workers and beneficiaries. Historically, this relationship has been favorable. In the near future, however, the overall worker-to-retiree ratio will change in ways that threaten the financial solvency and sustainability of these entitlement programs. In 2000, there were 4.9 working-age persons (18 to 64 years) per elderly person, but by 2030, this ratio is projected to decline to 2.8. This decline in the overall worker-to-retiree ratio will be due to both the surge in retirees brought about by the aging baby boom generation as well as falling fertility rates, which translate into relatively fewer workers in the near future. Social Security’s projected cost increases are due predominantly to the burgeoning retiree population. Even with the increase in the Social Security eligibility age to 67, these entitlement costs are anticipated to increase dramatically in the coming decades as a larger share of the population becomes eligible for Social Security, and if, as expected, average longevity increases. As the baby boom generation retires and the Medicare-eligible population swells, the imbalance between outlays and revenues will increase dramatically. Medicare growth rates reflect not only a rapidly increasing beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. While advances in science and technology have greatly expanded the capabilities of medical science, disproportionate increases in the use of health services have been fueled by the lack of effective means to channel patients into consuming, and providers into offering, only appropriate services. Although Medicare cost growth had slowed in recent years, in fiscal year 2001 Medicare spending grew by 10.3 percent and is up 7.8 percent for the first 5 months of fiscal year 2002. To obtain a more complete picture of the future health care entitlement burden, especially as it relates to long-term care, we must also acknowledge and discuss the important role of Medicaid. Approximately 71 percent of all Medicaid dollars are dedicated to services for the aged, blind, and disabled individuals, and Medicaid spending is one of the largest components of most states’ budgets. At the February 2002 National Governors Association meeting, governors reported that during a time of fiscal crisis for states, the growth in Medicaid is creating a situation in which states are faced with either making major cuts in programs or being forced to raise taxes significantly. Further, in a 2001 survey, 24 states cited increased costs for nursing homes and home- and community-based services as among the top factors in Medicaid cost growth. 
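The trustees’ intermediate projections quoted above can be combined to check the totals cited for these three programs. The percentages below are those given above; the sums are simple arithmetic added here for illustration.

```latex
\[
\begin{aligned}
\text{2035:}\quad & 5.0\%~(\text{Medicare}) + 3.2\%~(\text{federal Medicaid}) + 6.6\%~(\text{Social Security}) = 14.8\%\ \text{of GDP} \\
\text{2075:}\quad & 8.5\% + 6.0\% + 6.7\% = 21.2\%\ \text{of GDP} \approx \text{one-fifth of GDP}
\end{aligned}
\]
```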
Over the longer term, the increase in the number of elderly will add considerably to the strain on federal and state budgets as governments struggle to finance increased Medicaid spending. In addition, this strain on state Medicaid budgets may be exacerbated by fluctuations in the business cycle, such as the recent economic slowdown. State revenues decline during economic downturns, while the needs of the disabled for assistance remain constant. In coming decades, the sheer number of aging baby boomers will swell the number of elderly with disabilities and the need for services. These overwhelming numbers offset the slight reductions in the prevalence of disability among the elderly reported in recent years. In 2000, individuals aged 65 or older numbered 34.8 million people—12.7 percent of our nation’s total population. By 2020, that percentage will increase by nearly one-third to 16.5 percent—one in six Americans—and will represent nearly 20 million more elderly than there are today. By 2040, the number of elderly aged 85 years and older—the age group most likely to need long-term care services—is projected to more than triple from about 4 million to about 14 million (see fig. 4). It is difficult to precisely predict the future increase in the number of the elderly with disabilities, given the counterbalancing trends of an increase in the total number of elderly and a possible continued decrease in the prevalence of disability. For the past two decades, the number of elderly with disabilities has remained fairly constant while the percentage of those with disabilities has fallen between 1 and 2 percent a year. Possible factors contributing to this decreased prevalence of disability include improved health care, improved socioeconomic status, and better health behaviors. The positive benefits of the decreased prevalence of disability, however, will be overwhelmed by the sheer numbers of aged baby boomers. The total number of disabled elderly is projected to increase to between one-third and twice current levels, or as high as 12.1 million by 2040. The increased number of disabled elderly will exacerbate current problems in the provision and financing of long-term care services. Approximately one in five adults with long-term care needs and living in the community reports an inability to receive needed care, such as assistance in toileting or eating, often with adverse consequences. In addition, disabled elderly may lack family support or the financial means to purchase medical services. Long-term care costs can be financially catastrophic for families. Services, such as nursing home care, are very expensive; while costs can vary widely, a year in a nursing home typically costs $50,000 or more, and in some locations can be considerably more. Because of financial constraints, many elderly rely heavily on unpaid caregivers, usually family members and friends; overall, the majority of care received in the community is unpaid. However, in coming decades, fewer elderly may have the option of unpaid care because a smaller proportion may have a spouse, adult child, or sibling to provide it. By 2020, the number of elderly who will be living alone with no living children or siblings is estimated to reach 1.2 million, almost twice the number without family support in 1990. In addition, geographic dispersion of families may further reduce the number of unpaid caregivers available to elderly baby boomers.
Public and private spending on long-term care currently totals about $137 billion for persons of all ages; spending for the elderly alone is projected to increase two-and-a-half to four times over the next 40 to 50 years, reaching as much as $379 billion in constant dollars, according to one source. (See fig. 5.) Estimates of future spending are imprecise, however, due to the uncertain effect of several important factors, including how many elderly will need assistance, the types of care they will use, and the availability of public and private sources of payment for care. Absent significant changes in the availability of public and private payment sources, however, future spending is expected to continue to rely heavily on public payers, particularly Medicaid, which estimates indicate pays about 36 to 37 percent of long-term care expenditures for the elderly. One factor that will affect spending is how many elderly will need assistance. As I have previously discussed, even with continued decreases in the prevalence of disability, aging baby boomers are expected to have a disproportionate effect on the demand for long-term care. Another factor influencing projected long-term care spending is the type of care that the baby boom generation will use. Currently, expenditures for nursing home care greatly exceed those for care provided in other settings. Average expenditures per elderly person in a nursing home can be about four times greater than average expenditures for those receiving paid care at home. The past decade has seen increases in paid home care as well as in assisted living facilities, a relatively new and developing type of housing in which an estimated 400,000 elderly with disabilities resided in 1999. It is unclear what effect continued growth in paid home care, assisted living facilities, or other care alternatives may have on future expenditures. Any increase in the availability of home care may reduce the average cost per disabled person, but the effect could be offset if there is an increase in the use of paid home care by persons currently not receiving these services. Changes in the availability of public and private sources to pay for care will also affect expenditures. Private long-term care insurance has been viewed as a possible means of reducing catastrophic financial risk for the elderly needing long-term care and relieving some of the financial burden currently falling on public long-term care programs. Increases in private insurance may lower public expenditures but raise spending overall because insurance increases individuals’ financial resources when they become disabled and allows the purchase of additional services. The number of policies in force remains relatively small despite improvements in policy offerings and the tax deductibility of premiums. However, as we have previously testified, questions about the affordability of long-term care policies and the value of the coverage relative to the premiums charged have posed barriers to more widespread purchase of these policies. Further, many baby boomers continue to assume they will never need such coverage or mistakenly believe that Medicare or their own private health insurance will provide comprehensive coverage for the services they need.
If private long-term care insurance is expected to play a larger role in financing future generations’ long-term care needs, consumers need to be better informed about the costs of long-term care, the likelihood that they may need these services, and the limits of coverage through public programs and private health insurance. With or without increases in the availability of private insurance, Medicaid and Medicare are expected to continue to pay for the majority of long-term care services for the elderly in the future. Without fundamental financing changes, Medicaid can be expected to remain one of the largest funding sources for long-term care services for aging baby boomers, with Medicaid expenditures for long-term care for the elderly reaching as high as $132 billion by 2050. As I noted previously, this increasing burden will strain both federal and state governments. Given the anticipated increase in demand for long-term care services resulting from the aging of the baby boom generation, the concerns about the availability of services, and the expected further stress on federal and state budgets and individuals’ financial resources, some policymakers and advocates have called for long-term care financing reforms. As further deliberation is given to any long-term care financing reforms, I would like to close by suggesting several considerations for policymakers to keep in mind. At the outset, it is important to recognize that long-term care services are not just another set of traditional health care services. Meeting acute and chronic health care needs is an important element of caring for aging and disabled individuals. Long-term care, however, encompasses services related to maintaining quality of life, preserving individual dignity, and satisfying preferences in lifestyle for someone with a disability severe enough to require the assistance of others in everyday activities. Some long-term care services are akin to other health care services, such as personal assistance with activities of daily living or monitoring or supervision to cope with the effect of dementia. Other aspects of long-term care, such as housing, nutrition, and transportation, are services that all of us consume daily but become an integral part of long-term care for a person with a disability. Disabilities can affect housing needs, nutritional needs, or transportation needs. But, what is more important is that where one wants to live or what activities one wants to pursue also affects how needed services can be provided. Providing personal assistance in a congregate setting such as a nursing home or assisted living facility may satisfy more of an individual’s needs, be more efficient, and involve more direct supervision to ensure better quality than when caregivers travel to individuals’ homes to serve them one on one. Yet, those options may conflict with a person’s preference to live at home and maintain autonomy in determining his or her daily activities. Keeping in mind that policies need to take account of the differences involved in long-term care, let me offer several considerations as you seek to shape effective long-term care financing reforms. These include: Determining societal responsibilities. A fundamental question is how much the choices of how long-term care needs are met should depend upon an individual’s own resources or whether society should supplement those resources to broaden the range of choices. 
For a person without a disability requiring long-term care, where to live and what activities to pursue are lifestyle choices based on individual preferences and resources. However, for someone with a disability, those lifestyle choices affect the costs of long-term care services. The individual’s own resources—including financial resources and the availability of family or other informal supports—may not be sufficient to preserve some of their choices and also obtain needed long-term care services. Societal responsibilities may include maintaining a safety net to satisfy individual needs for assistance. However, the safety net may not provide a full range of choices in how those needs are met. Persons who require assistance multiple times a day and lack family members to provide some share of this assistance may not be able to have their needs satisfied in their own homes. The costs of meeting such extensive needs may mean that sufficient public support is available only in settings such as assisted living facilities or nursing homes. More extensive public support may be extended, but decisions to do so should carefully consider affordability in the context of competing demands for our nation’s resources. Considering the potential role of social insurance in financing. Government’s role in many situations has extended beyond providing a safety net. Sometimes this extended government role has been a result of efficiencies in having government undertake a function, and in other cases this role has been a policy choice. Some proposals have recommended either voluntary or mandatory social insurance to provide long-term care assistance to broad groups of beneficiaries. In evaluating such proposals, careful attention needs to be paid to the limits and conditions under which services will be provided. In addition, who will be eligible and how such a program will be financed are critical choices. As in defining a safety net, it is imperative that any option under consideration be thoroughly assessed for its affordability over the longer term. Encouraging personal preparedness. Becoming disabled is a risk. Not everyone will experience disability during his or her lifetime and even fewer persons will experience a severe disability requiring extensive assistance. This is the classic situation in which having insurance to provide additional resources to deal with a possible disability may be better than relying on personally saving for an event that may never occur. Insurance allows both persons who eventually will become disabled and those who will not to use more of their economic resources during their lifetime and to avoid having to put those resources aside for the possibility that they may become disabled. The public sector has two important potential roles in encouraging personal preparedness. The first is to adequately educate people about the boundaries between personal and societal responsibilities. Only if the limits of public support are clear will individuals be likely to take steps to prepare for a possible disability. Currently, one of the factors contributing to the lack of preparation for long-term care among the elderly is a widespread misunderstanding about what services Medicare will cover. The second public sector role may be to assure the availability of sound private long-term care insurance policies and possibly to create incentives for their purchase. 
Progress has been made in improving the value of insurance policies through state insurance regulation and strengthening the requirements for policies qualifying for favorable tax treatment through the Health Insurance Portability and Accountability Act of 1996. However, long-term care insurance is still an evolving product, and given the flux in how long-term care services are delivered, it is important to monitor whether long-term care insurance regulations need adjustments to ensure that consumers receive fair value for their premium dollars. Recognizing the benefits, burdens, and costs of informal caregiving.
As more and more of the baby boomers enter retirement age, spending for Medicare, Medicaid, and Social Security is expected to absorb correspondingly larger shares of federal revenue and crowd out other spending. The aging of the baby boomers will also increase the demand for long-term care and contribute to federal and state budget burdens. The number of disabled elderly who cannot perform daily living activities without assistance is expected to double in the future. Long-term care spending from public and private sources--about $137 billion for persons of all ages in 2000--will rise dramatically as the baby boomers age. Without fundamental financing changes, Medicaid--which pays more than one-third of long-term care expenditures for the elderly--can be expected to remain one of the largest funding sources, straining both federal and state governments.
There is an increasing demand, coming from the Congress and the public, for a smaller government that works better and costs less. Having valuable, accurate, and accessible financial and programmatic information is a critical element for any improvement effort to succeed. Furthermore, increasing the quality and speed of service delivery while reducing costs will require the government to make significant investments in three fundamental assets—personnel, knowledge, and capital property/fixed assets. Investments in information technology (IT) projects can dramatically affect all three of these assets. Indeed, the government’s ability to improve performance and reduce costs in the information age will depend, to a large degree, on how well it selects and uses information systems investments to modernize its often outdated operations. However, the impact of information technology is not necessarily dependent on the amount of money spent, but rather on how the investments are selected and managed. This, in essence, is the challenge facing federal executives: increasing the return on money spent on IT projects by spending money more wisely, not faster. IT projects, however, are often poorly managed. For example, one market research group estimates that about a third of all U.S. IT projects are canceled, at an estimated cost in 1995 of over $81 billion. In the last 12 years, the federal government has obligated at least $200 billion for information management with mixed results at best. Yet despite this huge investment, government operations continue to be hampered by inaccurate data and inadequate systems. Too often, IT projects cost much more and produce much less than what was originally envisioned. Even worse, often these systems do not significantly improve mission performance or they provide only a fraction of the expected benefits. Of 18 major federal agencies, 7 have an IT effort that has been identified as high risk by either the Office of Management and Budget (OMB) or us. Some private and public sector organizations, on the other hand, have designed and managed IT to improve their organizational performance. In a 1994 report, we analyzed the information management practices of several leading private and state organizations. These leading organizations were identified as such by their peers and independent researchers because of their progress in managing information to improve service quality, reduce costs, and increase work force productivity and effectiveness. From this analysis, we derived 11 fundamental IT management practices that, when taken together, provide the basis for the successful outcomes that we found in leading organizations. (See figure 1.1.) One of the best practices exhibited by leading organizations was that they manage information systems projects as investments. This particular practice offers organizations great potential for gaining better control over their IT expenditures. In the short term (within 2 years), this practice serves as a powerful tool for carefully managing and controlling IT expenditures and better understanding the explicit costs and projected returns for each IT project. In the long term (from 3 to 5 years), this practice serves as an effective process for linking IT projects to organizational goals and objectives. However, managing IT projects as investments works most effectively when implemented as part of an integrated set of management practices.
For example, project management systems must also be in place, reengineering improvements analyzed, and planning processes linked to mission goals. While the specific processes used to implement an investment approach may vary depending upon the structure of the organization (e.g., centralized versus decentralized operations), we nonetheless found that the leading organizations we studied shared several common management practices related to the strategic use of information and information technologies. Specifically, they maintained a decision-making process consisting of three phases—selection, control, and evaluation—designed to minimize risks and maximize return on investment. (See figure 1.2.) The Congress has passed several pieces of legislation that lay the groundwork for agencies to establish an investment approach for managing IT. For instance, revisions to the Paperwork Reduction Act (PRA) (Public Law 104-13) have put more emphasis on evaluating the operational merits of information technology projects. The Chief Financial Officers (CFO) Act (Public Law 101-576) focuses on the need to significantly improve financial management and reporting practices of the federal government. Having accurate financial data is critical to establishing performance measures and assessing the returns on IT investments. Finally, the Government Performance and Results Act (GPRA) (Public Law 103-62) requires agencies to set results-oriented goals, measure performance, and report on their accomplishments. In addition, the recently passed Information Technology Management Reform Act (ITMRA) (Division E of Public Law 104-106) requires federal agencies to focus more on the results achieved through IT investments while streamlining the federal IT procurement process. Specifically, this act, which became effective August 8 of this year, introduces much more rigor and structure into how agencies approach the selection and management of IT projects. Among other things, the head of each agency is required to implement a process for maximizing the value and assessing and managing the risks of the agency’s IT acquisitions. Appendix V summarizes the primary IT investment provisions contained in ITMRA. ITMRA also heightens the role of OMB in supporting and overseeing agencies’ IT management activities. The Director of OMB is now responsible for promoting and directing that federal agencies establish capital planning processes for IT investment decisions. The Director is also responsible for evaluating the results of agency IT investments and enforcing accountability. The results of these decisions will be used to develop recommendations for the President’s budget. OMB has begun to take action in these areas. In November 1995, OMB, with substantial input from GAO, published a guide designed to help federal agencies systematically manage and evaluate their IT-related investments. This guide was based on the investment processes found at the leading organizations. Recent revisions to OMB Circular A-130 on federal information resources management have also placed greater emphasis on managing information system projects as investments. And the recently issued Part 3 of OMB Circular A-11, which replaced OMB Bulletin 95-03, “Planning and Budgeting for the Acquisition of Fixed Assets,” provides additional guidance and information requirements for major fixed asset acquisitions.
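To make the three-phase selection, control, and evaluation cycle described above more concrete, the sketch below models it for a hypothetical portfolio. The phase names follow the framework discussed in this report; the data fields, the risk-adjusted ranking formula, and the 10 percent cost tolerance are illustrative assumptions, not prescriptions drawn from OMB guidance or any agency's policy.

```python
# Illustrative sketch only: a toy model of the select/control/evaluate cycle.
# Field names, the ranking formula, and the thresholds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    expected_cost: float      # life-cycle cost estimate, $ millions
    expected_benefit: float   # projected mission benefit, $ millions equivalent
    risk: float               # 0 = low risk, 1 = high risk
    actual_cost: float = 0.0
    actual_benefit: float = 0.0

def select(candidates, budget):
    """Selection phase: rank every candidate on risk-adjusted return and fund within budget."""
    ranked = sorted(candidates,
                    key=lambda p: (p.expected_benefit / p.expected_cost) * (1.0 - p.risk),
                    reverse=True)
    funded, spent = [], 0.0
    for project in ranked:
        if spent + project.expected_cost <= budget:
            funded.append(project)
            spent += project.expected_cost
    return funded

def control(project, cost_tolerance=0.10):
    """Control phase: flag a funded project whose interim costs drift past the baseline."""
    drift = (project.actual_cost - project.expected_cost) / project.expected_cost
    return "escalate for senior review" if drift > cost_tolerance else "continue"

def evaluate(project):
    """Evaluation phase: compare realized return with the business case and record a lesson."""
    planned = project.expected_benefit / project.expected_cost
    actual = (project.actual_benefit / project.actual_cost) if project.actual_cost else 0.0
    lesson = "benefit estimate was optimistic" if actual < planned else "estimate held up"
    return {"planned_return": planned, "actual_return": actual, "lesson_for_next_selection": lesson}
```

The essential feature, as the leading organizations described above illustrate, is the feedback loop: what the evaluation phase records becomes an input to the criteria used in the next selection round.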
The Chairman, Senate Subcommittee on Oversight of Government Management and the District of Columbia, Committee on Governmental Affairs and the Chairman and Ranking Minority Member, House Committee on Government Reform and Oversight, requested that we compare and contrast the management practices and decision processes used by leading organizations with a small sample of federal agencies. The process used by leading organizations is embodied in OMB’s Evaluating Information Technology Investments: A Practical Guide and specific provisions contained in the Information Technology Management Reform Act of 1996. The agencies we examined are the National Aeronautics and Space Administration (NASA) ($1.6 billion spent on IT in FY 1994), National Oceanic and Atmospheric Administration (NOAA) ($296 million spent on IT in FY 1994), Environmental Protection Agency (EPA) ($302 million spent on IT in FY 1994), Coast Guard ($157 million spent on IT in FY 1994), and the Internal Revenue Service (IRS) ($1.3 billion spent on IT in FY 1994). We selected the federal agencies for our sample based on one or more of the following characteristics: (1) large IT budgets, (2) expected IT expenditure growth rates, and (3) programmatic risk as assessed by GAO and OMB. In addition, the Coast Guard was selected because of its progress in implementing an investment process. Collectively, these agencies spent about $3.7 billion on IT in FY 1994—16 percent of the total spent on IT. Our review focused exclusively on how well these five agencies manage information technology as investments, one of the 11 practices used by leading organizations to improve mission performance, as described in our best practices report. As such, our evaluation only focused on policies and practices used at the agencywide level; we did not evaluate the agencies’ performance in the 10 other practices. In addition, we did not systematically examine the overall IT track records of each agency. During our review of agency IT investment decision-making processes, we did the following: reviewed agencies’ policies, practices, and procedures for managing IT investments; interviewed senior executives, program managers, and IRM professionals; and determined whether agencies followed practices similar to those used by leading organizations to manage information systems projects as investments. We developed the attributes needed to manage information systems projects as investments from the Paperwork Reduction Act, the Federal Acquisition Streamlining Act, OMB Circular A-130, GAO’s “best practices” report on strategic information management, GAO’s strategic information management toolkit, and OMB’s guide Evaluating Information Technology Investments: A Practical Guide. Many of the characteristics of this investment approach are contained in the Information Technology Management Reform Act of 1996 (as summarized in appendix V). However, this law was not in effect at the time of our review. To identify effects associated with the presence or absence of investment controls, we reviewed agencies’ reports and documents, related GAO and Inspector General reports, and other external reports. We also discussed the impact of the agencies’ investment controls with senior executives, program managers, and IRM professionals to get an agencywide perspective on the controls used to manage IT investments. Additionally, we reviewed agency documentation dealing with IT selection, budgetary development, and IT project reviews. 
To determine how much each agency spent on information technology, we asked each agency for information on spending, staffing, and their 10 largest IT systems and projects. The agencies used a variety of sources for the same data elements, which may make comparisons among agencies unreliable. While data submitted by the agencies were validated by agency officials, we did not independently verify the accuracy of the data. Most of our work was conducted at agencies’ headquarters in Washington, D.C. In addition, we visited NOAA offices in Rockville, Maryland, and the National Weather Service in Silver Spring, Maryland. We also visited NASA program, financial, and IRM officials at Johnson Space Center in Houston, Texas, and Ames Research Center in San Francisco, California, to learn how they implement NASA policy on IT management. We performed the majority of our work from April 1995 through September 1995, with selected updates through July 1996, in accordance with generally accepted government auditing standards. We updated our analyses of IRS and NASA in conjunction with other related audit work. In addition, several of the agencies provided us with updated information as part of their comments on a draft version of the report. Many of these changes have only recently occurred and we have not fully evaluated them to determine their effect on the agency’s IT investment process. We provided and discussed a draft of this report with officials from OMB, EPA, NASA, NOAA, IRS, and the Coast Guard, and have incorporated their comments where appropriate. OMB’s written comments, as well as our evaluation, are provided in appendix I. Appendix II profiles each agency’s IT spending, personnel, and major projects. Appendix III provides a brief description of an IT investment process approach based on work by GAO and OMB. Appendix IV provides a brief overview of each agency’s IT management processes. Because of its relevance to this report, the investment provisions of the Information Technology Management Reform Act of 1996 are summarized in appendix V. Major contributors to this report are listed in appendix VI. All of the agencies we studied—NASA, IRS, the Coast Guard, NOAA, and EPA—had at least elements or portions of an IT investment process in place. For instance, the Coast Guard had a selection process with decision criteria that included an analysis of cost, risk, and return data; EPA had created an executive management group to address cross-agency IT issues; NASA and NOAA utilized program control meetings to ensure senior management involvement in monitoring the progress of important ongoing IT projects; and IRS had developed a systems investment evaluation review methodology and used it to conduct postimplementation reviews of some Tax Systems Modernization projects. However, none of these five agencies had implemented a complete, institutionalized investment approach that would fulfill the requirements of PRA and ITMRA. Consequently, IT decision-making at these agencies was often inconsistent or based on the priorities of individual units rather than the organization as a whole. Additionally, cost-benefit and risk analyses were rarely updated as projects proceeded and were not used for managing project results. Also, the mission-related benefits of implemented systems were often difficult to determine since agencies rarely collected or compared data on anticipated versus actual costs and benefits.
In general, we found that the IT investment control processes used at the case study agencies at the time of our review contained four main weaknesses. While all four weaknesses may not have been present at each agency, in comparison to leading organizations, the case study agencies
lacked a consistent process (used at all levels of the agency) for uniformly selecting and managing systems investments;
focused their selection processes on selected efforts, such as justifying new project funding or focusing on projects already under development, rather than managing all IT projects—new, under development, and operational—as a portfolio of competing investments;
made funding decisions without giving adequate attention to management control or evaluation processes; and
made funding decisions based on negotiations or undefined decision criteria and did not have the up-to-date, accurate data needed to support IT investment decisions.
Appendix IV provides a brief overview of how each agency’s current processes for selecting, controlling, and evaluating IT projects worked. Leading organizations use the selection, control, and evaluation decision-making processes in a consistent manner throughout different units. This enables the organization, even one that is highly decentralized, to make trade-offs between projects, both within and across business units. Figure 2.1 illustrates how this process can be applied to the federal government where major cabinet departments may have several agencies under their purview. IT portfolio investment processes can exist at both the departmental and agency levels. As with leading organizations, the key factor is being able to determine which IT projects and resources are shared (and should be reviewed at the departmental level) and which are unique to each agency. Three common criteria used by leading organizations are applicable in the federal setting. These threshold criteria include (1) high-dollar, high-risk IT projects (risk and dollar amounts having been already defined), (2) cross-functional projects (two or more organizational units will benefit from the project), and (3) common infrastructure support (hardware and telecommunications). Projects that meet these particular threshold criteria are discussed, reviewed, and decided upon at a departmentwide level. The key to making this work is having clearly defined roles, responsibilities, and criteria for determining the types of projects that will be reviewed at the different organizational levels. As described in ITMRA, agency heads are to implement a process for maximizing the value and assessing and managing the risks of IT investments. Further, this process should be integrated with the agency’s budget, financial, and program management process(es). Whether highly centralized or decentralized, matrixed or hierarchical, agencies can most effectively reap the benefits of an investment process by developing and maintaining consistent processes within and across their organizations. One of the agencies we reviewed—the Coast Guard—used common investment criteria for making cross-agency IT decisions. IRS had defined some criteria, but was not yet using these criteria to make decisions. The three other agencies—NASA, EPA, and NOAA—chose IT projects based on inconsistent or nonexistent investment processes. There was little or no uniformity in how risks, benefits, and costs of various IT projects across offices and divisions within these three agencies were evaluated.
Thus, cross-comparisons between systems of similar size, function, or organizational impact were difficult at best. More important, management had no assurance that the most important mission objectives of the agency were being met by the suite of system investments that was selected. NASA, for instance, allowed its centers and programs to make their own IT funding decisions for mission-critical systems. These decisions were made without an agencywide mechanism in place to identify low-value IT projects or costs that could be avoided by capitalizing on opportunities for data sharing and system consolidation across NASA units. As a result, identifying cross-functional system opportunities was problematic at best. The scope of this problem became apparent as a result of a special NASA IT review. In response to budget pressures, NASA conducted an agencywide internal information systems review to identify cost savings. The resulting March 1995 report described numerous instances of duplicate IT resources, such as large-scale computing and wide area network services, that were providing similar functions. A subsequent NASA Inspector General’s (IG) report, also issued in March 1995, substantiated this special review, finding that at one center NASA managers had expended resources to purchase or develop information systems that were already available elsewhere, either within NASA or, in some cases, within that center itself. While this special review prompted NASA to plan several consolidation efforts, such as consolidating its separate wide area networks (for a NASA projected savings of $236 million over 5 years), the risk of purchasing duplicate IT resources remained because of weaknesses in its current decentralized decision-making process. For example, NASA created chief information officer (CIO) positions for NASA headquarters and for each of its 23 centers. These CIOs have a key role in improving agencywide IT cooperation and coordination. However, the CIOs have limited formal authority and to date have only exercised control over NASA’s administrative systems—which account for about 10 percent of NASA’s total IT budget. With more defined CIO roles, responsibility, and authority, it is likely that additional opportunities for efficiencies will be identified. NASA recently established a CIO council to establish high-level policies and standards, approve information resources management plans, and address issues and initiatives. The council will also serve as the IT capital investment advisory group to the proposed NASA Capital Investment Council. NASA plans for this Capital Investment Council to have responsibility for looking at all capital investments across NASA, including those for IT. NASA’s proposed Capital Investment Council may fill this need for identifying cross-functional opportunities; however, it is too early to evaluate its impact. By having consistent, quantitative, and analytical processes across NASA that address both mission-critical and administrative systems, NASA could more easily identify cross-functional opportunities. NASA has already demonstrated that savings can be achieved by looking within mission-critical systems for cross-functional opportunities. 
For instance, NASA estimated that $74 million was saved by developing a combined Space Station and Space Shuttle control center using largely commercial off-the-shelf software and a modular development approach, rather than the original plan of having two separate control centers that used mainframe technology and custom software. EPA, like NASA, followed a decentralized approach for making IT investment decisions. Program offices have had control and discretion over their specific IT budgets, regardless of project size or possible cross-office impact. As we have previously reported, this has led to stovepiped systems that do not have standard data definitions or common interfaces, making it difficult to share environmental data across the agency. This is important because sharing environmental data across the agency is crucial to implementing EPA’s strategic goals. In 1994, EPA began to address this problem by creating a senior management Executive Steering Committee (ESC) charged with ensuring that investments in agencywide information resources are managed efficiently and effectively. This committee, comprised of senior EPA executives, has the responsibility to (1) recommend funding on major system development efforts and (2) allocate the IT budget reserved for agencywide IRM initiatives, such as geographical information systems (GIS) support and data standards. At the time of our review, the ESC had not reviewed or made recommendations on any major information system development efforts. Instead, the ESC focused its activity on spending funds allocated to it for agencywide IRM policy initiatives, such as intra-agency data standards. The ESC met on June 26, 1996, to assess the impact of ITMRA upon EPA’s IT management process. In conducting their selection processes, leading organizations assess and manage the different types of IT projects, such as mission-critical or infrastructure, at all different phases of their life cycle, in order to create a complete strategic investment portfolio. (See figure 2.2.) By scrutinizing and analyzing their entire IT portfolio, managers can examine the costs of maintaining existing systems versus investing in new ones. By continually and rigorously reevaluating the entire project portfolio based on mission priorities, organizations can reach decisions on systems based on overall contribution to organizational goals. Under ITMRA, agencies will need to compare and prioritize projects using explicit quantitative and qualitative decision criteria. At the federal agencies we studied, some prioritization of projects was conducted, but none made managerial trade-offs across all types of projects. IRS, NOAA, and the Coast Guard each conducted some type of portfolio analyses; EPA and NASA did not. Additionally, the portfolio analyses that were performed generally covered projects that were either high dollar, new, or under development. For example, in 1995 we reported that IRS executives were consistently maintaining that all 36 TSM projects, estimated to cost up to $10 billion through the year 2001, were equally important and must all be completed for the modernization to succeed. This approach, as well as the accompanying initial failure to rank the TSM projects according to their prioritized needs and mission performance improvements, has meant that IRS could not be sure that the most important projects were being developed first.
Since our 1995 report, IRS has begun to rank and prioritize all of the proposed TSM projects using cost, risk, and return decision criteria. However, these decision criteria are largely qualitative, the data used for decisions were not validated or reliable, and analyses were not based on calculations of expected return on investment. In addition, according to IRS, its investment review board uses a separate process with different criteria for analyzing operational systems. IRS also said that the board does not review research and development (R&D) systems or field office systems. Using separate processes for some system types and not including all systems prevents IRS from making comparisons and trade-offs as part of a complete IT portfolio. Of all the agencies we reviewed, the Coast Guard had the most experience using a comprehensive selection phase. In 1991, the Coast Guard started a strategic information resources management process and shortly thereafter initiated an IT investment process. Under this investment process, a Coast Guard working group from the IRM office ranks and prioritizes new IT projects and those under development based on explicit risk and return decision criteria. A senior management board meets annually to rank the projects and decide on priorities. The Coast Guard has derived benefits from its project selection process. During the implementation of its IT investment process, the Coast Guard identified opportunities for systems consolidation. For example, the Coast Guard reported that five separate personnel systems are being incorporated into the Personnel Management Information System/Joint Military Pay System II for a cost avoidance of $10.2 million. The Coast Guard also identified other systems consolidation opportunities that, if implemented, could result in a total cost savings of $77.4 million. However, at the time of our review, the Coast Guard’s selection process was still incomplete. For example, R&D projects and operational systems were not included in the prioritization process. As a result, the Coast Guard could not make trade-offs between all types of proposed systems investments, creating a risk that new systems would be implemented that duplicate existing systems. Additionally, the Coast Guard was at risk of overemphasizing investments in one area, such as maintenance and enhancements for existing systems, at the expense of higher value investments in other areas, such as software applications development supporting multiple unit needs. Leading organizations continue to manage their investments once selection has occurred, maintaining a cycle of continual control and evaluation. Senior managers review the project at specific milestones as the project moves through its life cycle and as the dollar amounts spent on the project increase. (See figure 2.3.) At these milestones, the executives compare the expected costs, risks, and benefits of earlier phases with the actual costs incurred, risks encountered, and benefits realized to date. This enables senior executives to (1) identify and focus on managing high-potential or high-risk projects, (2) reevaluate investment decisions early in a project’s life cycle if problems arise, (3) be responsive to changing external and internal conditions in mission priorities and budgets, and (4) learn from past success and mistakes in order to make better decisions in the future. 
The level of management attention focused on each of the three investment phases varies in proportion to such factors as the relative importance of each project in the portfolio, the relative project risks, and the relative number of projects in different phases of the system development process. The control phase focuses senior executive attention on ongoing projects to regularly monitor their interim progress against projected risks, cost, schedule, and performance. The control phase requires projects to be modified, continued, accelerated, or terminated based on the results of those assessments. In the evaluation phase, the attention is focused on implemented systems to give a final assessment of risks, costs, and returns. This assessment is then used to improve the selection of future projects. Similarly, in the federal government, GPRA forces a shift in the focus of federal agencies—away from such traditional concerns as staffing and activity levels and towards one overriding issue: results. GPRA requires agencies to set goals, measure performance, and report on their accomplishments. Just as in leading organizations, GPRA, in concert with the CFO Act, is intended to bring a more disciplined, businesslike approach to the management of federal programs. The agencies we reviewed focused most of their resources and attention on selecting projects and gave less attention to controlling or evaluating those projects. While IRS, NASA, and NOAA had implemented control mechanisms, and IRS had developed a postimplementation review methodology, none of the agencies had complete and comprehensive control and evaluation processes in place. Specifically, in the five case study agencies we evaluated, we found that control mechanisms were driven primarily by cost and schedule concerns without any focus on quantitative performance measures, evaluations of actual versus projected returns were rarely conducted, and information and lessons learned in either the control or evaluation phases were not systematically fed back to the selection phase to improve the project selection process. Leading organizations maintain control of a project throughout its life cycle by regularly measuring its progress against not only projected cost and schedule estimates, but also quantitative performance measures, such as benefits realized or demonstrated in pilot projects to date. To do this, senior executives from the program, IRM, and financial units continually monitor projects and systems for progress and identify problems. When problems are identified, they take immediate action to resolve them, minimize their impact, or alter project expectations. Legislation now requires federal executives to conduct this type of rigorous project monitoring. With the passage of ITMRA, agencies are required to demonstrate, through performance measures, how well IT projects are improving agency operations and mission effectiveness. Senior managers are also to receive independently verifiable information on cost, technical and capability requirements, timeliness, and mission benefit data at project milestones. Furthermore, pursuant to the Federal Acquisition Streamlining Act of 1994 (Public Law 103-355), if a project deviates from cost, schedule, and performance goals, the agency head is required to conduct a timely review of the project and identify appropriate corrective action—to include project termination.
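The statutory requirement just described reduces to a simple test at each milestone: compare actual results against the baseline cost, schedule, and performance goals, and trigger a review when any dimension drifts too far. The sketch below illustrates that test; the 10 percent tolerance, the field names, and the sample figures are invented for the example and are not drawn from the statute or any agency's policy.

```python
# Hypothetical milestone check against baseline cost, schedule, and performance goals.
# Tolerance, field names, and sample figures are illustrative only.
def milestone_review(baseline: dict, actual: dict, tolerance: float = 0.10):
    """Return the dimensions that deviate from the baseline by more than `tolerance`."""
    deviations = {}
    for goal in ("cost", "schedule_months", "performance"):
        planned, observed = baseline[goal], actual[goal]
        # For performance, falling short is the problem; for cost and schedule, overruns are.
        if goal == "performance":
            drift = (planned - observed) / planned
        else:
            drift = (observed - planned) / planned
        if drift > tolerance:
            deviations[goal] = round(drift, 3)
    return deviations

baseline = {"cost": 120.0, "schedule_months": 24, "performance": 0.95}
actual   = {"cost": 150.0, "schedule_months": 25, "performance": 0.80}
print(milestone_review(baseline, actual))
# {'cost': 0.25, 'performance': 0.158}
```

In practice, the tolerances and the performance measures themselves would come from the project's approved baseline rather than from fixed constants as in this sketch.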
Two of the agencies we reviewed—the Coast Guard and EPA—did not use management control processes that focused on IT systems projects. The other three agencies—IRS, NOAA, and NASA—had management control processes that focused primarily on schedule and cost concerns, but not interim evaluations of performance and results. Rarely did we find examples in which anticipated benefits were compared to results at critical project milestones. We also found few examples of lessons that were learned during the control phase being cycled back to improve the selection phase. To illustrate, both IRS and NASA used program control meetings (PCMs) to keep senior executives informed of the status of their major systems by requiring reports, in the form of self-assessments, from the project managers. However, these meetings did not focus on how projects were achieving interim, measurable improvement targets for quality, speed, and service that could form the basis for project decisions about major modifications or termination. IRS, for instance, used an implementation schedule to track different components of each of its major IT projects under TSM. Based on our discussions with IRS officials, the PCMs focused on factors bearing on real or potential changes in project costs or schedule. Actual, verified data on interim application or system testing results—compared to projected improvements in operational, mission improvements—were not evaluated. At NASA, senior program executives attended quarterly Program Management Council (PMC) meetings to be kept informed of major programs and projects and to take action when problems arose. While not focused exclusively on IT issues, the PCMs were part of a review process that looked at implementation issues of programs and projects that (1) were critical to fulfilling NASA’s mission, particularly those that were assigned to two or more field installations, (2) involved the allocation of significant resources, defined as projects whose life-cycle costs were over $200 million, or (3) warranted special management attention, including those that required external agency reporting on a regular basis. During the PMC meetings, senior executives reviewed self-assessments (grades of green, yellow, and red), done by the responsible project manager, on the cost, schedule, and technical progress of the project. Using this color-coded grading scheme, NASA’s control process focused largely on cost, schedule, and technical concerns, but not on assessing improvements to mission performance. Additionally, the grading scheme was not based on quantitative criteria, but instead was largely qualitative and subjective in nature. For instance, projects were given a “green” rating if they were “in good shape and on track consistent with the baseline.” A “yellow” rating was defined as a “concern that is expected to be resolved within the schedule and budget margins,” and a “red” rating was defined as “a serious problem that is likely to require a change in the baseline determined at the beginning of the project.” However, the lack of quantitative criteria, benefit analysis, and performance data invited the possibility for widely divergent interpretations and a misunderstanding of the true value of the projects under review. As of 1995, three IT systems had met NASA’s review criteria and had been reviewed by the PMC. These three systems constituted about 7 percent of NASA’s total fiscal year 1994 IT spending. 
No similar centralized review process existed for lower dollar projects, which could have resulted in problem projects and systems that collectively added up to significant costs being overlooked. For instance, in 1995 NASA terminated an automated accounting system project that had been under development for about 6 years, had cost about $45 million to date, and had an expected life-cycle cost of over $107 million. In responding to a draft of this report, the NASA CIO said that the current cost threshold of $200 million is being reduced to a lower level to ensure that most, if not all, agency IT projects will be subject to PMC reviews. In addition, the CIO noted that NASA’s internal policy directive on program/project management is being revised to (1) include IT evaluation criteria that are aligned with ITMRA and executive-branch guidance and (2) clearly establish the scope and levels of review (agency, lead center, or center) for IT investment decisions. Once projects have been implemented and become operational, leading organizations evaluate them to determine whether they have achieved the expected benefits, such as lowered cost, reduced cycle time, increased quality, or increased the speed of service delivery. They do this by conducting project postimplementation reviews (PIRs) to compare actual to planned cost, returns, and risks. The PIR results are used to calculate a final return on investment, determine whether any unanticipated modifications may be necessary to the system, and provide “lessons learned” input for changes to the organization’s IT investment processes and strategy. ITMRA now requires agencies to report to OMB on the performance benefits achieved by their IT investments and how those benefits support the accomplishment of agency goals. Only one of the five federal agencies we reviewed—IRS—systematically evaluated implemented IT projects to determine actual costs, benefits, and risks. Indeed, we found that most of the agencies rarely evaluated implemented IT projects at all. In general, the agency review programs were insufficiently staffed and used poorly defined and inconsistent approaches. In addition, in cases where evaluations were done, the findings were not used to consider improvements or revisions in the IT investment decision-making process. NOAA, for instance, had no systematic process in place to ensure that it was achieving the planned benefits from its annual $300 million IT expenditure. For example, of the four major IT projects that constitute the $4.5 billion National Weather Service (NWS) modernization effort, only the benefits actually accruing from one of four—the NEXRAD radars—had been analyzed. While not the only review mechanism used by the agency, NOAA’s central review program was poorly staffed. NOAA headquarters, with half a staff year devoted to this review program, generally conducted reviews in collaboration with other organizational units and had participated in only four IT reviews over the last 3 fiscal years. Additionally, these reviews generally did not address the systems’ projected versus actual cost, performance, and benefits. IRS had developed a PIR methodology that it used to conduct five systems postimplementation reviews. A standardized methodology is important because it makes the reviews consistent and adds rigor to the analytical steps used in the review process. The IRS used the June 1994 PIR on the Corporate Files On-Line (CFOL) system as the model for this standardized methodology. 
In December 1995, IRS used the PIR methodology to complete a review of the Service Center Recognition/Image Processing System (SCRIPS). Subsequently, three more PIRs have been completed (TAXLINK, the Enforcement Revenue Information System, and the Integrated Collection System) and five more are scheduled. IRS estimated that the five completed systems have an aggregate cost of about $845 million. However, the PIR methodology was not integrated into a cohesive investment process. Specifically, there were no mechanisms in place to take the lessons learned from the PIRs and apply them to the decision criteria and other tools and techniques used in IRS's investment process. As a result, the PIRs that were conducted did not meet one of their primary objectives, ensuring continual improvement based on lessons learned, and IRS ran the risk of repeating past mistakes.

To help make continual decisions on IT investments, leading organizations require all projects to have complete and up-to-date project information. This information includes cost and benefit data, risk assessments, implementation plans, and initial performance measures. (See figure 2.4.) Maintaining this information allows senior managers to rigorously evaluate the current status of projects. In addition, it allows them to compare IT projects across the organization; consider continuation, delay, or cancellation trade-offs; and take action accordingly. ITMRA requires agencies to use quantitative and qualitative criteria to evaluate the risks and the returns of IT investments. As such, agencies need to collect and maintain accurate and reliable cost, benefit, risk, and performance data to support project selection and control decisions. The requirement for accurate, reliable, and up-to-date financial and programmatic information is also a primary requirement of the CFO Act and is essential to fulfilling agency requirements for evaluating program results and outcomes under GPRA.

At the five case study agencies we evaluated, we found that, in general, agency IT investment decisions were based on undefined or implicit data: information on the projects' cost, schedule, risks, and returns was not documented, defined, or kept up-to-date and, in many cases, was not used to make investment decisions.

To ensure that all projects and operational systems are treated consistently, leading organizations define explicit risk and return decision criteria. These criteria are then used to evaluate every IT project or system. Risk criteria involve managerial, technical, resource, skill, security, and organizational factors, such as the size and scope of the project, the extent of use of new technology, the potential effects on the user organization, the project's technical complexity, and the project's level of dependency on other systems or projects. Return criteria are measured in financial and nonfinancial terms. Financial measurements can include return on investment and internal rate of return analyses, while nonfinancial assessments can include improvements in operational efficiency, reductions in cycle time, and progress in better meeting customer needs. Of the five agencies in our sample, only the Coast Guard used a complete set of decision criteria.
These decision criteria included (1) risk assessments of schedule, cost, and technical feasibility dimensions, (2) cost-benefit impacts of the investment, (3) mission effectiveness measures, (4) degree of alignment with strategic goals and high-level interest (such as Congress or the President), and (5) organizational impact in the areas of personnel training, quality of work life, and increased scope of service. The Coast Guard used these criteria to prioritize IT projects and justify final selections. The decision criteria were weighted and scored, and projects were evaluated to determine those with the greatest potential to improve mission performance.

Generally, officials in other agencies stated that they determined which projects to fund based on the judgmental expertise of the decisionmakers involved in the process. NOAA, for instance, had a board of senior executives that met annually to determine budget decisions across seven strategic goals. Working groups for each strategic goal met, and each created a prioritized funding list, which was then submitted to the executive decision-making board. These working groups did not have uniform criteria for selecting projects. The executive board accepted the prioritized lists as submitted and made funding threshold decisions based on these lists. As a result, the executive board could not easily make consistent, accurate trade-offs among the projects that were selected by these individual working groups on a repeatable basis. In addition, to maximize funding for a specific working group, project rankings may not have been based on true risk or return. According to a NOAA senior manager and the chair of one of the NOAA working groups, one group ranked high-visibility projects near the bottom of the list to encourage the senior decision-making board to draw the budgetary cut-off line below these high-visibility projects. Few of these high-visibility projects were at the top of the list, despite being crucial to NOAA and high on the list of the NOAA Administrator's priorities. Explicit decision criteria would eliminate this type of budgetary gamesmanship.

Leading organizations consider project data the foundation by which they select, control, and evaluate their IT investments. Without such data, participants in an investment process cannot determine the value of any one project. Leading organizations use rigorous and up-to-date cost-benefit analyses, risk assessments, sensitivity analyses, and project-specific data, including current costs, staffing, and performance, to make funding decisions and project modifications based, whenever possible, on quantifiable data. While the agencies in our sample developed documents in order to get project approvals, little effort was made to ensure that the information was kept accurate and up-to-date, and rarely were the data used to manage the project throughout its life cycle.

During our review, we asked each agency to supply us with basic data on its largest dollar IT projects. However, this information was not readily available, and gathering it required agency officials to rely on a variety of sometimes incomparable sources for system cost, life-cycle phase, and staffing levels. In addition, some of the agencies could not comparatively analyze IT projects because they did not keep a comprehensive accounting of data on all of their IT systems. For example, EPA had to conduct a special information collection to identify life-cycle cost estimates on its major systems and projects for this report.
While the individual system managers at EPA did have system life-cycle cost estimates, the fact that this information was maintained in a decentralized fashion made cross-system comparisons unlikely. In a 1995 report, the NASA IG found that neither NASA headquarters nor any of the NASA centers had a complete inventory of all information systems for which they were responsible.

All of the agencies we reviewed conducted cost-benefit analyses for their major IT projects. However, these analyses were generally done to support decisions for project approval and were seldom kept current. In addition, the cost-benefit projections were rarely used to evaluate actual project results. The NWS modernization, for instance, has a cost-benefit analysis that was done in 1992. This analysis covers the four major systems under the modernization. To be effective, an analysis should include the costs and benefits of each project, alternatives to that project, and, finally, a combined cost-benefit analysis for the entire modernization. However, the cost-benefit analysis that was conducted only compares the aggregate costs and benefits of the NWS modernization initiative against the current system. It does not assess or analyze the costs and benefits of each system, nor does it examine alternatives to those systems. As a result, NWS does not know if each of the modernization projects is cost-beneficial and cannot make trade-offs among them. If using only this analysis, decisionmakers are forced to choose either the status quo or all of the projects proposed under the modernization.

Without updated cost-benefit data, informed management decisions become difficult. We reported in April 1995 that NWS was trying to assess user concerns related to the Automated Surface Observing System (ASOS), one of the NWS modernization projects, but that NWS did not have a complete estimate of what it would cost to address these concerns. As we concluded in that report, without reliable estimates of what an enhanced or supplemented ASOS would cost, it would be difficult for NWS to know whether continued investment in ASOS is cost-beneficial.

We provided and discussed a draft of this report with officials from EPA, NASA, NOAA, IRS, and the Coast Guard and have incorporated their comments where appropriate. Several of the agencies noted that, in response to the issuance of OMB's guidance on IT investment decision-making and the passage of ITMRA, they have made process changes and organizational modifications affecting IT funding decisions. We have incorporated this information into the report where applicable. However, many of the process changes and modifications have occurred very recently, and we have not fully evaluated these changes or determined their effects.

Officials from NOAA and NASA also had reservations about the applicability of the investment portfolio approach to their organizations because their decentralized operating environments were not conducive to a single agencywide portfolio model with a fixed set of criteria. Because any organization, whether centralized or decentralized, has to operate within the parameters of a finite budget, priorities must still be set, across the organization, about where limited IT dollars will be spent to achieve maximum mission benefits. We agree that many IT spending decisions can be made at the agency or program level.
However, there are some decisions, especially those involving projects that (1) are high-risk and high-dollar, (2) are cross-functional, or (3) provide a common infrastructure (e.g., telecommunications), that should be made at a centralized, departmental level. Establishing a common, organizationwide focus, while still maintaining a flexible distribution of departmental and agency/program/site decision-making, can be achieved by implementing standard decision criteria. These criteria help ensure that projects are assessed and evaluated consistently at lower levels, while still maintaining an enterprisewide portfolio of IT investments.

Buying information technology can be a high-risk, high-return undertaking that requires strong management commitment and a systematic process to ensure successful outcomes. By using an investment-driven management approach, leading organizations have significantly increased the realized return on information technology investments, reduced the risk of cost overruns and schedule delays, and made better decisions about how their limited IT dollars should be spent. Adopting such an investment-driven approach can provide federal agencies with similar opportunities to achieve greater benefits from their IT investments on a more consistent basis. However, the federal case study agencies we examined used decision-making processes that lacked many essential components associated with an investment approach. Critical weaknesses included the absence of reliable, quantitative cost figures, net return on investment calculations, rigorous decision criteria, and postimplementation project reviews. With sustained management attention and substantive improvements to existing processes, these agencies should be able to meet the investment-related provisions of ITMRA.

Implementing and refining an IT investment process, however, is not an easy undertaking and cannot be accomplished overnight. Maximizing the returns and minimizing the risks on the billions of dollars that are spent each year for IT will require continued efforts on two fronts. First, agencies must fundamentally change how they select and manage their IT projects. They must develop and begin using a structured IT investment approach that encompasses all aspects of the investment process: selection, control, and evaluation. Second, oversight attention far beyond current levels must be given to agencies' management processes and to the actual results being produced. Such attention should include the development of policies and guidance as well as selective evaluations of processes and results. These evaluations should have a dual focus: they should identify and address deficiencies that are occurring, but they should also highlight positive results in order to share lessons learned and speed success.

OMB's established leadership role, as well as the policy development and oversight responsibilities that it was given under ITMRA, place it in a key position to provide such oversight. OMB has already initiated several changes to governmentwide guidance to encourage the investment approach to IT decision-making and has drawn upon the assistance of several key interagency working groups comprised of senior agency officials. Such efforts should be continued and expanded to ensure that the federal government gets the most return for its information technology investments.
Given its significant leadership responsibility in supporting agencies' improvement efforts and responding to the requirements of ITMRA, it is imperative that OMB continue to clearly define expectations for agencies and for itself to successfully implement investment decision-making approaches. As such, we are recommending four specific actions for the Director of OMB to take.

OMB's first challenge is to help agencies improve their investment management processes. With effective processes in place, agencies should be in much stronger positions to make informed decisions about the relative benefits and risks of proposed IT spending. Without them, agencies will continue to be vulnerable to risks associated with excessively costly projects that produce questionable mission-related improvements. Under Sections 5112 and 5113 of the Information Technology Management Reform Act, the Director of OMB has responsibility for promoting and directing that federal agencies establish capital planning processes for information technology investment decisions. In designing governmentwide guidance for this process, we recommend that the Director of the Office of Management and Budget:

Require agencies to implement IT investment decision-making processes that use explicitly defined, complete, and consistent criteria applied to all projects, regardless of whether project decisions are made at the departmental, bureau, or program level. With criteria that reflect cost, benefit, and risk considerations, applied consistently, agencies should be able to make more reasonable and better informed trade-offs between competing projects in order to achieve the maximum economic impact for their scarce investment dollars.

Require agencies to periodically analyze their entire portfolios of IT investments (at a minimum, new projects, projects in development, and operations and maintenance expenditures) to determine which projects to approve, cancel, or delay. With development and maintenance efforts competing directly with one another for funding, agencies will be better able to gauge the best proportion of investment in each category of spending to move away from their legacy bases of systems with excessive maintenance costs.

Require agencies to design control and evaluation processes that include cost, schedule, and quantitative performance assessments of projected versus actual improvement in mission outcomes. As a result, agencies should increase their capacity both to assess actual project results and to learn from experience which operational areas produce the highest returns and how well they estimate projects and deliver final results.

Advise agencies in setting minimum quality standards for data used to assess (qualitatively and quantitatively) cost, benefit, and risk decisions on IT investments. Agencies should demonstrate that all IT funding proposals include only data meeting these quality requirements and that projected versus actual results are assessed at critical project milestones. The audited data required by the CFO Act should help produce this accurate, reliable cost information. Higher quality information should result in better and more consistent decisions on complex information systems investments.

OMB's second challenge is to use the results produced by the improved investment processes to develop recommendations for the President's budget that reflect an agency's actual track record in delivering mission performance for IT funds expended.
Under Section 5113 of ITMRA, the Director of OMB is charged with evaluating the results of agency IT investments and enforcing accountability, including increases or reductions in agency IT funding proposals, through the annual budget process. In carrying out these responsibilities, we recommend that the Director of the Office of Management and Budget:

Evaluate information system project cost, benefit, and risk data when analyzing the results of agency IT investments. Such analyses should produce agency track records that clearly and definitively show what improvements in mission performance have been achieved for the IT dollars expended.

Ensure that agency investment control processes are in compliance with OMB's governmentwide guidance and, if not, assess strengths and weaknesses and recommend actions and timetables for improvements. When results are questionable or difficult to determine, monitoring agency investment processes will help OMB diagnose problem causes by determining the degree of agency control and the quality of decisions being made.

Use OMB's evaluation of each agency's IT investment control processes and IT performance results as a basis for recommended budget decisions to the President. This direct linkage should give agencies a strong, much needed incentive to maximize the returns and minimize the risks of their scarce IT investments.

To effectively implement improved investment management processes and make the appropriate linkages between agency track records and budget recommendations, OMB also faces a third challenge. It will need to marshal the resources and skills to execute the new types of analysis required to make sound investment decisions on agency portfolios. Specifically, we recommend that the Director of the Office of Management and Budget:

Organize an interagency group comprised of budget, program, financial, and IT professionals to develop, refine, and transfer guidance and knowledge on best practices in IT investment management. Such a core group can serve as an ongoing source of practical knowledge and experience on the state of the practice for the federal government.

Obtain expertise on an advisory basis to assist these professionals in implementing complete and effective investment management systems. Agency senior IRM management could benefit greatly from a high quality, easily accessible means of soliciting advice from capital planning and investment experts outside the federal government.

Identify the type and amount of skills required for OMB to execute IT portfolio analyses, determine the degree to which these needs are currently satisfied, specify the gap, and design and implement a plan, with timeframes and goals, to close the gap. Given existing workloads and the resilience of the OMB culture, without a determined effort to build the necessary skills, OMB will have little impact on the quality of IT investment decision-making. If necessary to augment its own staff resources, OMB should consider the option of obtaining outside support to help perform such assessments.

Finally, as part of its internal implementation strategy, the Director of the Office of Management and Budget should consider developing an approach to assessing OMB's own performance in executing oversight responsibilities under the ITMRA capital planning and investment provisions.
Such a process could focus on whether OMB reviews of agency processes and results have an impact on reducing risk or increasing the returns on information technology investments—both within and across federal agencies. In its written comments on a draft of our report, OMB generally supported our recommendations and said that it is working towards implementing many aspects of the recommendations as part of the fiscal year 1998 budget review process of fixed capital assets. OMB also provided observations or suggestions in two additional areas. First, OMB stated that given ITMRA’s emphasis on agencies being responsible for IT investment results, it did not plan to validate or verify that each agency’s investment control process is in compliance with OMB’s guidance contained in its management circulars. As discussed in our more detailed evaluation of OMB’s comments in appendix I, conducting selective evaluations is an important aspect of an overall oversight and leadership role because it can help identify management deficiencies that are contributing to poor IT investment results. Second, OMB noted that the relationship of IT investment processes between a Cabinet department and bureaus or agencies within the department was not fully evaluated and that additional attention would be needed as more data on this issue become available. We agree that our focus was on assessing agencywide processes and that continued attention to the relationships between departments, bureaus, and agencies will contribute to increased understanding across the government and will ultimately improve ITMRA’s chances of success. This issue is discussed in more detail in our response to comments provided by the five agencies we reviewed (summarized at the end of chapter 2). The following are GAO’s comments on the Office of Management and Budget’s letter dated July 26, 1996. 1. As stated in the scope and methodology section of the report, we focused our analysis on agencywide processes. We agree that continued attention to this issue will contribute to increased understanding across the government and will ultimately improve ITMRA’s chances of success. As noted in our response to comments received from the agencies we reviewed (provided at the end of chapter 2), we believe that a flexible distribution of departmental and agency/program/site IT decision-making is possible and can best be achieved by implementing standard decision criteria for all projects. In addition, we note that particular types of IT decisions, such as those with unusually high-risk, cross-functional impact or that provide common infrastructure needs, are more appropriately decided at a centralized, departmental level. Experience gained during implementation of the Chief Financial Officers (CFO) Act showed that departmental-level CFOs needed time to build effective working relationships with their agency- or bureau-level counterparts. We believe the same will be true for Chief Information Officers (CIOs) established by ITMRA and that establishing and maintaining this bureau-level focus will be integral for ensuring the act’s success. 2. ITMRA does squarely place responsibility and accountability for IT investment results with the head of each agency. Nevertheless, ITMRA clearly requires that OMB provide a key policy leadership and implementation oversight role. While we agree that it may not be feasible to validate and verify every agency’s investment processes, it is still essential that selected evaluations be conducted on a regular basis. 
These evaluations can effectively support OMB's performance and results-based approach. They can help to identify and understand problems that are contributing to poor investment outcomes and also help perpetuate success by providing increased learning and sharing about what is and is not working.

In order to develop a profile of each agency's IT environment, we asked the agencies to provide us information on the following: total IT expenditures for fiscal year 1990 through fiscal year 1994; total number of staff devoted to IRM functions and activities for fiscal year 1990 through fiscal year 1994; and costs for the 10 largest IT projects for fiscal year 1994 (as measured by total project life-cycle cost). To gather this information, we developed a data collection instrument and submitted it to responsible agency officials. Information supplied by the agencies is summarized in the following tables. We did not independently verify the accuracy of this information. Moreover, comparison of figures across the agencies is difficult because agency officials used different sources (such as budget data, IRM strategic plans, etc.) for the same data elements.

U.D. Provides an organizationwide microcomputer infrastructure and is the primary source for acquiring desktop, server and portable hardware; operating system and office automation system software; utilities and peripherals, training, personnel support, and cabling.
Op. Provides continued support for the Coast Guard's existing microcomputer infrastructure.
Op. Provides a consolidated accounting and pay system.
U.D. A configuration of sensors, communication links, personnel, and decision support tools that will modernize and expand the systems in three cities by incorporating radar sensor information overlaid on digital nautical charts as well as improved decision support systems.
U.D. Provides an automated and consolidated communication system.
U.D. Merges two maintenance systems for tracking and recording scheduled aviation maintenance actions.
U.D. Reprograms most of the existing Coast Guard developed applications to comply with the National Institute of Standards and Technology's Application Portability Profile.
Op. Provides safety performance histories of vessels and involved parties and is used as a decision support tool for the Commercial Vessel Safety program.
Op. Provides aviation technical publications in electronic format.
U.D. Consolidated into the Coast Guard Standard Workstation III system.
Op. Performs funds control from commitments through payment; updates all ledgers and tables as transactions are processed; provides a standard means of data entry, edit, and inquiry; and provides a single set of reference and control files.
Op. Contains data submitted to EPA under the Emergency Planning and Community Right to Know Act for chemicals and chemical categories listed by the agency. Data include chemical identity, amount of on-site users, release and off-site transfers, on-site treatment, and minimization/prevention actions. Public access is provided by the National Library of Medicine.
Op. Supports management and administration of chemical samples from Superfund sites that are analyzed under agency contracts with chemical laboratories. The system schedules and tracks samples from site collection, through analysis, to delivery to the agency.
Op. Stores air quality, point source emissions, and area/mobile source data required by federal regulations from the 50 states.
Op. Superfund's official source of planning and accomplishment data; serves as the primary basis for strategic decision-making and site-by-site tracking of cleanup activities.
Op. Contains a set of computer applications and a major relational database which is used to support regulation development, air quality analysis, compliance audits, investigations, assembly line testing, in-use compliance, legislation development, and environmental initiatives.
Op. Maintains basic data identifying and describing hazardous waste handlers; detailed information about hazardous waste treatment, storage, and disposal processes, environmental permitting, and information on inspections, violations, and enforcement actions; and tracks specific corrective action information needed to regulate facilities with hazardous waste releases.
Op. Supports the National Pollutant Discharge Elimination System, a Clean Water Act program that issues permits and tracks facilities that discharge pollutants into our navigable waters.
U.D. A replacement for the existing Comprehensive Environmental Response, Compensation, and Liability Information System described above.
Op. A PC LAN version of the Comprehensive Environmental Response, Compensation, and Liability Information System database used by EPA regional offices for data input and local analysis needs.
U.D. Acquire and install Tax System Modernization host-tier computers at three computing centers.
U.D. Integrates five systems that control, assign, prioritize, and track taxpayer inquiries; provides office automation, case folder review and inventories, and display and manipulation of case inquiry folders; automates collection cases; provides access to current tax return information; automates case preparation and closure; and provides standardized hardware and custom software to the criminal investigation function on a nationwide basis.
U.D. Integrates six systems that will receive and control information being transmitted to or from IRS; automates remittance processing activities; scans paper tax returns and correspondence for processing in an automated database; provides automated telephone assistance to customers; permits individual and business tax returns to be filed by utilizing a touch-tone phone; and provides access to all electronically filed returns that have been scored for potential fraud.
Op. Provides case tracking, expanded legal research, a document management system for briefs, an integrated office system, time reporting, issue tracking, litigation support, and a decision support system.
U.D. Integrates three systems that provide application programs to query, search, update, analyze, and extract information from a database; aggregates tax information into electronic case folders and distributes them to field locations; and provides the security infrastructure to support all components of the Tax System Modernization.
U.D. Provides a variety of workstation models, monitors, printers, operating systems, and related equipment; provides for standardization of the small and medium-scale computers used by front line programs in the national and field offices and service centers.
Op. Provides funding for (1) the mainframe and miscellaneous peripherals at each service center, (2) magnetic media and ADP supplies for all service centers, (3) lease and maintenance for support equipment, and (4) on-line access to taxpayer information and account status.
U.D. Provides an interim hardware platform at two computing centers to support master file processing and full implementation of the CFOL data retrieval/delivery system.
U.D. Provides upgradable software development workstations and workbench tools, including automated analysis and design tools; requirements traceability tools; construction kits with smart editors, compilers, animators, and debuggers; and static analyzers.
U.D. Integrates four systems that provide for ordering and delivery of telecommunication systems and services for Treasury bureaus; serves as a Government Open Systems Interconnection Profile prototype; provides centralized network and operations management; and will acquire about 14,000 workstations.
U.D. Receives, processes, archives, and distributes earth science research data from U.S., European, and Japanese polar platforms, selected Earth probes, the Synthetic Aperture Radar free flyer, selected existing databases, and other sources of related data.
Op. Provides telecommunications and computation services for Marshall Space Flight Center.
Op. Supports most data systems, networks, user workstations, and telecommunications systems and provides maintenance, operations, software development, engineering, and customer support functions at Johnson Space Center.
Op. Provides a family of compatible computing systems covering a broad performance range that will provide ground-based mission operations systems support.
Op. Provides continuity of base operations, including federal information processing resources of sustaining engineering, computer operations, and communications services for Kennedy Space Center.
Op. Acquisition of seven classes of scientific and engineering workstations plus supporting equipment.
Op. Furnishes, installs, and tests the Advanced Computer Generated Image System; provides direct computational analysis and programming support to specific research disciplines and flight projects; and provides for the analysis, programming, engineering, and maintenance services for the flight simulation facilities. Also provides support for the Central Scientific and Computing Complex operation and systems maintenance as well as Complex-wide communications systems support and system administration of distributed computing and data reduction systems.
Op. Provides a wide array of supporting services, including computational, professional, technical, administrative, engineering, and operations services at the Lewis Research Center.
U.D. An information system including workstations, associated data processing, and communications, designed to integrate data from several National Weather Service information systems, as well as from field offices, regional and national centers, and other sources.
Op. An initiative to acquire supercomputers necessary to run large complex numeric models as a key component of the weather forecast system.
Op. A distributed-processing system architecture designed to acquire, process, and distribute satellite data and products.
Op. An effort to replace a variety of obsolete technology in the National Marine Fisheries Service with a common computing infrastructure that supports distributed processing in an open system environment. The system stores, integrates, analyzes, and disseminates large quantities of living marine resource data.
Op. Procurement of a high-performance computer system to provide support services for climate and weather research activities.
Op. Geostationary Operational Environmental Satellite (GOES I-M) ground system consisting of minicomputers with associated peripherals and satellite-dependent customized applications software to provide the monitoring, supervision, and data acquisition and processing functions for the GOES-Next satellites.
Op. A system designed to support weather radars and associated display systems.
Op. An effort to replace old mainframes as well as the associated channel-connected architecture with an open systems architecture.
Op. Ground system consisting of minicomputers with associated peripherals and satellite-dependent customized applications software intended to provide the monitoring, supervision, and data acquisition and processing functions for the polar satellites.
Imp. A system of sensors, computers, display units, and communications equipment to automatically collect and process basic data on surface weather conditions, including temperature, pressure, wind, visibility, clouds, and precipitation.

This appendix is a compilation of work done by OMB and us on how federal agencies should manage information systems using an investment process. It is based upon analysis of the IT management best practices found in leading private and public sector organizations and is explained in greater detail in OMB's Evaluating Information Technology Investments: A Practical Guide.

Key Question: How can you select the right mix of IT projects that best meets mission needs and improvement priorities?

The goal of the selection phase is to assess and prioritize current and proposed IT projects and then create a portfolio of IT projects. In doing so, this phase helps ensure that the organization (1) selects those IT projects that will best support mission needs and (2) identifies and analyzes a project's risks and returns before spending a significant amount of project funds. A critical element of this phase is that a group of senior executives makes project selection and prioritization decisions based on a consistent set of decision criteria that compares costs, benefits, risks, and potential returns of the various IT projects.

Steps of the Selection Phase
Initially filter and screen IT projects for explicit links to mission needs and program performance improvement targets using a standard set of decision criteria.
Analyze the most accurate and up-to-date cost, benefit, risk, and return information in detail for each project.
Create a ranked list of prioritized projects.
Determine the most appropriate mix of IT projects (new versus operational, strategic versus maintenance, etc.) to serve as the portfolio of IT investments.

Key elements of the selection phase include the following:
An executive management team that makes funding decisions based on comparisons and trade-offs between competing project proposals, especially for those projects expected to have organizationwide impact.
A documented and defined set of decision criteria that examines expected return on investment (ROI), technical risks, improvement to program effectiveness, customer impact, and project size and scope.
Predefined dollar thresholds and authority levels that recognize the need to channel project evaluations and decisions to appropriate management levels to accommodate unit-specific versus agency-level needs.
Minimal acceptable ROI hurdle rates, applied to projects across the organization, that must be met for projects to be considered for funding.
Risk assessments that expose potential technical and managerial weaknesses.
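Two of the key elements listed above, a documented set of decision criteria and a minimum ROI hurdle rate, can be illustrated with a short sketch. The criteria, weights, hurdle rate, and project figures below are hypothetical and are not drawn from any agency's actual process.

# Illustrative selection-phase screen and ranking. All criteria, weights, and
# project figures are hypothetical.

HURDLE_RATE = 0.15  # minimum acceptable expected ROI (15 percent)

# Criterion scores run from 0 to 10; for technical_risk, a higher score means lower risk.
WEIGHTS = {"mission_effectiveness": 0.35, "expected_roi": 0.30,
           "technical_risk": 0.20, "customer_impact": 0.15}

proposals = [
    {"name": "Records imaging", "cost": 8.0, "benefits": 11.0,
     "scores": {"mission_effectiveness": 8, "expected_roi": 7,
                "technical_risk": 6, "customer_impact": 9}},
    {"name": "Network refresh", "cost": 5.0, "benefits": 5.4,
     "scores": {"mission_effectiveness": 5, "expected_roi": 3,
                "technical_risk": 8, "customer_impact": 4}},
    {"name": "Case management", "cost": 20.0, "benefits": 26.0,
     "scores": {"mission_effectiveness": 9, "expected_roi": 6,
                "technical_risk": 4, "customer_impact": 8}},
]

def expected_roi(p):
    """Expected return on investment: net benefits as a share of cost."""
    return (p["benefits"] - p["cost"]) / p["cost"]

def weighted_score(p):
    """Combine criterion scores into a single weighted score."""
    return sum(WEIGHTS[c] * s for c, s in p["scores"].items())

# Step 1: screen out proposals that do not clear the ROI hurdle rate.
eligible = [p for p in proposals if expected_roi(p) >= HURDLE_RATE]

# Step 2: rank the remaining proposals with one documented set of weighted criteria.
for p in sorted(eligible, key=weighted_score, reverse=True):
    print(f"{p['name']}: ROI {expected_roi(p):.0%}, score {weighted_score(p):.2f}")

In a real process, the criteria would be documented, applied to every proposal regardless of sponsoring office, and the ranked list balanced for portfolio mix (new versus operational, strategic versus maintenance) before funding decisions are made.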
Key Question: What controls are you using to ensure that the selected projects deliver the projected benefits at the right time and the right price?

Once the IT projects have been selected, senior executives periodically assess the progress of the projects against their projected cost, schedule, milestones, and expected mission benefits. The type and frequency of the reviews associated with this monitoring activity are usually based on the analysis of risk, complexity, and cost that went into selecting the project and are performed at critical project milestones. If a project is late, over cost, or not meeting performance expectations, senior executives decide whether it should be continued, modified, or canceled.

Steps of the Control Phase
Use a set of performance measures to monitor the developmental progress for each IT project to identify problems.
Take action to correct discovered problems.

Key elements of the control phase include the following:
Established processes that involve senior managers in ongoing reviews and force decisive action steps to address problems early in the process.
Explicit cost, schedule, and performance measures to monitor expected versus actual project outcomes.
An information system to collect project cost, schedule, and performance data, in order to create a record of progress for each project.
Incentives for exposing and solving project problems.

Key Question: Based on your evaluation, did the system deliver what was expected?

The evaluation phase provides a mechanism for constantly improving the organization's IT investment process. The goal of this phase is to measure, analyze, and record results, based on the data collected throughout each phase. Senior executives assess the degree to which each project met its planned cost and schedule goals and fulfilled its projected contribution to the organization's mission. The primary tool in this phase is the postimplementation review (PIR), which should be conducted once a project has been completed. PIRs help senior managers assess whether a project's proposed benefits were achieved and refine the IT selection criteria.

Steps of the Evaluation Phase
Compare actual project costs, benefits, risks, and return information against earlier projections.
Determine the causes of any differences between planned and actual results.
For each system in operation, decide whether it should continue operating without adjustment, be further modified to improve performance, or be canceled.
Modify the organization's investment process based on lessons learned.

Key elements of the evaluation phase include the following:
Postimplementation reviews to determine actual costs, benefits, risks, and return.
Modification of decision criteria and investment management processes, based on lessons learned, to improve the process.
Maintenance of accountability by measuring actual project performance and creating incentives for even better project management in the future.
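The control-phase element of monitoring expected versus actual outcomes can be sketched briefly. The variance thresholds and project figures below are hypothetical; they show only how quantitative criteria could drive the kind of green, yellow, and red status calls discussed in chapter 2, rather than reproducing any agency's review rules.

# Illustrative control-phase monitor: flag projects whose cost or schedule variance
# exceeds quantitative thresholds. Thresholds and project data are hypothetical.

YELLOW_THRESHOLD = 0.10  # variance above 10 percent warrants closer management attention
RED_THRESHOLD = 0.25     # variance above 25 percent forces a continue/modify/cancel decision

projects = [
    {"name": "Imaging system", "planned_cost": 30.0, "actual_cost": 31.5,
     "planned_months": 24, "actual_months": 25},
    {"name": "Case tracking", "planned_cost": 12.0, "actual_cost": 16.8,
     "planned_months": 18, "actual_months": 26},
]

def variance(planned, actual):
    """Relative overrun of actual against planned."""
    return (actual - planned) / planned

for p in projects:
    worst = max(variance(p["planned_cost"], p["actual_cost"]),
                variance(p["planned_months"], p["actual_months"]))
    if worst >= RED_THRESHOLD:
        status = "red: senior review required"
    elif worst >= YELLOW_THRESHOLD:
        status = "yellow: monitor closely"
    else:
        status = "green: on track"
    print(f"{p['name']}: worst variance {worst:.0%} -> {status}")

In practice, thresholds like these would be paired with the mission performance measures the report calls for, so that a project on budget and on schedule but delivering little mission benefit would still surface for senior review.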
The following sections briefly describe the information technology management processes at each of the five agencies we reviewed. These descriptions are intended to characterize the general workings of the agency processes at the time of our review. We used the selection/control/evaluation model (as summarized in appendix III and described in detail in OMB's Evaluating Information Technology Investments: A Practical Guide) as a template for describing each agency's IT management process.

The Coast Guard had an IT investment process used to select IT projects for funding. IT project proposals were screened, evaluated, and ranked by a group of senior IRM managers using explicit decision criteria that took into account project costs, expected benefits, and risk assessments. The ranked list, with recommended levels of funding for each project, was submitted for review to a board of senior Coast Guard officers and then forwarded to the Coast Guard Chief of Staff for final approval.

EPA used a decentralized IT project initiation, selection, and funding process. Under this broad process, program offices independently selected and funded IT projects on a case-by-case basis as the need for the system was identified. EPA had IRM policy and guidance for IT project data and analysis requirements, such as a project-level risk assessment and a cost-benefit study, that the program offices had to meet in order to proceed with system development. EPA did not have a consistent set of decision criteria for selecting IT projects.

IT selection and funding activities within IRS differed depending on whether the project was part of the Tax System Modernization (TSM) or an operational system. In 1995, IRS created a senior-level board for selecting, controlling, and evaluating information technology investments and began to rank all of the proposed TSM projects using its cost, risk, and return decision criteria. However, these criteria were largely qualitative, the data used were not validated or reliable, and the analyses were not based on calculations of expected return on investment. According to IRS, its investment review board used a separate process with different criteria for evaluating operational systems. The board did not review research and development systems or field office systems. IRS did not compare the results of its different evaluation processes.

Within NASA, IT project selection and funding decisions were made by domain-specific program managers. NASA had two general types of IT funding: program expenditures and administrative spending. Most of NASA's IT funding was embedded within program-specific budgets. Managers of these programs had autonomy to make system-level and system support IT selection decisions. Administrative IT systems were generally managed by the cognizant NASA program office or center. NASA has recently established a CIO council to set high-level policies and standards, approve information resources management plans, and address issues and initiatives. The council will also serve as the IT capital investment advisory group to the proposed NASA Capital Investment Council. NASA plans for this Capital Investment Council to have responsibility for looking at all capital investments across NASA, including those for IT. While this Capital Investment Council may fill the need for identifying cross-functional opportunities, it is not yet operational.

IT project selection and funding decisions at NOAA were made as part of its strategic management and budgeting process. NOAA had seven work teams, each supporting a NOAA strategic goal, that prioritized incoming funding requests. Managers on these work teams negotiated to determine IT project funding priorities within the scope of their respective strategic goals. These prioritization requests were then submitted to NOAA's Executive Management Board, which had final agency decision authority over all expenditures.
A key decision criterion used by the work teams was the project's contribution to the agency's strategic goals; however, no standard set of decision criteria was used in the prioritization decisions. Other data, such as cost-benefit analyses, were also sometimes used to evaluate IT project proposals, although use of these data sources was not mandatory.

The Coast Guard conducted internal system reviews, but these reviews were not used to monitor the progress of IT projects. The review efforts were designed to address ways to improve efficiency, reduce project cost, and reduce project risk. Cost, benefit, and schedule data were also collected annually for some new IT projects, but the Coast Guard did not measure the mission benefits derived from each of its projects.

EPA had a decentralized managerial review process for monitoring IT projects. EPA's IRM policy set requirements for the minimum level of review activity that program offices had to conduct, but program offices had primary responsibility for overseeing the progress of their IT projects. In an effort to provide a forum for senior managerial review of IT projects, EPA created the Executive Steering Committee (ESC) for IRM in 1994 to guide EPA's agencywide IRM activities. The ESC was chartered to review IRM projects that are large, important, or cross-organizational. The committee's first major system review was scheduled for some time in 1996. EPA is currently formulating the data submission requirements for the ESC reviews.

IRS regularly conducted senior management program control meetings (PCM) to review the cost and schedule activity of TSM projects. IRS had two types of PCMs. The four TSM sites (Submission Processing, Computing Center, Customer Service, and District Office) conducted PCMs to monitor the TSM activity under their purview. Also, IRS could hold "combined PCMs" to resolve issues that spanned the TSM sites. IRS did not conduct PCMs to monitor the performance of operational systems. To date, (1) working procedures, (2) required decision documents, (3) reliable cost, benefit, and return data, and (4) explicit quantitative decision criteria needed for an effective investment control process are not in place for the IRS Investment Review Board.

NASA senior executives regularly reviewed the cost and schedule performance of major programs and projects, but they reviewed only the largest IT projects. No central IRM review has been conducted since 1993. NASA put senior-level CIOs in place for each NASA center, but these CIOs exercised limited control over mission-related systems and had limited authority to enforce IT standards or architecture policies. NASA's proposed Capital Investment Council, which is intended to supplement the Program Management Council by reviewing major capital investments, may address this concern once the Investment Council is operational.

NOAA conducted quarterly senior-level program status meetings to review the progress and performance of major systems and programs, such as those in the NWS modernization. NOAA had defined performance measures to gauge progress toward its strategic goals, but did not have specific performance measures for individual IT systems. Also, while some offices had made limited comparisons of actual to expected IT project benefits, NOAA did not require the collection or assessment of mission benefit accrual information on IT projects.

The Coast Guard did not conduct any postimplementation reviews of IT projects.
Instead, the Coast Guard focused its review activity on systems that were currently under development.

EPA did not conduct any centralized postimplementation reviews. EPA did conduct postimplementation reviews as part of the General Services Administration's (GSA) triennial review requirement, but curtailed this activity in 1992 when the GSA requirement was lifted.

IRS directives required that postimplementation reviews be conducted 6 months after an IT system is implemented. At the time of our review, IRS had conducted five postimplementation reviews and had developed a standard postimplementation review methodology. However, no mechanisms were in place to ensure that the results of these IRS investment evaluation reviews were used to modify the IRS selection and control decision-making processes or alter funding decisions for individual projects.

NASA did not conduct or require any centralized project postimplementation reviews. NASA stopped conducting centralized IRM reviews in 1993 and now instead urges programs to conduct IRM self-assessments.

While the agency conducted other reviews, NOAA's IRM office has participated in only four IRM reviews over the last 3 years. These reviews tended to focus on specific IT problems, such as evaluating the merits of electronic bulletin board systems or difficulties being encountered in digitizing nautical navigation maps. No postimplementation reviews had been conducted over the past 3 years.

On February 10, 1996, the Information Technology Management Reform Act of 1996 (Division E of Public Law 104-106) was signed into law. This appendix is a summary of the information technology investment-related provisions of this act; it is not the actual language contained in the law.

Information technology (IT) is defined as any equipment, or interconnected system or subsystem of equipment, that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. It may include equipment used by contractors.

The OMB Director is to promote and be responsible for improving the acquisition, use, and disposal of IT by federal agencies.

The OMB Director is to develop a process (as part of the budget process) for analyzing, tracking, and evaluating the risks and results of major capital investments for information systems; the process shall include explicit criteria for analyzing the projected and actual costs, benefits, and risks associated with the investments over the life of each system.

The OMB Director is to report to the Congress (at the same time the budget is submitted) on the net program performance benefits achieved by major capital investments in information systems and how the benefits relate to the accomplishment of agency goals.

The OMB Director shall designate (as appropriate) agency heads as executive agents to acquire IT for governmentwide use.

The OMB Director shall encourage agencies to develop and use "best practices" in acquiring IT.
The OMB Director shall direct that agency heads (1) establish effective and efficient capital planning processes for selecting, managing, and evaluating information systems investments, (2) before investing in new information systems, determine whether a government function should be performed by the private sector, the government, or a government contractor, and (3) analyze their agencies' missions and revise the mission-related and administrative processes (as appropriate) before making significant investments in IT.

Through the budget process, the OMB Director is to review selected agency IRM activities to determine the efficiency and effectiveness of IT investments in improving agency performance.

Agency heads are to design and implement a process for maximizing the value and assessing and managing the risks of IT investments. The agency process is to (1) provide for the selection, management, and evaluation of IT investments, (2) be integrated with the processes for making budget, financial, and program management decisions, (3) include minimum criteria for selecting IT investments and specific quantitative and qualitative criteria for comparing and prioritizing projects, (4) provide for identifying potential IT investments that would result in shared benefits with other federal, state, or local governments, (5) provide for identifying quantifiable measurements for determining the net benefits and risks of IT investments, and (6) provide the means for senior agency managers to obtain timely development progress information, including a system of milestones for measuring progress, on an independently verifiable basis, in terms of cost, capability of the system to meet specified requirements, timeliness, and quality.

Agency heads are to ensure that performance measurements are prescribed for IT and that the performance measurements measure how well the IT supports agency programs.

Where comparable processes and organizations exist in either the public or private sectors, agency heads are to quantitatively benchmark agency process performance against such processes in terms of cost, speed, productivity, and quality of outputs and outcomes.

Agency heads may acquire IT as authorized by law (the Brooks Act, 40 U.S.C. 759, is repealed by sec. 5101), except that the GSA Administrator will continue to manage the FTS 2000 program and its follow-on (sec. 5124(b)).

Agency heads are to designate Chief Information Officers (in lieu of designating IRM officials, as a result of amending the Paperwork Reduction Act appointment provision).

Agency Chief Information Officers (CIOs) are responsible for (1) providing advice and assistance to agency heads and senior management to ensure that IT is acquired and information resources are managed in a manner that implements the policies and procedures of the Information Technology Management Reform Act of 1996, is consistent with the Paperwork Reduction Act, and is consistent with the priorities established by the agency head, (2) developing, maintaining, and facilitating the implementation of a sound and integrated agency IT architecture, and (3) promoting effective and efficient design and operation of major IRM processes.
Agency heads (in consultation with the CIO and CFO) are to establish policies and procedures that (1) ensure accounting, financial, and asset management systems and other information systems are designed, developed, maintained, and used effectively to provide financial or program performance data for agency financial statements, (2) ensure that financial and related program performance data are provided to agency financial management systems on a reliable, consistent, and timely basis, and (3) ensure that financial statements support the assessment and revision of agency mission-related and administrative processes and the measurement of performance of agency investments in information systems.

Agency heads are to identify (in their IRM plans required under the Paperwork Reduction Act) major IT acquisition programs that have significantly deviated from the cost, performance, or schedule goals established for the program (the goals are to be established under title V of the Federal Acquisition Streamlining Act of 1994).

This section establishes which provisions of the title apply to "national security systems." "National security systems" are defined as any telecommunications or information system operated by the United States government that (1) involves intelligence activities, (2) involves cryptologic activities related to national security, (3) involves command and control of military forces, (4) involves equipment that is an integral part of a weapon or weapon system, or (5) is critical to the direct fulfillment of military or intelligence missions.

This section requires the GSA Administrator to provide (through the Federal Acquisition Computer Network established under the Federal Acquisition Streamlining Act of 1994 or another automated system), not later than January 1, 1998, governmentwide on-line computer access to information on products and services available for ordering under the multiple award schedules.

The Information Technology Management Reform Act takes effect 180 days from the date of enactment (February 10, 1996).

David McClure, Assistant Director
Danny R. Latta, Adviser
Alicia Wright, Senior Business Process Analyst
Bill Dunahay, Senior Evaluator
John Rehberger, Information Systems Analyst
Shane Hartzler, Business Process Analyst
Eugene Kudla, Staff Evaluator
Pursuant to a congressional request, GAO reviewed: (1) the information technology (IT) investment practices of several federal agencies, comparing them to those used by leading private- and public-sector organizations; and (2) the Office of Management and Budget's (OMB) response to the investment requirements of the Information Technology Management Reform Act (ITMRA). GAO found that: (1) leading private- and public-sector organizations manage their IT projects as investments and rank projects based on maximizing returns and minimizing risks; (2) the five federal agencies reviewed have some elements of an IT investment process in place, but they lack a complete, institutionalized approach that would fulfill the requirements of ITMRA or the Paperwork Reduction Act; (3) the agencies reviewed need to manage their IT projects as an investment portfolio, viewing each project as a competing investment and making decisions based on the project's overall contribution to the agency's goals; (4) the agencies do not conduct postimplementation reviews (PIR) to determine actual costs, returns, and risks; (5) agency IT decisions are based on inconsistent or inaccurate data; (6) with the exception of the Coast Guard, none of the agencies reviewed have a set of explicit criteria for making IT decisions; and (7) OMB has taken a proactive role in developing IT investment policy to assist federal agencies in implementing ITMRA requirements.
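The portfolio approach credited to leading organizations above can be made concrete with a small sketch. The project names, cost and benefit figures, and the simple risk-adjusted scoring rule below are illustrative assumptions only; they are not the criteria used by any of the agencies reviewed or prescribed by ITMRA.

```python
# Hypothetical illustration of treating IT projects as an investment
# portfolio: each candidate carries an estimated cost, net benefit, and risk
# score, and projects are funded in descending order of a risk-adjusted score
# until the budget runs out. All data and the scoring rule are invented.

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    cost: float          # estimated cost, in millions
    net_benefit: float   # estimated net benefit (return), in millions
    risk: float          # risk score between 0 (low) and 1 (high)

def risk_adjusted_score(p: Project, risk_weight: float = 0.5) -> float:
    """Return per dollar invested, discounted by the project's risk score."""
    return (p.net_benefit / p.cost) * (1.0 - risk_weight * p.risk)

def select_portfolio(projects: list[Project], budget: float) -> list[Project]:
    """Greedily fund the highest-scoring projects that fit within the budget."""
    funded, remaining = [], budget
    for p in sorted(projects, key=risk_adjusted_score, reverse=True):
        if p.cost <= remaining:
            funded.append(p)
            remaining -= p.cost
    return funded

if __name__ == "__main__":
    candidates = [
        Project("Case management modernization", cost=12.0, net_benefit=30.0, risk=0.4),
        Project("Legacy payroll replacement", cost=20.0, net_benefit=35.0, risk=0.7),
        Project("Field office network upgrade", cost=6.0, net_benefit=10.0, risk=0.2),
    ]
    for p in select_portfolio(candidates, budget=25.0):
        print(f"Fund: {p.name} (score = {risk_adjusted_score(p):.2f})")
```

A post-implementation review would then compare each funded project's actual costs, returns, and risks against these estimates, which is the evaluation step the review found agencies generally not performing.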
You are an expert at summarizing long articles. Proceed to summarize the following text: Joint STARS is a joint Air Force and Army wide-area surveillance and target attack radar system designed to detect, track, classify, and support the attack of moving and stationary ground targets. This $11 billion major defense acquisition program consists of air and ground segments— refurbished 707 aircraft (designated the E-8) equipped with radar, operation and control, data processing, and communications subsystems, together with ground stations equipped with communications and data processing subsystems. Low-rate initial production (LRIP) of the Joint STARS aircraft began in fiscal year 1993. In line with 10 U.S.C. 2399, DOD’s final decision to proceed beyond LRIP first required the DOD Director of Operational Test and Evaluation (DOT&E) to submit a report to Congress, referred to as the Beyond LRIP report, stating whether (1) the test and evaluation performed was adequate and (2) testing demonstrated that the system is effective and suitable for combat, that is, operationally effective and suitable. The Joint STARS aircraft was scheduled to begin its initial operational test and evaluation—referred to as the Joint STARS multi-service operational test and evaluation—in November 1995. That testing was delayed and then changed because of the deployment of Joint STARS assets to the European theater to support Operation Joint Endeavor in Bosnia. The Air Force Operational Test and Evaluation Center (AFOTEC) and the U.S. Army Operational Test and Evaluation Command conducted a combined development and operational test of Joint STARS from July through September 1995 and an operational evaluation of the system during Operation Joint Endeavor from January through March 1996. Two Air Force Joint STARS aircraft and 13 Army Joint STARS ground station modules were deployed to support Operation Joint Endeavor and operationally evaluated from January through March 1996. After analyzing the data from the combined development and operational test and the operational evaluation performed during Operation Joint Endeavor, AFOTEC issued its Joint STARS multi-service operational test and evaluation final report on June 14, 1996. DOT&E staff analyzed the same and additional data and the Director issued his Beyond LRIP report to Congress on September 20, 1996. On September 25, 1996, the Under Secretary of Defense for Acquisition and Technology signed an acquisition decision memorandum approving the Joint STARS program’s entry into full-rate production with a total planned quantity of 19 aircraft. LRIP of the Joint STARS aircraft began in fiscal year 1993. By statute, 10 U.S.C. 2399, the “Secretary of Defense shall provide that a major defense acquisition program may not proceed beyond low-rate initial production until initial operational test and evaluation of the program is completed.” Operational test and evaluation is the primary means of assessing weapon system performance in a combat-representative environment. It is defined as the (1) field test, conducted under realistic combat conditions, to determine an item’s effectiveness and suitability for use in combat by typical military users and (2) evaluation of the results of such a test. If used effectively, operational test and evaluation is a key internal control measure to ensure that decisionmakers have objective information available on a weapon system’s performance, thereby minimizing risks of procuring costly and ineffective systems. 
Joint STARS was moved from low-rate to full-rate production even though (1) it performed poorly during both the combined development and operational test and the operational evaluation in Bosnia, (2) excessive contractor effort was needed to support Operation Joint Endeavor, (3) the suitability and sustainability of the system are questionable since it uses refurbished 25- to 30-year-old airframes, and (4) operational software is considered significantly immature. In its Beyond LRIP report, DOT&E stated that Joint STARS had only demonstrated effectiveness for operations other than war. The report indicated that of three critical operational issues used to judge effectiveness, only one had been demonstrated as met ". . . with limitations." Those critical operational issues related to (1) performance of the tactical battlefield surveillance mission, that is, surveillance—"met with limitations"; (2) support of the execution of attacks against detected targets, that is, target attack support; and (3) the provision of information to support battlefield management and target selection, that is, battle management. The effectiveness critical operational issues were judged based on seven supporting measures. In its report to Congress, DOT&E listed four of those measures of effectiveness as "not met" during the system's combined development and operational test and did not list any as having been demonstrated during the Operation Joint Endeavor operational evaluation. "In the current configuration, the [Joint STARS] aircraft has not demonstrated the ability to operate at the required maximum altitude; adequate tactics, techniques, or procedures to integrate [Joint STARS] into operational theaters have not been developed; [Joint STARS] exceeded the break rate and failed the mission reliability rate during [Operation Joint Endeavor]. During , [Joint STARS] did not achieve the effective time-on-station requirement." He concluded that without corrective actions, "[Joint STARS] would not be suitable in higher intensity conflict" and later in the report judged that the system "as tested is unsuitable." Analysis of DOT&E's Beyond LRIP report indicates not only that Joint STARS had disappointing test results but also that extensive follow-on operational testing of Joint STARS is needed. In its Beyond LRIP report, DOT&E presented a table that reported its findings from the combined development and operational test and the Joint STARS Operation Joint Endeavor operational evaluation and indicated where further testing is required. Our analysis of that table indicates that at most only 25 of 71 test criteria could be judged met. DOT&E considers 18 of those 25 to require no further testing, that is, DOT&E judges them clearly met. However, our analysis also indicates that 19 test criteria were clearly not met and that as many as 26 might not have been met. Twenty-seven of the criteria could not be determined in either the combined development and operational test or the Operation Joint Endeavor operational evaluation. Of the 71 Joint STARS operational test and evaluation criteria listed, DOT&E indicates that 53, or about 75 percent, require further testing. In addition to the above, DOT&E also noted that there were several operational features present during the Joint STARS Operation Joint Endeavor deployment that were essential to its mission accomplishment but were not included in the recent production decision. It provided two specific examples—satellite communications and a deployable ground support station.
DOT&E believes these features “will be a necessary part of the production decision to achieve a capable [Joint STARS] system.” It also noted the need for other features—moving target indicator clutter suppression, communications improvements, terrain masking tools for ground station module operators, and linkage to operational theater intelligence networks. Since at least two of the features present during Operation Joint Endeavor were “essential” to its mission accomplishment have already been developed, and may be needed “to achieve a capable Joint STARS system,” those features should also be tested during the planned Joint STARS follow-on test and evaluation. “ must yield the most credible and objective results possible. All facets of the test effort must operate under the rules that support total objectivity and prevents improper data manipulation.” The test plan also states that interim contractor support “will be limited to perform ground maintenance only; no in-flight support.” Regarding the Army’s ground station modules, it states that “the Army maintenance concept does not call for at any level . . . .” “Approximately 80 contractors were deployed to support the E-8C. However, three or four systems engineers flew on each flight to ensure they could provide system stability and troubleshooting expertise during missions. Additionally, three or four software engineers were on the ground full time, researching and developing fixes to software problems identified during the deployment.” AFOTEC also reported that “Each of the had one contractor representative on site and on call with additional help available as necessary. Five contractor representatives remained at [Rhein-Main Air Base] and functioned as a depot.” The AFOTEC report stated that the “test director agreed to contractor participation in the to a greater extent permitted under US Public Law, Title 10, Section 2399.” When we formally expressed our concerns about the significant contractor involvement in Operation Joint Endeavor, DOD did not directly acknowledge that contractors were utilized beyond the constraints of the law governing operational test and evaluations. It stated that “were this solely an , contractors would not have been utilized beyond the constraints of 10 U.S.C. §2399,” and noted that the contractors were involved in the Joint STARS operation to support the mission. It further stated that employing Joint STARS in Operation Joint Endeavor “allowed the system to be operated and tested at a greater operational tempo than the system would have undergone in traditional testing.” DOD also stated that “because of the developmental nature of the aircraft, we needed to have more contractor personnel involved than we would otherwise have had.” It is understandable that DOD wanted to provide the best support possible in Operation Joint Endeavor. However, such significant contractor use neither supports a conclusion that the system is operationally effective or suitable for combat, nor is it indicative of a level of system maturity that justifies full-rate production. Joint STARS failure to meet its maintainability criteria during an operation less demanding than combat, even with such significant contractor involvement beyond that planned for in combat, also raises the question of the Air Force’s ability to develop a cost-effective maintenance plan for the system. This issue is recognized in the Under Secretary’s acquisition decision memorandum approving Joint STARS full-rate production. 
In that memorandum, the Under Secretary called for the Air Force to fully examine Joint STARS affordability, sustainability, and life-cycle costs, including the scope of contractor support. “If it is determined that the system will be operated at rates similar to AWACS [Airborne Warning And Control System], it is questionable whether the [Joint STARS aircraft] can be sustained over time. Airframe problems have already been experienced on the existing [Joint STARS airframes], including a hydraulics failure and a cracked strut in the fuselage between the wings.” In discussing the Joint STARS aircraft engines, DOT&E noted that they “are 1950s technology and may not be reliable” and cited AFOTEC’s reporting that engine failures were among the principal reasons that the aircraft failed to meet the break rate criteria during Operation Joint Endeavor. “. . . would face operational challenges taking off from five runways in Korea, each approximately 9,000 feet long. Operations out of Korea would likely require taking off with less fuel and subsequent aerial refueling or shortening the time on station.” Another area of Joint STARS suitability concern is the system’s growth potential. DOT&E has reported that it is not clear that the remanufactured 707 platforms will be capable of incorporating all of the planned upgrades, noting that the airframe limits the system’s growth potential both in weight and volume. It reported that as the current mission equipment already fills much of the fuselage, there is little room for expansion. DOT&E also noted that increasing the payload weight would require longer takeoff runways or taking off with less fuel, thus increasing the aerial refueling requirement or decreasing mission duration. DOT&E also noted that the system’s current computers limited its growth potential due to their having very little reserve processor time or memory. It stated that the Air Force requires that no more than 50 percent of central processor unit cycles or memory be utilized by a new system. DOT&E reported that “None of the E-8C computer subsystems meet these requirements.” It provided an example of the problem, stating that “the memory reserve of the operator workstations still does not meet the requirement, even after being increased from 128 megabytes to 512 megabytes just prior to .” This assessment is another indicator of the program’s elevated risk. As DOT&E noted “Future software enhancements and modifications may require significant hardware upgrades. . . .” The AFOTEC report specifically pointed to the lack of maturity in Joint STARS software. For example, AFOTEC reported that “during Joint STARS , software deficiencies were noted on every E-8C subsystem;” the software “does not adequately support operator in executing the mission;” and “Joint STARS software does not show the expected maturity trends of a system at the end of development.” In discussing Joint STARS software maturity, DOD advised us that the AFOTEC report judged the system overall operationally effective and suitable. Specifically, in reference to software problems, DOD stated that “the majority of software faults that occurred during Operation Joint Endeavor were resolved while airborne in less than 10 minutes.” However, both AFOTEC and DOT&E had some critical concerns regarding how Joint STARS software functioned. 
For example, according to AFOTEC, the “Joint STARS software is immature and significantly impedes the system’s reliability and effectiveness,” and according to DOT&E “Immature software was clearly a problem during [Operation Joint Endeavor]. . .” “. . .the prime contractor had to be called in to assist and correct 69 software-specific problems during the 41 E-8C missions . . . .an average of 1.4 critical failures per flight. . .” “Communications control was lost on 69 percent of the flights.” “The system management and control processor failed and had to be manually reset on half of the flights.” DOD has stated that the Air Force “plans several actions to mature the software and provide the required support resources” and that “an interim software release in April 1997 will correct some software deficiencies identified during the operational evaluation.” DOD also noted that software updates will be loaded each year thereafter and that software changes are easily incorporated. How easily these software changes are incorporated remains to be seen because much of this software, according to AFOTEC and DOT&E, is poorly documented. For example, AFOTEC has reported that there are 395 deficiency reports open against the Joint STARS program, 318 of which are software related. DOT&E also stated that the more than 750,000 lines of Joint STARS software code are “poorly documented” and later commented that “Software problems with the communications and navigation systems were never fully corrected, even after extensive efforts by the system contractor.” These facts in combination with DOD’s comments raise the serious question as to which software deficiencies are to be addressed in the planned April software update. There is an opportunity not currently under consideration that could reduce the Joint STARS program cost and result in an improved system. Since the Joint STARS was approved for LRIP, the procurement cost objective of the Air Force’s share of the Joint STARS has increased by about $1 billion. Program costs escalated from approximately $5.2 billion to approximately $6.2 billion in then-year dollars. A DOD official informed us that of the $1 billion cost growth, $760 million is attributed to the increased cost to buy, refurbish, and modify the used 707 airframes to receive the Joint STARS electronics. The remaining cost growth is attributed to other support requirements and growth in required spare parts. At least as early as 1992, the Boeing Company proposed putting Joint STARS on newer Boeing 767-200 Extended Range aircraft, but this proposal was not accepted as cost-effective. According to the 1996 Boeing price list, the commercial version of this aircraft can be bought for between $82 million and $93 million depending on options chosen (this is flyaway cost—the cost of a plane ready to be flown in its intended use). Furthermore, the flyaway cost of a commercial Boeing 757, which a Boeing representative informed us is in many respects more comparable to the 707s being used, is listed at between $61 million to $68 million. The actual cost of procuring either of these aircraft could be lowered by volume discounts and by the cost of the commercial amenities not required. On the other hand, these aircraft would require modifications to receive Joint STARS equipment, which would raise their cost. DOD informed us that the cost of procuring, refurbishing, and modifying the current 707 aircraft to receive Joint STARS equipment is now estimated to be about $110 million per airframe. 
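The airframe cost comparison above is essentially arithmetic, and the short sketch below restates it. The $110 million refurbished-707 figure and the flyaway price ranges come from the text; the Joint STARS modification cost for a new airframe is left as the unknown, since the report gives no estimate for it.

```python
# Rough restatement of the per-airframe cost figures cited above. The
# refurbished-707 total and the new-aircraft flyaway ranges come from the
# text; the Joint STARS modification cost for a new airframe is the unknown.

CURRENT_707_TOTAL = 110.0        # procure, refurbish, and modify a used 707 ($M)
NEW_767_FLYAWAY = (82.0, 93.0)   # 1996 Boeing price list range, 767-200ER ($M)
NEW_757_FLYAWAY = (61.0, 68.0)   # 1996 Boeing price list range, 757 ($M)

def breakeven_modification_cost(flyaway_range):
    """Modification budget per new airframe before it exceeds the 707 cost."""
    low, high = flyaway_range
    return CURRENT_707_TOTAL - high, CURRENT_707_TOTAL - low

print("767-200ER breakeven modification cost ($M):",
      breakeven_modification_cost(NEW_767_FLYAWAY))
print("757 breakeven modification cost ($M):",
      breakeven_modification_cost(NEW_757_FLYAWAY))
```

On these figures alone, a new 767-200ER stays at or below the current per-airframe cost as long as its Joint STARS modifications cost less than roughly $17 million to $28 million, before considering volume discounts or deleted commercial amenities.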
The cost of procuring and preparing new aircraft might be comparable or even less than the current cost. In addition, the Air Force would acquire a new platform that could have (1) greater room for growth (both volume and weight), (2) take off capability from a shorter runway, (3) greater time-on-station capability, (4) significantly improved fuel efficiency, (5) extended aircraft life over the 707 currently used, and (6) reduced operational and support cost. In commenting on a draft of this report, DOD stated that it considered alternatives to the current air platform, both before LRIP started and at the full-rate production decision point. It also stated that the cost of moving the Joint STARS mission to an alternative platform would outweigh the benefits. We note, however, that at a meeting with DOD and service officials to discuss that draft, we asked about the reported DOD and service analyses. One Air Force official stated that the Air Force’s platform choice was not revisited prior to the full-rate production decision. None of the other 13 DOD and service officials present objected to that statement. Furthermore, when we asked for copies of the air platform analyses that were done in support of either the low-rate or the full-rate production decision, DOD was unable to supply those analyses. Finally, DOD officials have informed us that a Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance Mission Assessment has been performed that indicates that the Air Force could acquire a more effective system while saving $3 billion through the year 2010 by moving the Joint STARS mission to either a business jet or an unmanned aerial vehicle following the procurement of the twelfth current version Joint STARS aircraft. We have previously informed DOD of our concerns about the decision to move to full-rate production in spite of the numerous testing deficiencies reported by both AFOTEC and DOT&E. DOD responded that in making the decision to move to full-rate production, it “considered the test reports (both the services’ and the Director, Operational Test and Evaluation’s), the plans to address the deficiencies identified during developmental and operational testing, cost estimates, operational requirements, and other program information.” Although DOD believes that “none of the deficiencies identified are of a scope that warrants interrupting production,” the production decision memorandum clearly reflects a recognition that this program carries significant risk. In his memorandum, the Under Secretary of Defense for Acquisition and Technology directed (1) an update of the Joint STARS Test and Evaluation Master Plan to “address multi-service [operational test and evaluation] deficiencies (regression testing);” (2) acceleration of the objective and threshold dates for the planned Follow-on Operational Test and Evaluation; and (3) the Air Force to “fully examine [Joint STARS] affordability, sustainability, and life cycle costs including the scope of contractor use for field-level system support.” “I am writing to be sure you know that the President is personally committed to [Joint STARS], has engaged Chancellor Kohl on this issue and will continue his personal involvement with key allies to ensure our goal is achieved. 
I would ask that you underscore your personal support for our collective efforts on behalf of [Joint STARS] when you meet with your NATO and European counterparts.” Notwithstanding DOD’s September 1996 commitment to full-rate Joint STARS production, a DOD official informed us that the NATO armament directors in their November 1996 meeting delayed for 1 year any decision on designating Joint STARS as NATO’s common system or pursuing an alternate system to be developed. In the process of moving the Joint STARS program forward into full-rate production, DOD produced a Beyond LRIP report for Congress and thus moved past a key congressional reporting requirement that serves as an important risk management mechanism. The Beyond LRIP report to Congress that is required before major defense acquisition programs can proceed into full-rate production serves to inform Congress of the adequacy of the operational testing done on the system and to provide it with a determination of whether the system has demonstrated effectiveness and suitability. Having issued this report, DOT&E is under no further obligation to report to Congress at the Beyond LRIP report level of detail on the adequacy of the operational testing or on whether the system has demonstrated effectiveness and suitability for combat. However, DOD plans follow-on test and evaluation of the system to address the deficiencies identified during the system’s earlier testing. On September 20, 1996, DOT&E sent to Congress a Joint STARS “Beyond LRIP” report that (1) clearly indicates that further operational testing is needed, (2) could only declare effectiveness for operations other than war, and (3) stated that Joint STARS is unsuitable as tested. On September 25, 1996, DOD approved the full-rate production of Joint STARS. In the acquisition memorandum approving Joint STARS full-rate production, the Under Secretary of Defense for Acquisition and Technology called for an accelerated follow-on operational test and evaluation of Joint STARS that is to address the deficiencies identified in the initial operational test and evaluation DOT&E reported on in the Beyond LRIP report to Congress. The planned follow-on operational test and evaluation will provide an opportunity to judge the Joint STARS program’s progress in resolving the issues identified in earlier testing. Notwithstanding any concurrent efforts to have Joint STARS designated as a NATO common system, Joint STARS test performance and the clearly unresolved questions about its operational suitability and affordability should have, in our opinion, caused DOD to delay the full-rate production decision until (1) the system had, through the planned follow-on operational test and evaluation, demonstrated operational effectiveness and suitability; (2) the Air Force had completed an updated analysis of alternatives for the Joint STARS to address the identified aircraft suitability and cost issues; and (3) the Air Force had developed an analysis to determine whether a cost-effective maintenance concept could be designed for the system. Furthermore, as they were judged “essential” to mission accomplishment and needed “to achieve a capable Joint STARS system,” the satellite communications and deployable ground support station features (present, but untested, during Operation Joint Endeavor) should also be tested during the planned Joint STARS follow-on operational test and evaluation. Concerns of the magnitude discussed in this report are not indicative of a system ready for full-rate production. 
The program should have continued under LRIP until the issues identified by AFOTEC and DOT&E were resolved and the system was shown to be effective and suitable for combat. Furthermore, the recent cost growth related to refurbishing and modifying the old airframes being used for Joint STARS and questions regarding the suitability of those platforms indicate an opportunity to reduce the program's cost and improve the systems acquired. We believe, therefore, that an updated study of the cost-effectiveness of placing Joint STARS on new, more capable aircraft is warranted. We recommend that the Secretary of Defense direct the Air Force to perform an analysis of possible alternatives to the current Joint STARS air platform, to include placing this system on a new airframe. Because of (1) DOD's decision to commit to full-rate production in the face of the test results discussed in this report and (2) its subsequent decision to do additional tests while in production to address previous test deficiencies, we are convinced that DOD plans to proceed with the program. However, if Congress agrees that there is unnecessarily high risk in this program and believes the risk should be reduced, it may wish to require that: The Air Force obtain DOT&E approval of a revised test and evaluation master plan (and all plans for the tests called for in that master plan) for follow-on operational testing, to include adequate coverage of gaps left by prior testing and testing of any added features that are considered part of the standard production configuration and that DOT&E considers key system components. DOT&E provide a follow-on test and evaluation report to Congress evaluating the adequacy of all testing performed to judge operational effectiveness and suitability for combat and providing a definitive statement of whether the system has demonstrated operational effectiveness and suitability. DOD develop and provide Congress an analysis of alternatives report on the Joint STARS air platform that considers the suitability of the current platform and other cost-effective alternatives, and the life-cycle costs of the current platform and best alternatives. In commenting on a draft of this report, DOD disagreed with our recommendation that the Air Force be directed to perform an analysis of possible alternatives to the current Joint STARS air platform. It also disagreed with our suggestion that Congress may wish to require DOD to develop and provide Congress a report on that analysis. DOD stated that alternative platforms were considered prior to both the start of LRIP and the full-rate production decision. DOD stated that based on (1) the fact that over half the fleet is already in the remanufacturing process or delivered to the user; (2) the large nonrecurring costs that would be associated with moving the Joint STARS mission to a different platform; (3) the additional cost to operate and maintain a split fleet of Joint STARS airframes; and (4) the expected 4-year gap in deliveries such a strategy would force, the costs of moving the Joint STARS mission to a different platform outweigh the benefits. DOD's comment about having previously considered alternative platforms is inconsistent with the information we developed during our review and with Air Force comments provided at our exit conference. In an effort to reconcile this inconsistency, we requested copies of the prior analyses of alternative platforms, but DOD was not able to provide them.
DOD’s statement that the costs of moving the Joint STARS mission to another platform would outweigh the benefits contradicts Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance Mission Assessment briefings given the Quadrennial Defense Review. Those briefings recommend (1) limiting Joint STARS production to 12 aircraft, (2) moving the Joint STARS mission to either corporate jets or Unmanned Aerial Vehicles, and (3) phasing out Joint STARS 707 variants as quickly as the new platform acquisitions will allow. According to those briefings, implementation of this recommendation could result in a more effective system and save over $3 billion through fiscal year 2010. We believe that the issue clearly warrants further consideration. Furthermore, given DOD’s resistance to the concept, we are more convinced of the merits of our suggestion that Congress might wish to require a report on such an analysis. In commenting on our draft report, DOD also indicated that congressional direction was unneeded on our suggestions that Congress might wish to require (1) DOT&E approval of a revised test and evaluation master plan for the planned Joint STARS follow-on operational test and evaluation and (2) DOT&E to provide Congress with a follow-on operational test and evaluation report on the adequacy of Joint STARS testing and stating whether Joint STARS has demonstrated operational effectiveness and suitability. DOD stated that congressional direction on the first point was unneeded because the Joint STARS full-rate production decision memorandum required that the test and evaluation master plan be updated for Office of the Secretary of Defense approval and current DOD policy is that DOT&E will review, approve, and report on oversight systems in follow-on operational test and evaluation. DOD also stated that congressional direction on the second point is unneeded because DOT&E has retained Joint STARS on its list of programs for oversight and is to report on the system in its annual report to Congress as appropriate. DOD’s response did not directly address our point since, as DOD pointed out, the acquisition decision memorandum that approved full-rate production required Office of the Secretary of Defense approval, not DOT&E approval, of the follow-on operational test and evaluation master plan. During the course of our review, DOD officials informed us that there was significant disagreement between the Air Force and DOT&E as to what follow-on testing was needed. It was indicated that the issue would probably have to be resolved at higher levels within the department, an indication of greater flexibility than DOD implies. Furthermore, while DOD stated there were some improvements and enhancements “that could benefit the warfighter” and acknowledged that those features were not tested, it did not respond to our comments that DOT&E judged those features “essential” to mission accomplishment or commit to their operational test and evaluation. Given these facts, we have not only maintained our suggestion that Congress may wish to require the Air Force to obtain DOT&E approval of a revised test and evaluation master plan, but also strengthened it to include DOT&E approval of supporting test plans. In its response to our suggestion that Congress may wish to require that DOT&E provide it a detailed, follow-on test and evaluation report, DOD states congressional direction is unnecessary as DOT&E will report on the system, among many others, in its annual report to Congress. 
DOD's comment fails to recognize, however, that we are suggesting that, given the already reported test results, Congress may want a more detailed report outlining the adequacy of, and the system's performance during, follow-on operational testing to help in its oversight and provide it assurance that the system's problems have been substantially resolved. Given that (1) Congress felt such reporting to be beneficial enough to require it before a system can proceed beyond LRIP and (2) the fact that DOT&E, in the required report provided for Joint STARS, could not certify effectiveness for war and found the system unsuitable as tested, we continue to believe that Congress may wish to require a similar report based on the follow-on operational test and evaluation planned. DOD's comments are reprinted in their entirety in appendix I, along with our evaluation. To determine whether Joint STARS test performance indicates a maturity justifying full-rate production, we interviewed officials and reviewed documents in Washington, D.C., from the DOD Office of the Director of Operational Test and Evaluation and the Joint STARS Integrated Product Team. We reviewed the Air Force Operational Test and Evaluation Center's multi-service operational test and evaluation plan and its final report on that testing and the DOD Director of Operational Test and Evaluation's Beyond LRIP report. To determine whether DOD considered and resolved important cost and performance issues prior to making its full-rate production decision, we reviewed Joint STARS program budget documents and program-related memoranda issued by the Under Secretary of Defense for Acquisition and Technology. To determine whether it is possible that a more useful operational test and evaluation report can be provided to Congress, we reviewed the statute governing operational testing and evaluation, examined DOT&E's Beyond LRIP report, and considered other relevant program information. We considered, and incorporated where appropriate, DOD's response to our September 20, 1996, letter of inquiry and its response to a draft of this report. We conducted this review from October 1996 through April 1997 in accordance with generally accepted government auditing standards. We are sending copies of this letter to other appropriate congressional committees; the Director, Office of Management and Budget; and the Secretaries of Defense, the Army, and the Air Force. Copies will also be made available to others upon request. If you or your staff have any questions, please contact me, Mr. Charles F. Rey, Assistant Director, or Mr. Bruce Thomas, Evaluator-in-Charge, at (202) 512-4841. The following are GAO's comments on the Department of Defense's (DOD) letter dated March 31, 1997. 1. We have not suggested or recommended that Joint STARS production be interrupted. We have, however, suggested actions that we believe (1) will help reduce the program's risk; (2) could result in the acquisition of a more effective, less costly system; and (3) could help decisionmakers ensure that the Joint STARS program continues to make progress. 2. The report has been modified in light of DOD's comments. 3. DOD's indication that other factors were considered in deciding to proceed to full-rate production is a signal that DOD and the Air Force are willing to accept a high level of risk even when the Director, Operational Test and Evaluation (DOT&E) has concluded that the system was unsuitable as tested and operational effectiveness for war remains to be demonstrated.
We believe, given the system's test performance as reported by both the Air Force Operational Test and Evaluation Center (AFOTEC) and DOT&E and the program's procurement cost growth of $1 billion between the low-rate and full-rate production decision points, that an informed full-rate production decision required the following information: (1) an approved test and evaluation master plan for follow-on operational testing and specific plans for the tests called for in that master plan, (2) the results of the already ongoing study of ways to reduce the program's cost, and (3) an analysis of alternatives to the current platform. DOD did not have these items in hand when it made its decision. We must also note that DOD implies that our recommendations would require a break in production. This is inaccurate. As we stated in the body of our report, the program could have continued under low-rate initial production (LRIP) until operational effectiveness and suitability for combat were demonstrated and plans to address identified deficiencies and reduce program costs were completed. 4. In its report on the Joint STARS multi-service operational test and evaluation, AFOTEC stated that "Joint STARS software is immature and significantly impedes the system's reliability and effectiveness." We do not believe that, given the software-intensive nature of the system, this statement supports a conclusion that the system could be judged operationally effective. 5. We must note that follow-on operational test and evaluation of the system was planned before the full-rate production decision. The full-rate production decision called for acceleration of that testing and for that testing to address deficiencies identified in the earlier tests. Joint STARS could have continued under LRIP pending a demonstration of operational effectiveness and suitability. 6. This speaks to the number of aircraft missions planned and the number for which an aircraft was provided. It does not address the quality or quantity of the support provided during those missions. Furthermore, DOD's comment refers to the same operation that is reported on in both the Air Force and DOT&E reports and in this report. 7. U.S.-based contractor support was utilized during the first Operation Joint Endeavor deployment. It is also our understanding that during the second Operation Joint Endeavor deployment the Air Force may have utilized a "reach-back" maintenance concept in which U.S.-stationed contractor staff were providing field support through satellite communications. Moreover, DOD and Air Force officials told us that at least at the beginning of the second Operation Joint Endeavor deployment, contractor staff were flying on the deployed aircraft. This clearly raises the question of what the overall level of contractor support was for both the first and second deployments. "As already discussed, extensive efforts by the system contractor were required to achieve the demonstrated availability for the E-8C aircraft. Even with those efforts the system was not able to meet the user criteria for several measures directly related to the maintenance concept in place during —a concept that involved considerably more contractor support than previously envisioned." 11. As we noted in the body of our report, Joint STARS failed to meet test criteria during an operation less demanding than combat, even with such significant contractor involvement beyond that planned for in combat.
In discussing operational tempo in its Beyond LRIP report, DOT&E stated that if the system is operated at rates similar to the Airborne Warning and Control System, “it is questionable whether the [Joint STARS aircraft] can be sustained over time.” DOD commented that an unbiased assessment of the measure of Joint STARS’ ability to maintain the required tempo could not be made and would be tested during the follow-on operational test and evaluation. We believe that an informed full-rate production decision requires knowledge of a system’s ability to satisfy the operational tempo expected of it. DOD made its Joint STARS full-rate production decision without this knowledge. 12. We understand that Joint STARS, like most systems, has limitations that need to be planned around. At issue here is a question of how great those limitations are and whether they are acceptable. DOD states that “the user is satisfied that the system meets requirements.” However, we must note that the Air Force’s own Operational Test and Evaluation Center reported that the “two critical suitability [measures of performance, sortie generation rate and mission reliability rate], were affected by [Operation Joint Endeavor] contingency requirements and system stability problems.” “The high failure rate of aging aircraft components affected as critical failures were statistically determined to affect over 30 percent of the sorties flown. Analysis revealed the elevated critical failure rate was steady and showed no potential for improvement. Technical data and software immaturity affected the maintainability of the aircraft, and contractor involvement further compromised clear insight into the Air Force technicians’ ability to repair the system.” AFOTEC also reported on Joint STARS performance relative to 15 supporting suitability criteria. It stated “Eight did not meet users’ criteria. One was not tested. Only one . . . met the users’ criteria. The remaining five are reported using narrative results.” 13. DOD discusses only the weight growth of funded activities, leaving open the question of whether there are future, but currently unfunded, improvements planned that will add weight growth. Air Force officials told us that the Airborne Warning and Control System had experienced weight growth over the life of its program. That growth was attributed to the system’s being given added tasks over time. We believe it reasonable to expect that the Joint STARS program experience might track that of the Airborne Warning and Control Systems program, that is, be given added tasks and face weight growth as a result. Also, regarding Joint STARS room for growth, DOD previously advised us that Joint STARS currently has about 455,000 cubic inches of space available. We must note that this equates to a volume of under 7 feet cubed and that in commenting on the system’s space limitation, DOT&E stated “There is little room available for additional people or operator workstations.” 14. As we stated in the body of our report, how easily these software changes are incorporated remains to be seen. 15. We requested and DOD provided additional information on this point. DOD’s subsequent response indicates that this DOD comment was in error. In its subsequent response, DOD stated that the follow-on test and evaluation was accelerated “to reflect desire for earlier to evaluate fixes to deficiencies.” We believe this statement reflects a recognition of increased program risk. 16. 
The acquisition decision memorandum approving Joint STARS production clearly indicates that the Skantze study mentioned was not completed at that time. We believe that the full-rate production decision should have been made with the Skantze study in hand. Furthermore, we do not understand why DOD felt the need to direct the Air Force to fund and implement a plan that is to save it money, but felt no need to direct the Air Force to examine alternative platforms that at least one other DOD panel had stated would not only save $3 billion but also provide greater effectiveness. 17. We believe that DOT&E approval should be required not only of the Joint STARS Test and Evaluation Master Plan but also of all supporting test plans. We have changed the language of this matter for congressional consideration accordingly. 18. We are suggesting that Congress may wish to request a more detailed report, one at the Beyond LRIP report level of detail, a level of detail not provided in DOT&E's annual report. Given that DOT&E could only state effectiveness for operations other than war—could not state a belief as to whether the system would be effective in two of the three critical operational roles it is expected to perform in war—and found the system unsuitable as tested, we believe that such a report would help Congress maintain program oversight. DOD's comment of "other reports as appropriate" leaves the matter in DOD's hands to decide if Congress would benefit from such a report.
GAO reviewed the Department of Defense's (DOD) recent decision to commit to the full-rate production of the Joint Surveillance Target Attack Radar System (Joint STARS), focusing on whether: (1) the system had demonstrated a level of maturity through testing to justify a full-rate production commitment; (2) DOD considered and resolved important cost and performance issues prior to making its decision; and (3) there are future actions that could reduce program risk. GAO noted that: (1) Joint STARS' performance during its combined development and operational test and the operational evaluation done in Bosnia do not support a decision to commit the system to full-rate production; (2) the system's operational effectiveness and suitability were not demonstrated during the operational testing; (3) DOD's decision to move Joint STARS into full-rate production was premature and raised the program's level of risk; (4) the program could have continued under low-rate initial production (LRIP) until operational effectiveness and suitability for combat were demonstrated and plans to address identified deficiencies and reduce program costs were completed; (5) DOD decided in favor of Joint STARS full-rate production without the benefit of that information; (6) during the period that the full-rate production decision was being considered, the Assistant to the President for National Security was promoting the sale of the system to the North Atlantic Treaty Organization (NATO); (7) in an August 10, 1996, memorandum to the Secretaries of State, Defense, and Commerce and to the Chairman of the Joint Chiefs of Staff, the Assistant to the President stated that: "We have been working through various military, diplomatic, and political channels to secure NATO support for a fall 1996 decision in principle by the Conference of Armament Directors...to designate (Joint STARS) as NATO's common system"; (8) a DOD official informed GAO that in November 1996, the NATO armament directors delayed their decision on Joint STARS for 1 year; (9) before DOD approved the full-rate production of Joint STARS, the Director of Operational Test and Evaluation (DOT&E) provided Congress with a Joint STARS "Beyond LRIP" report; (10) the report clearly indicates that further operational testing is needed, DOT&E could only declare effectiveness for operations other than war, and the system was unsuitable as tested; (11) DOD plans follow-on test and evaluation to address the deficiencies identified during the earlier testing; (12) there is an opportunity not currently under consideration that could reduce the Joint STARS' program cost and result in an improved system; (13) since the Joint STARS was approved for LRIP, the procurement cost objective of the Air Force's share of the Joint STARS has increased by about $1 billion, primarily due to the greater effort and more resources needed to refurbish the 25- to 30-year-old Boeing 707 airframes than previously anticipated; and (14) it may now be more cost effective for the Air Force to buy the Boeing 767-200 Extended Range aircraft or some other new, more capable aircraft.
You are an expert at summarizing long articles. Proceed to summarize the following text: FECA provides cash and other benefits to eligible federal employees who suffer temporary or permanent disabilities resulting from work-related injuries or diseases. DOL’s Division of Federal Employees’ Compensation in the Office of Workers’ Compensation Programs (OWCP) administers the FECA program and charges agencies for whom injured employees worked for benefits provided. These agencies subsequently reimburse DOL’s Employees’ Compensation Fund from their next annual appropriation. FECA benefits are adjusted annually for cost-of-living increaseshas large FECA program costs. At the time of their injuries, 43 percent of and are neither subject to age restrictions nor taxed. USPS FECA beneficiaries in 2010 were employed by USPS, as shown in table 1. One way to measure the adequacy of FECA benefits is to consider wage replacement rates, which are the proportion of pre-injury wages that are replaced by FECA. Wage replacement rates that do not account for missed career growth capture the degree to which a beneficiary is able to maintain his or her pre-injury standard of living whereas wage replacement rates that account for missed career growth capture the degree to which a beneficiary is able to maintain his or her foregone standard of living (i.e., standard of living absent an injury). Data limitations can preclude calculating wage replacement rates that account for missed career growth; however, doing so provides a more complete story of the comparison between an injured worker and his or her counter-factual of having never been injured. Wage replacement rates can be targeted by policymakers; however, there is no consensus on what wage replacement rate policies should target. FECA beneficiaries receive different benefits past retirement age than workers who retire under a federal retirement system. Specifically, under FERS, federal retirees have a benefit package comprised of three components: the FERS annuity, which is based on years of service and high-3 average pay; the TSP, which is similar to a 401(k); and Social Security benefits. FECA benefits do not change at retirement age and beneficiaries cannot receive a FERS annuity and FECA benefits simultaneously. In addition, FECA beneficiaries cannot contribute to their TSP accounts post-injury, but they can receive benefits from contributions made to their TSP accounts prior to being injured. In addition, Social Security benefits attributable to federal service are offset by FECA. If an individual has a disability and no current capacity to work, OWCP determines that he or she is a total-disability beneficiary and calculates long-term FECA benefits as a proportion of the beneficiary’s entire income at the time of injury. In 2010, 31,880 FECA beneficiaries received long-term total-disability cash benefits. Alternatively, if an individual recovers sufficiently to return to work in some capacity, OWCP determines that he or she is a partial-disability beneficiary and reduces his or her FECA benefits from the total-disability amount. For such partial-disability beneficiaries, OWCP calculates long- term benefits based on any loss of wage earning capacity (LWEC), as compared to their pre-injury wages. A beneficiary’s LWEC may be based on the difference between their pre-injury wages and their actual post-injury earnings if the beneficiary has found employment that OWCP determines to be commensurate with their rehabilitation. 
Alternatively, OWCP may construct a beneficiary's LWEC based on the difference between pre-injury wages and OWCP's estimate of what the FECA beneficiary could earn in an appropriate job placement (constructed earnings). In 2010, 10,594 FECA beneficiaries received long-term partial-disability cash benefits. In addition to our work on FECA benefit levels, we have also conducted work on program integrity and management. We have identified several weaknesses in these areas. Most recently, in April 2013, we found examples of improper payments and indicators of potential fraud in the FECA program, which could be attributed, in part, to oversight and data-access issues. For example, we found cases where, in comparison to state wage data reports, individuals underreported their employment wages to DOL. Underreporting earnings could result in overpayments of FECA benefits. DOL did not have access to wage data to corroborate the self-reported incomes. Because of this limitation, we included a matter for congressional consideration, stating that Congress should consider granting DOL additional authority to access wage data to help verify claimants' reported income and help ensure the proper payment of benefits. Congress has not yet passed legislation to give DOL additional authority to access wage data. We also found that, although some FECA claimants may be eligible to receive both FECA and unemployment insurance (UI) benefits, DOL did not have a process to share the necessary data to help the states identify overlapping payments. We recommended that the Secretary of Labor assess the feasibility of developing a cost-effective mechanism to share FECA compensation information with states. DOL agreed with the recommendation and has taken steps to develop a cost-effective mechanism to share FECA compensation information with states, but has not completed its efforts. Specifically, in August 2014, a DOL official stated that the agency's Office of Workers' Compensation Programs and Office of Unemployment Insurance are working with DOL's Office of the Solicitor to develop agreements that would allow DOL to provide information on FECA compensation to states for use in determining unemployment benefits. When completed, these actions should help states to identify whether claimants are inappropriately receiving overlapping UI and FECA payments. In our simulations, compensating non-USPS and USPS total-disability beneficiaries at a single rate of either 66-2/3 or 70 percent of wages at injury, regardless of the presence of dependents, reduced median wage replacement rates. Median wage replacement rates overall, and within the subgroups we examined, were generally lower under the 66-2/3 percent compensation proposal. As we reported in 2012, compared to the current FECA program, both proposals reduced 2010 median wage replacement rates for total-disability non-USPS and USPS beneficiaries, as shown in figure 1. The decreases in the overall median wage replacement rates were due to the greater proportion of beneficiaries who had a dependent—73 percent of non-USPS beneficiaries and 71 percent of USPS beneficiaries. Beneficiaries with a dependent received lower compensation under both proposals, whereas beneficiaries without a dependent saw their compensation increase or stay the same. As shown in the middle group of bars in figure 1, the results of our simulation indicate that median wage replacement rates for USPS beneficiaries were generally higher than those for non-USPS beneficiaries. A simplified sketch of this rate comparison appears below.
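To make the rate comparison concrete, the sketch below computes wage replacement rates for one hypothetical total-disability beneficiary under the current rates (66-2/3 percent of pay at injury, or 75 percent with an eligible dependent) and under the two single-rate proposals. It is a simplified illustration rather than the simulation model used for our reports: the pay figures, cumulative cost-of-living factor, and flat tax rate are invented, and take-home pay is approximated with that single tax rate.

```python
# Simplified wage replacement rate comparison for one hypothetical
# total-disability beneficiary. All dollar figures, the cumulative COLA
# factor, and the flat tax rate are illustrative assumptions; the report's
# simulations used case-level data and modeled 2010 take-home pay in detail.

CURRENT_RATES = {"current, no dependent": 2 / 3, "current, with dependent": 0.75}
PROPOSED_RATES = {"proposed flat 66-2/3": 2 / 3, "proposed flat 70": 0.70}

def feca_benefit(pay_at_injury, rate, cola_factor=1.15):
    """Annual FECA cash benefit: a share of pay at injury, grown by COLAs."""
    return pay_at_injury * rate * cola_factor

def wage_replacement_rate(benefit, counterfactual_gross_2010, tax_rate=0.20):
    """Benefit as a share of the take-home pay the worker would have had in 2010."""
    take_home = counterfactual_gross_2010 * (1 - tax_rate)
    return benefit / take_home

pay_at_injury = 50_000        # assumed gross pay at the time of injury
counterfactual_2010 = 65_000  # assumed 2010 gross pay had the injury not occurred

for label, rate in {**CURRENT_RATES, **PROPOSED_RATES}.items():
    benefit = feca_benefit(pay_at_injury, rate)
    print(f"{label:>25}: {wage_replacement_rate(benefit, counterfactual_2010):.0%}")
```

With a dependent, both single-rate proposals pay less than the current 75 percent augmented rate; without one, the 66-2/3 percent proposal leaves the benefit unchanged and the 70 percent proposal raises it, which is the pattern behind the figure 1 results described above.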
In both cases, the wage replacement rates account for missed income growth, as they are simulated based on 2010 take-home pay. All else equal, FECA beneficiaries who would have experienced more income growth—from the time of injury through 2010—had lower wage replacement rates than did those beneficiaries who would have experienced less income growth absent their injury. In general, USPS beneficiaries missed less income growth due to their injury than did non-USPS beneficiaries. Consequently, USPS beneficiaries had higher wage replacement rates than non-USPS beneficiaries. For example, 4 out of 5 USPS beneficiaries in our analysis would have had less than 10 percent income growth had they never been injured. In contrast, 2 out of 5 non-USPS beneficiaries would have had less than 10 percent income growth, absent an injury. Under our simulations, both proposals increased the difference in wage replacement rates between beneficiaries with and without a dependent, increasing the magnitude and reversing the direction of the difference in median wage replacement rates, as shown in figure 2. Had we been able to account for the actual number of dependents, beneficiaries with dependents would have had lower wage replacement rates and thus the difference between median wage replacement rates would have been smaller under FECA and larger under both proposals. For other beneficiary subgroups we examined for our 2012 reports, the proposals did not reduce wage replacement rates disproportionately to the reduction in the overall median. However, we found that under the current FECA program and both proposals, wage replacement rates for some beneficiaries, such as those who, due to injury earlier in their careers, missed out on substantial income growth, were substantially lower than the overall median. FECA was not designed to account for missed income growth, and thus total-disability beneficiaries who missed substantial income growth had lower wage replacement rates—outweighing the cumulative effect of FECA's annual cost-of-living adjustments—as shown in figure 3. According to the retirement simulations from our 2012 reports comparing current FECA benefits to FERS benefits, we found that the overall median FECA benefit package (FECA benefits and TSP annuity) for both USPS and non-USPS FECA beneficiaries was greater than the current median FERS retirement benefit package (FERS annuity, TSP annuity, and Social Security). Specifically, the median FECA benefit package for non-USPS beneficiaries was 32 percent greater than the current median FERS benefit package—and 37 percent greater for USPS FECA beneficiaries. This implies that in retirement, FECA beneficiaries generally had greater income from FECA and their TSP in comparison to the FERS benefits they would have received absent an injury. As we reported in 2012, although the overall median FECA benefit was substantially higher than the median FERS benefit for 2010 annuitants, the difference between the two varies based on years of service. Our simulation showed that median FECA benefit packages were consistently greater than median FERS benefit packages across varying years of service; however, the gap between the two benefits narrowed as years of service increased. This occurred in large part because FERS benefits increase substantially with additional years of service.
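The sketch below illustrates, for a hypothetical worker, why the gap narrows: FECA benefits do not change at retirement, while every component of the FERS package grows with years of service. The salary, TSP, and Social Security figures, the linear scaling rules, and the 1 percent annuity factor are all illustrative assumptions, and the sketch also includes the proposal, discussed below, to reduce FECA at retirement age to 50 percent of wages at injury.

```python
# Hypothetical comparison of retirement-age income under (a) current FECA,
# (b) the proposal to reduce FECA at retirement age to 50 percent of wages at
# injury, and (c) the FERS package the worker would have earned absent an
# injury. Every dollar figure and scaling rule below is an illustrative
# assumption, not a value taken from the report's simulations.

def fers_annuity(high3_salary, years, factor=0.01):
    """Basic FERS annuity: factor x high-3 average salary x years of service."""
    return high3_salary * years * factor

def feca_package(wage_at_injury, tsp_annuity, rate, cola_factor):
    """FECA benefits do not change at retirement; add the TSP annuity bought
    with contributions made before the injury."""
    return wage_at_injury * rate * cola_factor + tsp_annuity

wage_at_injury = 50_000
high3 = 65_000                  # assumed high-3 average salary absent the injury
tsp_pre_injury_annuity = 2_000  # assumed TSP annuity from pre-injury savings only

for years in (10, 20, 30):
    # Assume every FERS component scales with service (simple linear rules).
    fers = (fers_annuity(high3, years)
            + 250 * years       # assumed TSP annuity from a full working career
            + 550 * years)      # assumed Social Security from federal service
    current = feca_package(wage_at_injury, tsp_pre_injury_annuity,
                           rate=0.75, cola_factor=1.15)
    # COLA treatment of the reduced benefit is not modeled here.
    reduced = feca_package(wage_at_injury, tsp_pre_injury_annuity,
                           rate=0.50, cola_factor=1.0)
    print(f"{years} years of service: current FECA package {current:,.0f} | "
          f"reduced FECA package {reduced:,.0f} | FERS package {fers:,.0f}")
```

Even with invented numbers, the mechanism is visible: the FECA package is flat with respect to service while every FERS component grows with it, so the gap narrows as careers lengthen, and the reduced benefit falls below the foregone FERS package for all but the shortest careers.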
For example, under our simulation non-USPS beneficiaries whose total federal career would have spanned less than 10 years had a median FECA benefit that was about 46 percent greater than the corresponding FERS benefit. In contrast, non-USPS beneficiaries whose total federal career would have spanned 25 to 29 years had a median FECA benefit that was 16 percent greater than the corresponding FERS benefit. For USPS beneficiaries, those whose total federal career would have spanned less than 10 years had a median FECA benefit that was about 65 percent greater than the corresponding FERS benefit, while beneficiaries whose total federal career would have spanned between 20 and 24 years had a median FECA benefit that was 23 percent greater than the corresponding FERS benefit. Based on the simulations we conducted for our 2012 reports, we found that reducing FECA benefits once beneficiaries reach retirement age to 50 percent of wages at the time of injury would result in an overall median for the reduced FECA benefit package (reduced FECA plus the TSP) that is about 6 percent less than the median FERS benefit package for non- USPS annuitants. Under our simulation for USPS annuitants, the reduced FECA benefit package would be approximately equal to the median 2010 FERS benefit package. This implies that under the proposed reduction, both USPS and non-USPS FECA beneficiaries would have similar income from their FECA benefit package in comparison to their foregone FERS benefit. In addition, under our simulation reduced FECA benefits were similar or less than FERS benefits across varying years of service. However, as years of service increase, the gap between the two benefits widened. For example, we found that non-USPS beneficiaries whose total federal career would have spanned less than 10 years had a median reduced FECA benefit that was about 2 percent greater than the corresponding FERS benefit. In contrast, those non-USPS beneficiaries whose total federal career would have spanned 25 to 29 years had a median reduced FECA benefit that was 19 percent less than the corresponding FERS benefit. Similarly, USPS beneficiaries whose total federal career would have spanned less than 10 years had a median reduced FECA benefit that was about 13 percent greater than the corresponding FERS benefit. In contrast, USPS beneficiaries whose total federal career would have spanned 25 to 29 years had a median reduced FECA benefit that was 20 percent less than the corresponding FERS benefit. When we conducted simulations for our 2012 reports, FERS had only been in place for 26 years in 2010, and therefore our simulation did not capture the “mature” FERS benefit that an annuitant could accrue with more years of service. Consequently, it is likely that our analysis understated the potential FERS benefit when we considered 2010 benefit levels. As a result, we conducted a simulation of a “mature” FERS that was coupled with the assumption that individuals have 30-year federal careers. Based on this simulation, we found that the median current FECA benefit packages for non-USPS beneficiaries were on par or less than the median FERS benefit package—depending on the amount an individual contributes toward their TSP account for retirement. As shown on the right sides of figures 4 and 5, under the default scenario where there is no employee contribution and the employing agency contributes 1 percent to TSP, the median FECA benefit package is about 1 percent greater than the median FERS benefit package. 
However, under a scenario where each employee contributes 5 percent—and receives a 5 percent agency match—the median FECA benefit package is about 10 percent less than the median FERS benefit package. Similarly, our simulation showed that for USPS annuitants, the median FECA benefit package was about 13 percent greater than the median FERS benefit package under the 1 percent agency contribution scenario, and about 4 percent less than the median FERS benefit package under the 10 percent contribution scenario. Our simulation also found that, for both non-USPS and USPS annuitants, the median reduced FECA benefit package under the proposed changes was less than the median FERS benefit package—regardless of the simulated contributions to TSP accounts. Specifically, under a scenario where there is no employee contribution—and a 1 percent contribution from the employing agency—the median reduced FECA benefit package is about 31 percent less than the median FERS benefit package for non-USPS annuitants and 22 percent less than the median FERS benefit package for USPS annuitants. Under a scenario where each employee contributes 5 percent—and receives a 5 percent agency match—the median reduced FECA benefit package is about 35 percent less than the FERS benefit package for non-USPS annuitants and about 29 percent less than the FERS benefit package for USPS annuitants. As we reported in 2012, partial-disability beneficiaries are fundamentally different from total-disability beneficiaries, as they receive reduced benefits based on their potential to be re-employed and have work earnings. However, there is limited information available about the overall population of partial-disability beneficiaries. They do not all find work and their participation in the workforce may change over time, and their individual experiences will determine how they would fare under the proposed revisions. As we reported in 2012, we found that partial-disability beneficiaries in the case studies we examined fared differently under both FECA and the proposed revisions to pre-retirement compensation, depending on the extent to which they had work earnings in addition to their FECA benefits. To consider this larger context, we conducted total income comparisons for the partial-disability case studies we examined. We defined the post-injury total income comparison to be the sum of post-injury FECA benefits and any gross earnings from employment at the time of the LWEC decision, as a percentage of pre-injury gross income. Among the seven partial-disability case studies we examined, those beneficiaries with constructed earnings LWECs had post-injury total income comparisons that were substantially less than those with actual earnings LWECs. As shown in table 2, the beneficiaries in case studies 5 to 7 had constructed earnings LWECs and had post-injury total incomes that ranged from 29 to 65 percent of their pre-injury income under the current FECA program. This range was substantially lower than the total income comparisons for the beneficiaries in case studies 1 to 4 with actual earnings LWECs (77 to 96 percent). We found that by definition, at the time of their LWEC decision, those beneficiaries with constructed earnings LWECs earned less than the income OWCP used to calculate their LWECs. Consequently, their total income comparisons—FECA benefits plus earnings, as a percentage of pre-injury wages—are necessarily lower than those with actual earnings LWECs.
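The post-injury total income comparison defined above is a simple ratio. The sketch below restates it for two hypothetical beneficiaries, one with an actual-earnings LWEC and one with a constructed-earnings LWEC; the dollar figures are invented for illustration and are not drawn from the seven case studies.

```python
# Illustrative only: the post-injury total income comparison is
# (FECA benefits + gross earnings at the LWEC decision) / pre-injury gross income.

def total_income_comparison(feca_benefits, earnings_at_lwec, pre_injury_income):
    """Post-injury income (FECA plus earnings) as a share of pre-injury gross income."""
    return (feca_benefits + earnings_at_lwec) / pre_injury_income

# Hypothetical beneficiary with an actual-earnings LWEC: earning $30,000 at the
# time of the LWEC decision and receiving $18,000 in FECA benefits, against
# $55,000 in pre-injury gross income.
actual = total_income_comparison(18_000, 30_000, 55_000)

# Hypothetical beneficiary with a constructed-earnings LWEC: OWCP based the LWEC
# on a job paying $30,000, but the beneficiary's real earnings are zero.
constructed = total_income_comparison(18_000, 0, 55_000)

print(f"actual-earnings LWEC:      {actual:.0%} of pre-injury income")
print(f"constructed-earnings LWEC: {constructed:.0%} of pre-injury income")
```

Because a beneficiary with a constructed-earnings LWEC has a FECA benefit reduced as though the constructed amount were being earned, but no actual earnings to add to it, the resulting ratio is necessarily lower, which is the pattern shown in table 2.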
We also found that beneficiaries in our case studies were affected differently by the proposed revisions to pre-retirement benefits. As expected, the beneficiaries who did not have a dependent (case studies 2, 4, and 7) experienced either slight increases or no change in their post-injury total income comparisons under the proposed revisions to pre-retirement benefits. Under both proposals, the beneficiaries in our case studies who had a dependent (case studies 1, 3, 5, and 6) experienced declines in their post-injury total income comparisons. These decreases in total income comparisons, however, were relatively small compared to the impact of not having actual earnings. For instance, the beneficiary with a constructed earnings LWEC in case study 6 experienced declines in total income comparisons of about 3 to 4 percentage points between the current FECA program and the proposals. However, the beneficiary's total income comparisons under the current FECA program and the proposals were over 30 percentage points lower than those of the beneficiary in case study 3 who had the lowest total income comparisons of those beneficiaries with actual earnings LWECs. Due to the importance of actual work earnings to partial-disability beneficiaries' situations, we have previously concluded that a snapshot of post-injury total income comparisons is insufficient to predict how beneficiaries fare over the remainder of their post-injury careers. Employment at the time of OWCP's LWEC decision does not necessarily imply stable employment over time, as beneficiaries can find, change, or lose jobs over time. We have also found that the proposals to reduce FECA benefits at retirement age would primarily affect those partial-disability beneficiaries who continue to receive FECA benefits past retirement age. As we reported in December 2012, among those partial-disability beneficiaries who stopped receiving FECA benefits in 2005-2011, 68 percent did so due to their election of Office of Personnel Management (OPM) retirement or other benefits, such as Veterans Affairs disability benefits. At that time, DOL officials told us that because many variables affect retirement benefits, they cannot predict why partial-disability beneficiaries would potentially choose to retire instead of continuing to receive FECA benefits. Only 17 percent of partial-disability beneficiaries who stopped receiving FECA benefits were beneficiaries who died (i.e., received benefits from injury until death). These aggregate numbers do not track individual beneficiaries' decisions to elect retirement or to continue receiving FECA benefits past retirement age, but they suggest that there is a substantial percentage of partial-disability beneficiaries that elects other benefits instead of FECA at some point post-injury. Since those beneficiaries who elect FERS retirement would not be affected by the proposed revisions to FECA compensation at retirement age, the overall effects of the proposals on partial-disability beneficiaries should be considered in the larger context of retirement options. To do so, in our December 2012 report, we used data from the seven partial-disability case studies to simulate and compare FERS and FECA benefits and to highlight various retirement options these partial-disability beneficiaries may face.
As shown in table 3, we found: The beneficiaries in case studies 2, 4, and 6 had potential FERS benefit packages that were higher than their FECA benefits under the current FECA program and the proposed revision—they would likely not be affected by the proposed revision. The beneficiaries in case studies 1, 3, and 7 had potential FERS benefit packages that were lower than their FECA benefits under the current FECA program and the proposed revision—they would likely face a reduction in FECA benefits in retirement under the proposed revision. The beneficiary in case study 5 had a potential FERS benefit package that was lower than his FECA benefits under the current FECA program, but higher than his benefits under the proposed FECA reduction—he would likely face a reduction in FECA benefits in retirement under the proposed revision. Based on our prior work, we have concluded that the differences in retirement options that individual beneficiaries face stem from two key factors: (1) OWCP's determination of their earning capacities, and (2) their total years of federal service. Partial-disability beneficiaries with greater potential for earnings from work receive relatively lower FECA benefits to account for their relatively lower loss of wage earning capacity, all else equal. In table 3, beneficiaries with: low earning capacities post-injury (case studies 1, 3, and 5) had FECA benefits that were more favorable than FERS benefits; high earning capacities post-injury (case studies 2 and 4) had FECA benefits that were less favorable than FERS benefits; and mid-range earning capacities post-injury (case studies 6 and 7) had FECA benefits whose favorability depended on their total years of federal service. Fewer years of federal service resulted in a lower FERS annuity and lower Social Security benefits attributable to federal service, all else equal. We have also found that partial-disability beneficiaries who choose to remain on FECA past retirement age currently face lower FECA benefits in retirement as compared with total-disability beneficiaries, and would experience a reduction in benefits under the proposals. Partial-disability beneficiaries receive FECA benefits that are lower than those of otherwise identical total-disability beneficiaries to account for their potential for work earnings. As long as they work, their income consists of their earnings and their FECA benefits. However, once they choose to retire, partial-disability beneficiaries who choose to stay on FECA likely no longer have any work earnings and are not eligible to simultaneously receive their FERS annuity. Thus, we found that because of the way FECA benefits are currently calculated, such partial-disability beneficiaries may have less income in retirement than otherwise identical total-disability beneficiaries, and the proposals would reduce benefits in retirement without differentiating between partial- and total-disability beneficiaries. The proposed reduction may serve as a long-term incentive for partial-disability beneficiaries to return to work, particularly because their initial FECA benefits are lower than those of total-disability beneficiaries. In conclusion, FECA continues to play a vital role in providing compensation to federal employees who are unable to work because of injuries sustained while performing their federal duties, and FECA benefits generally serve as the exclusive remedy for being injured on the job.
Our simulations of the potential effects of proposed changes to FECA benefit levels incorporated the kinds of approaches used in the literature on assessing benefit adequacy for workers’ compensation programs, such as accounting for missed career growth. More specifically, we assessed the proposed changes by simulating the level of take-home pay or retirement benefits FECA beneficiaries would have received if they had not been injured, which provides a realistic basis for assessing how beneficiaries may be affected. However, we did not recommend any particular level of benefit adequacy. As policymakers assess proposed changes to FECA benefit levels, they will implicitly be making decisions about what constitutes an adequate level of benefits for FECA beneficiaries before and after they reach retirement age. While our analyses focused on how the median FECA beneficiary might be affected by proposed changes, it also highlighted how potential effects may vary for different subpopulations of beneficiaries, which can assist policymakers as they consider such changes to the FECA program. Apart from proposed changes to FECA benefit levels, the legislative proposal in the 2016 congressional budget justification for OWCP also seeks to strengthen FECA program integrity. As policymakers examine proposed changes to reduce improper payments in the FECA program, they should consider granting DOL authority to access wage data so the agency does not have to rely on self-reported income data, as our prior work has recommended. Chairman Walberg, Ranking Member Wilson, and Members of the Subcommittee, this concludes my prepared statement and I would be happy to answer any questions that you may have at this time. For further information regarding this testimony, please contact Andrew Sherrill at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include: Nagla’a El-Hodiri (Assistant Director); Michael Kniss (Analyst-in-Charge), James Bennett, Jessica Botsford, Holly Dye, Kathy Leslie, James Rebbe, and Walter Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The FECA program, administered by the Department of Labor, provides wage-loss compensation to federal workers who sustained injuries or illnesses while performing federal duties. Benefits are adjusted for inflation and are not taxed nor subject to age restrictions. Initial FECA benefits are set at 75 percent of gross wages at the time of injury for beneficiaries with eligible dependents and 66-2/3 for those without. Some policymakers have raised questions about the level of FECA benefits, especially compared to federal retirement benefits. Prior proposals to revise FECA for future total- and partial-disability beneficiaries have included: setting initial FECA benefits at a single rate, regardless of whether the beneficiary has eligible dependents; and converting FECA benefits to 50 percent of applicable wages at time of injury—adjusted for inflation—once beneficiaries reach full Social Security retirement age. This testimony presents results from GAO reports issued in fiscal year 2013. It summarizes (1) potential effects of the proposals to compensate total-disability FECA beneficiaries at a single rate; (2) potential effects of the proposal to reduce FECA benefits to 50 percent of applicable wages at full Social Security retirement age for total-disability beneficiaries; and (3) how partial-disability beneficiaries might fare under the proposed changes. For this work, GAO conducted simulations for USPS and non-USPS beneficiaries comparing FECA benefits to income (take-home pay or retirement benefits) a beneficiary would have had absent an injury and conducted case studies of partial-disability beneficiaries. In 2012, GAO ran simulations to analyze proposals—similar to a proposal discussed in the Department of Labor's 2016 budget justification—to set initial Federal Employees' Compensation Act (FECA) benefits at a single compensation rate. GAO found that the proposals reduced the median wage replacement rates—the percentage of take-home pay replaced by FECA—for total-disability beneficiaries. Specifically, according to GAO's simulation, in 2010 under the existing FECA program, the median wage replacement rates were 88 percent for U.S. Postal Service (USPS) beneficiaries and 80 percent for non-USPS beneficiaries. The proposal to use a single rate of 70 percent to compensate both those with and without dependents would reduce the beneficiaries' median wage replacement rates by 3 to 4 percentage points. The proposal to use a single rate of 66-2/3 percent would reduce the beneficiaries' median wage replacement rates by 7 to 8 percentage points. The simulations for GAO's 2012 reports also found that proposals to reduce FECA benefits upon reaching Social Security retirement age would reduce beneficiaries' retirement income, bringing the median FECA benefits on par with or below the median retirement incomes individuals would have received absent their injuries. In simulations comparing FECA benefits to retirement benefits—both in 2010—GAO found that under the existing FECA program, the median FECA benefit package for total-disability retirement-age beneficiaries was 37 and 32 percent greater than the median 2010 retirement benefit package for USPS and non-USPS beneficiaries, respectively. This analysis focused on individuals covered under the Federal Employees Retirement System (FERS), which generally covers employees first hired in 1984 or later. 
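The retirement-age comparison summarized here, and continued in the next paragraph, can be sketched in simplified form. The example below is not GAO's simulation: it uses invented dollar amounts, approximates the FERS basic annuity with the standard formula of roughly 1 percent of high-3 average salary per year of service, holds the assumed TSP annuity and Social Security benefit fixed even though both also depend on the length of a federal career, and ignores cost-of-living adjustments and taxes. Its only purpose is to show why the gap between a reduced FECA benefit, which is fixed at 50 percent of wages at injury, and a FERS package tends to widen as years of service increase.

```python
# Rough sketch, not GAO's simulation model: compares, for one hypothetical
# annuitant, the proposed reduced FECA package (50 percent of wages at injury
# plus a TSP annuity) with a FERS package (basic annuity of roughly 1 percent
# of high-3 average salary per year of service, plus a TSP annuity and an
# assumed Social Security benefit). All dollar inputs are invented; COLAs and
# taxes are ignored, and Social Security is held fixed even though it would
# also grow with a longer federal career.

def reduced_feca_package(wages_at_injury, tsp_annuity):
    """Proposed retirement-age FECA benefit plus a TSP annuity."""
    return 0.50 * wages_at_injury + tsp_annuity

def fers_package(high3_salary, years_of_service, tsp_annuity, social_security):
    """FERS basic annuity (about 1% of high-3 per year of service) plus TSP and Social Security."""
    return 0.01 * high3_salary * years_of_service + tsp_annuity + social_security

wages_at_injury, high3 = 50_000, 60_000        # hypothetical salaries
tsp_annuity, social_security = 6_000, 16_000   # hypothetical annual amounts
for years in (10, 20, 30):
    feca = reduced_feca_package(wages_at_injury, tsp_annuity)
    fers = fers_package(high3, years, tsp_annuity, social_security)
    print(f"{years} years of service: reduced FECA ${feca:,.0f} vs FERS ${fers:,.0f}")
```

With these hypothetical inputs, the reduced FECA package starts out slightly ahead at 10 years of service and falls behind the FERS package as the career lengthens, mirroring the direction, though not the magnitudes, of the simulation results described above.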
GAO found that the proposal to reduce FECA benefits at the full Social Security retirement age would result in a median FECA package roughly equal to the median 2010 FERS retirement package. However, the median years of service for the FERS annuitants GAO analyzed were 16 to 18 years—which did not constitute a mature FERS system—so these simulations understated the future FERS benefit level. GAO then simulated a mature FERS system—intended to reflect benefits of workers with 30-year careers—and found that the median FECA benefit package under the proposed change would be 22 to 35 percent less than the median FERS retirement package. The potential effects of the proposed changes to FECA on partial-disability beneficiaries would vary based on individual circumstances. Partial-disability beneficiaries differ fundamentally from total-disability beneficiaries, as they receive reduced FECA benefits based on a determination of their post-injury earning capacity. GAO's seven case studies of partial-disability beneficiaries showed variation based on characteristics such as earning capacity and actual earnings. For example, beneficiaries with high earning capacities based on actual earnings might elect to retire under FERS if their potential retirement benefits were higher than their current or reduced FECA benefit levels. They would, thus, not be affected by the proposed changes. In contrast, those beneficiaries with low earning capacities who might remain on FECA past retirement age would have their benefits reduced under the proposed change.
You are an expert at summarizing long articles. Proceed to summarize the following text: Defense considered appropriate. Since that time, the joint non-lethal weapons program, with the Commandant of the Marine Corps assigned executive agent, has assumed a role in the requirements development process as well as research, development, test and evaluation. DOD Directive 3000.3 assigns the Under Secretary of Defense for Acquisition, Technology, and Logistics principal oversight for the DOD Non-Lethal Weapons Program, including joint service program coordination to help highlight and prevent duplication of program development, while the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, under the Under Secretary of Defense for Policy, has policy oversight for the development and employment of non-lethal weapons. The Commandant of the Marine Corps is the executive agent for the DOD Non-Lethal Weapons Program. The executive agent serves as the primary DOD/U.S. Coast Guard point of contact for non-lethal weapons and is tasked to coordinate, integrate, review, and recommend the Joint Non-Lethal Weapons Program to the Under Secretary of Defense for Acquisition, Technology, and Logistics, and to coordinate requirements among the services. Such action is to be taken in consultation with the services, combatant commanders, DOD agencies, and the U.S. Coast Guard. Within the Commandant of the Marine Corps' command, the Joint Non-Lethal Weapons Directorate (JNLWD) was established to perform the day-to-day management of the NLW program. The JNLWD's director reports to the executive agent and, in addition to overseeing joint development efforts which are led by a designated service, monitors the status of service-unique NLW programs. The directorate consists of three divisions—Concepts and Requirements, Technology, and Acquisition—and a support branch. The JNLWD provides program research and development funds, but the services are responsible for acquisition program management in accordance with applicable instructions and regulations. The services also established a joint coordination and integration group comprised of representatives from each service as well as U.S. Special Operations Command and the U.S. Coast Guard, whose principal role is to advise on and assist in NLW system acquisition. The structure of the program is illustrated in figure 1. To shift joint planning to a capabilities-based model, DOD implemented JCIDS in 2003 as the department's principal process for identifying, assessing, and prioritizing joint military capabilities. The requirements and acquisition systems interlock to create products that are intended to meet DOD's needs. DOD's oversight of its systems acquisitions is described in a set of documents that provide the policies and guidance for departmental efforts to acquire service capabilities and systems. As figure 2 illustrates, the acquisition process consists of five phases, and at certain points the designated individual with overall responsibility for the program (known as the milestone decision authority) reviews the status of the effort and decides whether to approve entry into the next phase of the acquisition process.
The Materiel Solution Analysis phase begins with the Materiel Development Decision review, "at which point the Joint Staff shall present the JROC recommendations and the DOD component presents the [Initial Capabilities Document], including: the preliminary concept of operations, a description of the needed capability, the operational risk, and the basis for determining that non-materiel approaches will not sufficiently mitigate the capability gap." The Technology Development phase begins at Milestone A, when the Milestone Decision Authority has approved a materiel solution and a Technology Development Strategy and has documented the decision in an Acquisition Decision Memorandum. Its purpose is to "reduce technology risk, determine and mature the appropriate set of technologies to be integrated into a full system, and to demonstrate [critical technology elements] on prototypes." The Technology Development Strategy documents a number of things, including "a preliminary acquisition strategy, including overall cost, schedule, and performance goals for the total research and development program," and exit criteria for the Technology Development phase. The Engineering and Manufacturing Development phase begins at Milestone B, when the Milestone Decision Authority approves the Acquisition Strategy and the Acquisition Program Baseline and documents the decision in an Acquisition Decision Memorandum. Its purposes include: to develop a system or an increment of capability; complete full system integration… develop an affordable and executable manufacturing process; ensure operational supportability with particular attention to minimizing the logistics footprint… ensure affordability… and demonstrate system integration, interoperability, safety, and utility. This phase includes a System Capability and Manufacturing Process Demonstration, which ends when "the system meets approved requirements and is demonstrated in its intended environment … manufacturing processes have been effectively demonstrated in a pilot line environment; industrial capabilities are reasonably available; and the system meets or exceeds exit criteria and Milestone C entrance requirements." Successful developmental test and evaluation is also required during this effort. Test and evaluation are used to assess improvements to mission capability and operational support based on user needs. This phase concludes with Milestone C, where the Milestone Decision Authority must commit to the program or decide to end the effort. The purpose of the Production and Deployment phase is to "achieve an operational capability that satisfies mission needs," utilizing operational test and evaluation to determine the effectiveness and suitability of the system.
Criteria for entrance into this phase include: "acceptable performance in developmental test and evaluation and operational assessment… an approved Initial Capabilities Document (if Milestone C is program initiation); an approved Capability Production Document (CPD)… acceptable interoperability; acceptable operational supportability; and demonstration that the system is affordable throughout the life cycle, fully funded, and properly phased for rapid acquisition." The Operations and Support phase is used "to execute a support program that meets materiel readiness and operational support performance requirements, and sustains the system in the most cost-effective manner over its total life cycle." Criteria for entrance into this phase include "an approved CPD; an approved [Life-Cycle Sustainment Plan]; and a successful Full-Rate Production Decision." Life-cycle sustainment includes considerations such as "supply; maintenance; transportation; sustaining engineering… environment, safety, and occupational health; supportability; and interoperability." The fundamental purpose of testing and evaluation is the same for military and commercial products. Testing is the main instrument used to gauge the progress being made when an idea or concept is translated into an actual product. According to DOD guidance, a test is any procedure designed to obtain, verify, or provide data for the evaluation of research and development; progress in accomplishing development objectives; or performance and operational capability of systems, subsystems, components, and equipment items. Evaluation refers to what is learned from a test. The test and evaluation process provides an assessment of the attainment of technical performance, specifications, and system maturity to determine whether systems are operationally effective, suitable, and survivable for intended use. Testing and evaluation is used at a variety of levels, including basic technology, components and subsystems, and a complete system or product. The fundamental purpose of testing and evaluation is to provide knowledge to assist in managing the risks involved in developing, producing, operating, and sustaining systems and capabilities. In both DOD and commercial firms, product testing is conducted by organizations separate from those responsible for managing product development. Standard fielding processes require extensive testing for new programs. For example, the Army's process to certify weapons for normal fielding requires several different kinds of assessment before a weapon can be fielded. Environmental and airworthiness statements are required. Results of user safety reviews, inspections, and analyses are also required, in addition to a safety confirmation from the Army Test and Evaluation Command. The Army Test and Evaluation Command must also provide an operational test report. Operational test and evaluation is conducted to estimate a system's operational effectiveness and operational suitability. The testing agency will identify needed modifications; provide information on tactics, doctrine, organizations and personnel requirements; and evaluate the system's logistic supportability. Thorough and complete testing not only provides assurance that weapons achieve the desired results intended by design, but also allows decision makers in charge of fielding determinations some level of confidence that their selections will perform as advertised.
By contrast, an urgent or abbreviated fielding decision allows much of this testing to be bypassed when there is an established and immediate operational need, and then usually only user safety testing is required. For example, one Marine Corps policy states that abbreviated acquisition programs—which the policy identifies as generally small, low cost and low risk programs—are not required to undergo operational testing. Furthermore, another Marine Corps policy states that with appropriate commanding general authority, all testing can be waived, allowing weapons to be fielded in limited quantities to meet urgent operational requirements. Testing for commercial-off-the-shelf items can be even more limited than testing for those that are urgently fielded. DOD guidance states that—in order to take advantage of reduced acquisition time and to ensure that testing is not redundant and is limited to the minimum effort necessary to obtain the required data—"testing can be minimized by 1) obtaining and assessing contractor test results; 2) obtaining usage and failure data from other customers of the item; 3) observing contractor testing; 4) obtaining test results from independent test organizations (e.g., Underwriter's Laboratory); and 5) verifying selected contractor test data." Agency officials must determine that a contractor's test results are sufficient before making the decision to use those test results instead of conducting their own tests. In 1998 the JNLWD contracted The Pennsylvania State University to convene the Human Effects Advisory Panel, a group of scientists who provide assessments of NLW. The panel issued recommendations on the following subjects: a quantitative definition of "non-lethal" and other associated terms, including incapacitation; an assessment of DOD's methods to generate and verify human effects; an evaluation of DOD's methodology to generate and validate data; and an evaluation of data to support NLW effect analysis. The Human Effects Advisory Panel report concluded that there was a knowledge gap between the expectations of the warfighter and the information provided by the scientific community's simulation tools. In response to the panel's recommendations, the JNLW Integrated Process Team Chairman directed that the Human Effects Process Action Team be formed and requested membership from all of the services. The Human Effects Process Action Team was chartered to study the deficiencies in the process of understanding NLW human effects and to recommend policy changes that will help resolve these issues. The team examined current processes for evaluating NLW human effects and made three primary recommendations to DOD: (1) establish an independent board to review the human effects assessments accompanying NLW systems and to ensure that all reasonable assessments have been performed based on available technology and resources; (2) create an NLW Human Effects Center of Excellence to serve as NLW program managers' one-stop resource for information on human effects testing; and (3) adopt a risk assessment approach to evaluating the NLW human effects data due to the uncertainties involved with the science of human effects characterization. The first two recommendations have been implemented. In 2001, the Human Effects Center of Excellence was created via a memorandum of agreement between the Air Force Research Laboratory and the Joint Non-Lethal Weapons Program.
The center was founded to provide assistance and advice to program managers concerning likely effects of non-lethal technologies and the risks associated with those effects. The center also serves as a central location for non-lethal human effects data and provides recommendations on which laboratories or field activities can collect scientifically derived information when such information is not already available. The Human Effects Review Board was established in 2000 to independently review non-lethal human effects research and analyses associated with specific NLW systems or technologies. The board consists of representatives from the services’ offices of the Surgeons General, the Medical Officer of the Marine Corps, and the services’ Safety Officers and includes legal and DOD policy participation. The board provides NLW program managers and milestone decision authorities with an independent measure of health risks and recommendations for mitigating potential risks. The Joint Non-Lethal Weapons Program has conducted more than 50 research and development efforts and spent at least $386 million since 1997, but it has not developed any new weapons, and the military services have fielded 4 items stemming from these efforts that only partially fill some capability gaps identified since 1998. Among the contributing factors, we found that DOD did not prioritize departmentwide non-lethal capability gaps until 2007 and still does not have efforts under way to fully address these gaps, that DOD did not give consistent consideration to logistics and supportability in its NLW development process; and that DOD exercises limited general oversight of the program. The Joint Non-Lethal Weapons Program is sponsoring efforts that address about two-thirds of DOD’s NLW capability gaps, but even those efforts provide incomplete solutions, according to the current joint capability assessment. Under the JCIDS process, formal capability assessments are to be used for identifying gaps in military capabilities and potential material and nonmateriel solutions for filling those gaps. Using this approach, the JNLWD, which sponsored the 2008 Joint Capabilities Document for Joint Non-Lethal Effects, identified 36 capability gaps that represented specific tasks where needs were not met by existing or planned systems. The tasks were categorized as either counter-personnel or counter-materiel, and included numerous variations, such as stopping a vehicle or vessel, suppressing individuals, and denying individuals access to an area, all under varying conditions. The gaps were then prioritized by service and combatant command representatives. The resulting list represented the areas in which Joint Non-Lethal Weapons Program research and development initiatives, service acquisition decisions, and other related resource investments are most needed to satisfy the needs of joint force commanders. Table 1 shows the 36 tasks that were analyzed and found to represent gaps in DOD’s NLW capability, as well as their relative priorities. While DOD is now building on the results of this process to determine how to fill the capability gaps, most of the gaps were already broadly identified 11 years ago. The list of 36 gaps is consistent with needs that were acknowledged in DOD’s 1998 Joint Concept for Non-Lethal Weapons as well as its 2002 Mission Needs Statement for a Family of Non-Lethal Capabilities. 
Though the JNLWD and the services have been working on non-lethal capabilities since 1997, most of these gaps in non-lethal capabilities still exist today. Table 2 compares non-lethal capability needs that DOD has identified prior to the ongoing capability-based assessment process. During the recent capability-based assessment, several NLW efforts were examined, including the 4 programs that have completed the development process and been fielded by one or more of the military services. These programs are 40 mm non-lethal crowd dispersal cartridge, modular crowd control munition, portable vehicle arresting barrier, and vehicle lightweight arresting device. Of these, 3 are variations of or munitions for existing weapons, and the portable vehicle arresting barrier was in an early stage of development when the JNLWD began funding the program. Even when combined with the 12 additional efforts that are ongoing as of March 2009, these programs will not completely eliminate the capability gaps they were designed to address. Existing joint efforts will not fully satisfy all of the tasks, conditions, and standards that DOD analyzed in the process of identifying NLW capability gaps. Based on our analysis of the JNLWD’s program information worksheets and other documents, we found that there are efforts under way to address the top two-thirds of the list of 36 gaps, although we note that there was no comprehensive source that identified each ongoing effort and linked it to the capability gap(s) it addressed. Appendix III provides further detail on the gaps that lack a corresponding effort to address them. Table 3 shows a list of current NLW programs and the priorities of the gaps they are supposed to address. Even though the programs listed in table 3 are intended to address about two-thirds of the capability gaps, they (along with systems already fielded) still only partially meet DOD’s NLW needs, based on the latest joint capability assessment. For example, a vehicle stopper that uses spikes and netting may not cause a quickly moving car to come to a complete stop before it reaches a checkpoint. Therefore, those capability gaps will not be fully addressed and will remain identified gaps. By not assessing and describing the extent to which efforts are expected to satisfy capability gaps, for example, in forums where information on ongoing and proposed programs is presented, the JNLWD has missed an opportunity to fully meet the warfighters’ highest-priority needs for non-lethal capabilities. The military services may also fund their own separate development efforts to address service-unique needs, but with few exceptions have not done so. According to both service and joint program officials, this reflects the low priority that services place on funding non- lethal weapons development. In addition, service NLW proponents have said that the existence of joint funding has made service funding more difficult to obtain. With little progress made toward filling the capability gaps with fielded equipment, joint force commanders continue to lack sufficient non-lethal capabilities. The Joint Coordination and Integration Group, by joint service agreement, is responsible in coordination with the JNLWD for cataloging and tracking progress of programs to include logistics sustainment and logistics requirements planning and ensuring that program managers in the military services conduct appropriate integrated logistic support planning and execution. 
However, DOD has not given timely and consistent consideration to NLW logistics and supportability because fielded NLW items, which have generally been urgently requested, commercially purchased, or both, have not been subject to the logistics requirements of the normal acquisition process. Moreover, the JNLWD does not make the best use of its own tools for assessing the status and progress of NLW efforts. Specific logistics planning procedures vary by service. For example, under Navy acquisitions policies, program managers are required to complete an independent logistics assessment before a research and development effort may advance through the acquisition process to the point, known as Milestone B, at which an acquisition program is formally initiated. However, only 6 ongoing joint directorate-funded NLW efforts have passed Milestone B, of which 4 have reached Milestone C, by which point operational supportability with particular attention to minimizing the logistics footprint should be ensured. Another 18 efforts were terminated for various reasons (one after passing milestone B), and two were advanced concept technology demonstrations, which were not required to follow the normal acquisition process while the demonstrations were under way. One of the advanced concept technology demonstrations pursued directed-energy technology research to develop a NLW that uses millimeter waves to produce an intense heating sensation on the surface of skin, causing an immediate response and movement by target personnel. This effort, which cost about $35.5 million, yielded two prototypes known as Active Denial Systems 1 and 2. The second prototype weighs more than 9 tons, and has been mounted on a heavier vehicle than the first prototype to accommodate additional armor and air-conditioning (see fig. 3). Because of its weight, it is not easily used for missions requiring mobility. This system also needs about 16 hours to cool down to its operating temperature of 4 degrees Kelvin (-452 degrees Fahrenheit), making it difficult to use on short notice unless the compressor is kept continuously running. In addition, the Marine Corps considered this system’s gyrotron, waveguides, super-conducting magnets, antenna, and some other major subsystems too complex to allow extensive field repair, so its utility could be further reduced. Combat damage to the antenna could create a logistics problem as it is a large item making storage and replacement difficult. The Joint Non-lethal Weapons Program sponsored a Concept Exploration Program for crowd-control technologies which published an analysis of multiple concepts in 2003. The report evaluated eight systems using a variety of criteria, including logistics, and found that the Active Denial System received the lowest benefit:cost score of these. JNLWD officials have made multiple attempts to field the Active Denial System under the rubric of an urgent operational need despite the logistics problems noted above and even though, if it is deployed, its mobility could be further limited where highway overpasses are present. In December 2008, the joint NLW program executive agent terminated efforts to deploy Active Denial System 2 overseas. The manner of purchase and fielding can also affect whether NLW undergo full suitability and supportability evaluations. The fielding process normally provides an opportunity to scrutinize items that a service has procured and intends to provide to its personnel. 
They receive a formal certification that they are safe, meet performance requirements, and are logistically supportable when used within stated operational parameters. However, nine NLW systems were purchased from commercial vendors and fielded under urgent processes which allow services to certify materiel on a limited basis in order to rapidly support an operational need. In some cases, logistics weaknesses that might have been uncovered by the normal fielding process were not discovered during the abbreviated analysis that takes place prior to fielding. For example, the FN-303 Less- Lethal Launching System program, which DOD spent about $2 million to evaluate, was terminated because the weapon was too heavy and ergonomically cumbersome, the weapon and ammunition magazine was too fragile, and the weapon required compressed air canisters in order to launch its non-lethal munitions. However, several dozen FN-303s were fielded to units even though their utility was limited by the availability of the canisters and the infrastructure to replenish them (see fig. 4). In addition, human electro-muscular incapacitation is an ongoing program that was initiated in fiscal year 2005. One such device, the TASER® X-26 (see fig. 5), has already been fielded to units—both domestically and overseas—as part of the multiple-item Non-Lethal Capability Set. According to NLW training course materials, however, the TASER® X-26E will not be deployed near flammable materials or liquids, as the arcing from the probes could ignite flammable material. In addition, if it is exposed to significant moisture, operators should dry the weapon thoroughly and wait at least 24 hours before proceeding. We believe that these factors could limit the range of environments in which the X-26E (which, like the X-26, has already been fielded) could be employed. We believe that the JNLWD is missing the opportunity to provide sufficient visibility to logistics concerns in part because it does not make optimum use of available tools to catalog and track progress and in part because most of the efforts it funds have not advanced to the stage where these concerns are paramount. The directorate’s program information worksheet, for example, is one of the means that the directorate uses to gather information from program managers about ongoing and proposed efforts, but it does not include a specific space for the program manager to describe logistics supportability goals and how they will be met. The JNLWD uses the program information worksheet to develop an Investment Decision Support Tool, in which the directorate ranks proposed efforts overall and according to five subcategories: cost, schedule, operational contribution, and technology and human effects readiness levels. In our review of available worksheets, we found that there was not necessarily a direct link between the operational contribution score and logistics concerns. For example, the Active Denial System and Mobility Denial System have both received operational contribution scores of 100 despite their supportability problems. Further, requirements for logistics analysis in preparation for a Milestone B decision are often not yet applicable, since 10 of the efforts that the JNLWD was funding as of March 2009 had not yet advanced to that step. An Army official told us that his senior leadership is beginning to require this earlier in the process for Army programs. 
Without giving full and early consideration to logistics and supportability issues, DOD increases the risk that developmental efforts may not meet service requirements and obtain service funding beyond research and development into acquisition and fielding. While the joint program funds technology research, it is the services that pay to procure, operate, and maintain equipment. DOD also increases its risk of fielding items under urgent processes that are infeasible, difficult to sustain, or both. While DOD has procedures to try to field needed capabilities quickly, these procedures are designed to maximize utility to the warfighter. To the extent that these procedures result in fielding cumbersome or fragile equipment, they may not achieve that goal. Without building these considerations into the earliest stages of development or consideration of commercial off-the-shelf items, DOD may miss opportunities to allocate resources more effectively. DOD’s oversight of the Joint NLW Program, for which the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) has principal oversight responsibility, and for which the Commandant of the Marine Corps has been assigned as Executive Agent, has been limited. This has resulted in gaps in the timeliness and utility of key program guidance as well as limited measurement of progress and performance. A well-managed program, according to federal internal control standards, sets clear and consistent objectives, monitors performance, and ensures that findings of audits and other reviews are promptly resolved. Further complicating DOD’s oversight, no single organization has visibility over all spending categories and available budget information may not fully capture all spending associated with the development of non-lethal capabilities. Both AT&L and the Executive Agent have broad responsibilities for oversight and management of DOD’s NLW program. Although DOD’s NLW policy directive does not specify how AT&L should carry out its oversight of the NLW program, AT&L’s general oversight responsibilities, including the development of acquisition-related plans, strategies, guidance, and assessments, and the principles of good program management are delineated in other DOD directives. According to the 2002 joint service memorandum of agreement, meanwhile, the Executive Agent is supposed to draft, staff, publish, and maintain a master plan that defines the vision, goals, and objectives of the program and includes an overarching framework for research, development, and acquisition as well as modeling and simulation and experimentation plans. However, these plans, along with other key program documents, are outdated and some are currently being revised, as noted in table 4. The Memorandum of Agreement is an agreement among the service chiefs of staff, Commandant of the U.S. Coast Guard, and the Commander of the U.S. Special Operations Command to implement procedures for the NLW program. The 2002 memorandum is outdated, for example, in its lack of provision for oversight of science and technology programs, which the Joint Non-Lethal Weapons Program began to fund in fiscal year 2005. Army and Navy officials have identified science and technology oversight as an issue to be addressed. The Joint NLW Program Master Plan’s purpose is to define the vision, goals, and objectives of the program and it includes an overarching framework for research, development and acquisition as well as modeling and simulation and experimentation plans. 
It is supposed to be updated biennially. The JNLWD started to update this plan but decided to await the 2003 release of JCIDS to accommodate its requirements. During this same time frame, the directorate was tasked to develop the NLW Capabilities Roadmap, to which it has turned its efforts. The Roadmap is designed to assist in the planning process and to support DOD leaders in making informed decisions regarding resources, priorities, and policies for NLW capabilities. While the existing version describes current efforts and lists anticipated milestones for them, it lacks some elements that could be helpful to decision makers. For example, the Roadmap does not provide guidance on how to allocate resources among priority areas, nor does it relate funding to overall DOD policy and strategy or provide guidance about how to evaluate program performance. Program officials recognized that the Roadmap had limitations and began to revise the initial version as soon as it was approved. As of March 2009, the Roadmap is still being revised. A key AT&L official said that NLW program oversight is exercised through participation in the semiannual meetings of two departmentwide NLW groups: the Joint Coordination and Integration Group and the general officer-level Integrated Product Team. The former advises on and assists in NLW system acquisition while the latter coordinates and integrates joint requirement and priorities and approves consolidated plans and programs. In addition, AT&L officials meet quarterly with the JNLWD Director and participate in other activities, such as the development of the Roadmap, as necessary. Notwithstanding the participation in meetings, however, strategic program direction rests on documents that have been delayed or that lack important elements necessary to make effective decisions. As a result, there is limited strategic direction for the program to guide its day- to-day efforts. While DOD’s NLW program lacks the visibility of other programs with higher priorities and larger budgets, sound program management and oversight practices should apply. Without a greater degree of participation in setting program priorities and reviewing and reporting on performance by AT&L, DOD will not have the level of necessary information needed to make informed decisions about the effective and efficient management of the NLW program. We identified 6 cases in which the joint non-lethal weapons program did not make timely decisions about when to discontinue its research efforts when several years have passed without substantive progress. Although not designed specifically for NLW efforts, DOD’s Financial Management Regulation sets a goal of 6 years (within the future years defense program) for a program to advance through advanced technology development into the acquisition process. Based on our analysis of JNLWD programs, three active development efforts and three terminated efforts reached or exceeded this time frame (see table 5). For example, the Airburst Non-Lethal Munition has been under development for the Army since 1999 (at a cost of nearly $15 million) and has yet to be fielded. In another example, the Mobility Denial System— which relied on slippery foam to limit vehicle traction—continued for 8 years (at a cost of about $10 million) before being terminated because it did not meet combat developers’ needs and its extensive water requirement was considered a logistics burden. 
Although the Active Denial concept demonstration only lasted 5 years, active denial technology research projects have been underway since at least 1997, at a combined cost of $55.2 million. According to a JNLWD official, the criteria used to determine when to cancel a program are formulated by the program sponsor, program manager, and joint representatives. They reflect technical, programmatic, and policy objectives to be accomplished, and program decisions are made based on the program’s achievement of these tasks with recommendations from the Joint Coordinating and Integration Group. However, as each program is addressed individually, there are no standard termination criteria that are applied to all NLW programs. The directorate uses an Investment Decision Support Tool to evaluate proposed NLW programs as well as active development programs, scoring them on a scale of 0 (lowest) to 100 (highest). The directorate uses this tool to assist in deciding if certain programs are worthy of JNLWD attention and funding, and its results are briefed to service representatives. However, according to a JNLWD official, the scores programs receive on the tool do not directly correlate to the priority for investments because the tool does not incorporate all factors that decision makers need to consider. For instance, the tool does not include the relative degrees of service and combatant command support, past technical performance, or technological feasibility. Appendix IV presents further detail on all of the JNLWD-sponsored programs currently under development and their most recent total scores on the investment decision support tool. While such a tool may be useful to decision makers, a method that incorporates all of the factors needed to make an informed decision, such as logistics and supportability and exit timeframes and criteria, would be a more effective instrument in allocating limited resources. By continuing to fund over long periods of time programs that have not demonstrated their intended capability or have logistics and supportability challenges, the directorate is encumbering resources that might better be used toward the development of other non-lethal weapons programs and capabilities. Further complicating DOD’s ability to oversee its NLW program is the fact that no single organization has visibility over all spending categories, and available budget information may not fully capture all spending associated with the development of non-lethal capabilities. To identify funding for all DOD NLW programs, the JNLWD could only provide us full budgetary figures for its own programs for science and technology or research, development, test and evaluation. Directorate officials are not assigned oversight over service-unique programs and so could not determine exactly how much the services and other DOD components (such as U.S. Special Operations Command) are spending on their own NLW programs. Conversely, the interservice coordinating groups may not provide all necessary information, as the representatives review and approve the approximately 70 percent of the JNLWD budget that goes to research and development, but do not receive a detailed breakdown of the remaining costs. These include such items as studies and analysis, contract support, and salaries for liaisons to the services and combatant commands. We examined budget documents to ascertain what DOD invested for NLW and we found that—in addition to the JNLWD—all four military services, U.S. 
Special Operations Command, and the Office of the Secretary of Defense have spent money on NLW programs in some capacity during at least some of the last 12 years, but documentation did not always show which non-lethal programs were being funded, nor was it always evident in what year the money was spent. Based on this limited documentation, we identified funding for the military services and other organizations that totaled about $355 million from 1997 through 2008. The JNLWD provided data showing that funding for JNLWD programs totaled about $462 million, giving DOD a total of $817 million budgeted for that time period. The reliability of our estimate of total spending was also affected because DOD budgets do not isolate the portion of weapon procurement budgets that should be attributed to non-lethal effects. Several lethal weapon systems may have non-lethal capabilities. For example, the Army's Spider Anti-Personnel Landmine Alternative system can use non-lethal munitions to deter, rather than destroy, enemy personnel. However, none of the $172.1 million budgeted for the Spider through fiscal year 2009 was listed as part of the NLW program. Table 6 lists several examples of normally lethal weapon programs that have the capability to be used in a non-lethal manner or use non-lethal munitions. We also found that part of what the military services categorized as non-lethal weapons spending included items that were not necessarily developed as NLW. For example, non-lethal capability sets, which account for about $122 million, are being procured and distributed to units overseas as well as to National Guard troops stationed in units in U.S. states and territories, and are packed in several modules that may be tailored according to mission, for example, for checkpoint guards. However, the sets contain such items as riot gear (face shields, shin guards, flex cuffs, etc.) that are not, by DOD's own definition, non-lethal weapons. Rather, they could more accurately be described as personal protective equipment. Because of definitional considerations such as this, we found determining the exact amounts of NLW-related spending to be problematic. DOD plans to spend about $789 million on non-lethal weapons from fiscal years 2009 through 2013. The complex nature of categorizing lethal versus non-lethal weapons and programs makes it all the more important for DOD to have a much clearer understanding of all the programs and investments it is making in NLW. DOD officials told us that they are trying to reach consensus among the services on defining what constitutes a non-lethal weapon in order to more accurately categorize them for budgetary and other purposes. The inability to easily track all money spent specifically on non-lethal capabilities—be they lethal weapons that have non-lethal capabilities or programs that contain items that are not NLW by definition—puts JNLWD and service officials at a distinct disadvantage because they will not have all the information they need to make informed budget decisions. Without adequate oversight, including program direction and visibility of all costs and individual program efforts, the directorate, the services, and DOD at large lack assurance that they are making the most effective use of departmentwide resources and meeting warfighters' needs. DOD has begun to incorporate ideas about non-lethal capabilities into policy, doctrine, and training, but gaps in key policy decisions limit the effectiveness of doctrine changes and subsequent training. 
DOD has not yet clearly defined the acceptable level of risk of fatality, nor has it fully developed weapons employment policies for overseas warfighting or homeland applications or ensured that warfighters and domestic responders are fully trained in NLW use. Without resolving these policy problems, DOD's ability to integrate NLW concepts into doctrine and subsequently train personnel in those operations is limited. DOD published a directive in 1996 establishing policy and assigning responsibilities for the development and employment of non-lethal weapons. According to this directive, non-lethal weapons, doctrine, and concepts of operation are to be designed to reinforce deterrence and expand the range of options available to commanders. Non-lethal weapons are also meant to enhance the capability of U.S. forces to discourage, delay, or prevent hostile actions; limit escalation; take military action in situations where use of lethal force is not the preferred option; better protect the forces; temporarily disable equipment, facilities, and personnel; and help decrease the postconflict costs of reconstruction. DOD has begun to incorporate non-lethal weapons into existing doctrine and concept publications. The joint staff and the services have issued several dozen doctrine publications that cited a need for non-lethal capabilities and began to discuss the importance of developing capabilities that may be applied across the range of military operations, such as for Peace Operations, Urban Operations, and Civil Support. These organizations have also updated publications that describe a need to include NLW as part of the overall use-of-force continuum within planning for such diverse missions as command and control for joint land and maritime operations and joint counterdrug operations. In addition to mentioning the need for NLW capabilities in policy and doctrine, the services and combatant commands also have begun to incorporate non-lethal weapons into their plans and procedures. U.S. Northern Command has developed concept of operations plans for Defense Support to Civil Authorities/Homeland Defense missions. The military services have issued a joint tactics, techniques, and procedures manual and updated the manual in 2007. They have also issued service-specific guidance. For example, the Air Force has a manual tailored to the particular needs of Air Force security forces, and the Army has published a field manual on civil disturbance operations. The Marine Corps has developed mission-essential tasks and individual training standards in support of NLW use and recently issued specific use policy for human electromuscular incapacitation devices such as TASER®. Beginning in 1998, the Marine Corps was designated as the lead service for the Interservice Non-Lethal Individual Weapons Instructor Course—the source for formal NLW instructor training for all of the services—and has also deployed mobile training teams to help facilitate on-site NLW training in Iraq. Although DOD has begun to incorporate NLW concepts throughout its body of operational doctrine, our analysis indicated that most references are limited, recognizing the value of such a capability, the need for it, or both, but not providing additional guidance about how such capabilities should modify existing operational concepts. 
For example, the joint doctrine publication, Doctrine for Joint Urban Operations, includes two paragraphs that describe the potential flexibility offered by non-lethal weapons but offers no other operational perspective or guidance in the 150-page publication. This may limit the utility of information on non-lethal weapons in existing doctrine. One reason for this is that we found gaps in DOD's policy and guidance for NLW. DOD recognizes that policy tends to drive doctrine, and doctrine, in turn, influences training and the execution of operations. Therefore, weaknesses in policy make it difficult to effectively produce or augment the doctrine and training. While DOD has clearly articulated a policy that non-lethal weapons shall not be required to have a zero probability of producing fatalities or permanent injuries, it has not (1) fully articulated what constitutes acceptable risk, (2) fully explained how employment doctrine should vary by scenario, or (3) provided specialized training to enable operators to make effective use of NLW in various contingencies, particularly within the United States. DOD has not reached consensus on how to answer these questions. The directive that establishes DOD policies and assigns responsibilities for the development and employment of non-lethal weapons states that NLW are designed and employed to minimize, rather than eliminate, fatalities. As such, NLW may have lethal effects and therefore carry some level of risk that is not precisely defined. For example, non-lethal is described in a joint functional concept as the degree to which the joint force is able to create desired effects using incapacitating, nonfatal capabilities. In addition, DOD stated as part of the capabilities-based assessment that increasing non-lethality widens the range of effects the joint force is able to achieve without using deadly force, or in the case of defense support to civil authorities, that the avoidance of casualties is imperative. An early assessment by the Human Effects Advisory Panel posited that acceptable risk was 1 percent suffering permanent damage (of which half were lethal), 98 percent incapacitation, and no effect on 1 percent of the population, but senior NLW program officials said that those figures were never considered authoritative and that acceptable risk had not been quantified. DOD has also not fully clarified what constitutes acceptable risk short of fatality. Current DOD policy defines NLW as weapons that are explicitly designed and primarily employed so as to incapacitate, and are intended to have "relatively reversible" effects. Based on the results of the capability-based assessment, the JNLWD has modified its definition of NLW to include weapons, devices, and munitions that are explicitly designed and primarily employed to "immediately incapacitate" targeted personnel or materiel, and that are intended to have "reversible" effects. These changes have not yet been incorporated into overarching DOD policy, although JNLWD officials have said that they are understood within DOD. However, publications are not fully synchronized throughout the department; for example, Army doctrine states that the use of NLW should "temporarily incapacitate." Furthermore, it states that NLW use must not result in "unnecessary suffering" without defining this term in the context of NLW effects. 
Each of these definitions has different implications for NLW development and employment, because as the "dose" of a weapon increases, so does its potential both to achieve the desired effect and to produce permanent injury. Thus, an NLW that is more successful with respect to onset (that is, it immediately incapacitates) could be less so with respect to duration (that is, its effects prove irreversible). Once DOD does finalize a new common definition, though, uncertainty about acceptable results of the use of NLW may still create unrealistic expectations. For example, the Marine Corps prepared an urgent needs statement to support its request to field a laser dazzler, in which it noted that the flares then in use caused injury and one fatality. The Marine Corps requesters wanted a capability that would avoid that outcome. The uncertainty of outcome may lead to expectations that non-lethal actually means never lethal and, moreover, that such weapons will not cause any kind of serious injury. According to a senior NLW program official, the term "non-lethal" itself sets up false expectations, and DOD should establish a concerted strategic communications program to disabuse those who may be the target of such weapons, as well as military users, of the idea that "non-lethal" is risk-free. DOD's interservice NLW training course materials also cite the possibility that political or military leaders might form an incorrect perception that NLW will allow wars and military operations other than war to be prosecuted without casualties. Other federal agencies, whose personnel may use NLW within the United States, manage expectations differently. For example, the Department of Justice uses the term "less-lethal" so as not to create the expectation that certain weapons never produce fatal results. The Department of Homeland Security uses both terms. Congress also defined NLW in a way that may reinforce expectations. In the statute directing DOD to establish centralized responsibility for the development of NLW technology, Congress defined a "non-lethal weapon" as a weapon or instrument the effect of which on human targets is less than fatal. DOD has not been able to provide further clarification on acceptable risk primarily because there is no departmentwide consensus on what constitutes acceptable risk. DOD officials have been discussing the development of a methodology for characterizing acceptable risk that can be applied more specifically to individual non-lethal weapons or devices. They told us that they are nearing agreement within the department on this methodology. However, as of February 2009, the methodology had not been formally approved. The lack of a consistent and clear methodology regarding risk levels can hinder efforts to write formal requirements for materiel solutions to identified capability gaps, without which new products cannot progress through the acquisition process, and can complicate efforts to field NLW that are purchased from commercial vendors, to use them, or both. Until NLW terminology is clarified and fully disseminated, it could continue to create unrealistic expectations that could complicate the efforts of materiel developers, result in inconsistent rules of engagement, and make operational commanders more hesitant to employ any available NLW. 
Moreover, until DOD clarifies its policy on how to assess the risk of fatality or permanent injury it is willing to accept, it will be very difficult to develop, deploy, train for, and use any NLW that have the potential either to be lethal or to create detrimental political effects. Our analysis of DOD's policy and doctrine showed that DOD personnel lack clear guidance about how to employ NLW across the range of military operations, both overseas and domestically. This could be relevant, for example, for a range of missions that involve force but occur in a nominally "peacetime" scenario, or in ambiguous situations across the spectrum of conflict. NLW may be used to determine intent: a warning NLW, such as the laser dazzler, might be used to induce an approaching vehicle, individual, or group to stop, and failure to stop is then assumed to mean hostile intent against which lethal force may be used. However, our review and analysis of existing doctrine showed that it does not provide adequate guidance for employing NLW in ambiguous situations, such as when a vehicle or pedestrian is approaching a checkpoint and intent is not obvious. In some cases, servicemembers might not be able to determine intent until an individual is within a short distance. While there are NLW such as the laser dazzler that can allow troops to attempt to provide warnings over long distances, the existing suite of NLW that can incapacitate an individual is effective only at close range, so servicemembers have limited options. They could use blunt-force munitions or electro-muscular incapacitation devices, both of which are generally ineffective beyond a short range; or wait for an individual to approach to within the range at which these work, at which point effective self-defense may no longer be possible. These policy and weapons employment considerations are important to the warfighter because they represent a balance between safety/risk for U.S. service personnel and safety/risk to individuals or groups targeted by NLW. They are also important to provide context for the application of the standing rules of engagement that apply to military operations, for use where U.S. forces face hostile forces, hostile acts, or demonstrated hostile intent. Concepts of operations for use of DOD forces within the United States consider the avoidance of civilian casualties imperative, and the standing rules for use of force state that, normally, force is to be used only as a last resort and the force used should be the minimum necessary. Deadly force is to be used only when all lesser means have failed or cannot reasonably be employed, which implies that all other options must be exhausted first. By contrast, overarching DOD non-lethal weapons policy states that the existence of NLW in no way constitutes an obligation for their employment and that the United States retains the option for immediate use of lethal weapons. DOD has not issued new guidance or instituted training that reconciles these two stances; for example, National Guard Bureau guidance for the domestic employment of NLW contains little more than a restatement of passages from the DOD directive on non-lethal weapons. DOD also has not directed that any specific pieces of equipment that could produce lethal effects be excluded from the capability sets that have been fielded in the United States. For example, non-lethal capability sets containing TASER® have been fielded to National Guard units in every state. 
TASER® is controversial because of concerns about injuries and fatalities that occurred in the course of its use in law enforcement. The Marine Corps proscribed use of these weapons until it published a policy specific to TASER®, and the Army component of U.S. Central Command decided not to train troops in their use. Furthermore, the joint staff has issued doctrine that states the employment of non-lethal weapons in certain supporting operations will also be governed by their political impact. For such operations, weapons employment policies would need to be developed and disseminated so that the existing rules of engagement and rules for the use of force could be adequately tailored to minimize detrimental political effects resulting from the use of NLW. In particular, DOD policy for employment of directed-energy NLW such as the Active Denial System is incomplete. The Under Secretary of Defense for Policy deemed the system politically untenable for detainee operations but has not yet issued employment policy for other missions. While the Office of the Under Secretary of Defense for Policy has approved the Active Denial System in principle, a former senior policy official wrote that DOD would continue to require the development of definitive concepts of operation; rules of engagement; and tactics, techniques, and procedures before the Active Denial System could be deployed. While the Joint Non-lethal Weapons Program executive agent in December 2008 terminated efforts to deploy the existing Active Denial System overseas, DOD continues to try to find ways to deploy the system in the United States, possibly at the southern border. Unresolved questions about acceptable risk and proper employment guidance can also have an impact on the quality of training that warfighters and domestic responders receive. DOD's Interservice Non-Lethal Individual Weapons Instructor Course provides scenario-based training that can be applied under the standing Rules for the Use of Force and Rules of Engagement that apply to all services, but it cannot integrate realistic training for specific situations into its curriculum unless appropriate policy has been developed. Because currently available NLW have short ranges, reaction time is limited and warfighters will need to make quick decisions, possibly in rapidly changing circumstances. While DOD can and does produce mission-specific rules of engagement, gaps in policy and doctrine limit the training that can be provided prior to deployment. Until these issues are resolved, doctrine and training for non-lethal weapons may be limited, and the warfighter or domestic responder may have few options other than resorting to lethal force. Testing NLW for effects on targets and bystanders is a difficult technological undertaking, in part because human effects testing needs to be done using modeling and surrogates, which may not accurately reflect human responses. Further, almost all NLW have been fielded to date using abbreviated processes because of urgent needs, and as a result, NLW have generally not undergone the level of effects testing required to meet standard fielding requirements. One of the components lacking in DOD's approach to testing NLW is a consistent methodology for assessing the risks of various human effects. While DOD has begun to develop elements of a risk assessment methodology, this methodology cannot be completed until human effects testing requirements are standardized in DOD policy. 
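One way to see why testing that relies on modeling and surrogates supports only limited statistical confidence is to compute a confidence interval for an injury rate estimated from a small number of trials. The Python sketch below uses the standard Wilson score interval; the trial counts and outcomes are assumptions for illustration only and are not actual NLW test data.

import math

def wilson_interval(successes, trials, z=1.96):
    """95 percent Wilson score interval for an observed proportion."""
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = successes / trials
    denom = 1 + z ** 2 / trials
    center = (p + z ** 2 / (2 * trials)) / denom
    half_width = z * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)) / denom
    return center - half_width, center + half_width

# Assumed example: 3 of 20 surrogate (test-dummy) trials produce an injury-level impact.
low, high = wilson_interval(3, 20)
print(f"point estimate 15%, 95% interval roughly {low:.0%} to {high:.0%}")

With only 20 trials the interval spans roughly 5 percent to 36 percent, which is one reason estimates extrapolated from small surrogate samples can support only low-confidence statements about human effects.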
Testing and evaluation that include human effects testing measures could improve planning and commanders' ability to avoid unintended effects of NLW use. The complicated nature of testing the outcomes of NLW use centers on the testing of the effects of these weapons on targets and bystanders—typically referred to as human effects testing. The test and evaluation process provides an assessment of the attainment of technical performance, specifications, and system maturity to determine whether systems are operationally effective, suitable, and survivable for intended use. Unlike conventional lethal weapons that destroy their targets principally through blast, penetration, and fragmentation, NLW are intended to prevent the target from functioning, and their effects are intended to be reversible. Testing of human effects would measure the NLW's likelihood and degree of causing irreversible effects on human targets and bystanders. There are, however, several limitations to human effects testing. Although technology is improving to better test for and predict NLW human effects, DOD policies limit the use of human subjects for testing, and the nature of nonlethality, which is aimed at producing reversible effects, poses challenges to testing accuracy. DOD policy states that "the rights and welfare of human subjects in research supported or conducted by the DOD Components shall be protected." Therefore, when possible, human effects are derived from animal and computer-based models in substitution for direct effects on human subjects. We acknowledge the importance of ensuring the protection of human test subjects but also recognize that the test measures designed and put in place to operate within these restrictions—such as surrogate (i.e., test dummies) and animal tests—face unique challenges to produce accurate and timely test results. Furthermore, according to a DOD human effects testing official, the confidence levels associated with non-lethal effects testing are typically low. For example, the Human Effects Center of Excellence reports that testing non-lethal munitions requires accurately accounting for properties such as the projectile's mass, impact velocity, and shape, as well as the target's size and the impact location. However, the potential for injury to the target varies depending on the weight and size of the target and the accuracy of the NLW projectiles—which are often hard to predict and test for. According to the Human Effects Advisory Panel, there is a knowledge gap between the expectations of the warfighter and the information that is being provided by models and simulation tools from the scientific community. Testing accuracy is inherently limited when extrapolations are based on subjects other than human subjects. The Advanced Total Body Model is an example of a simulation tool used to predict the risks associated with blunt impact weapons and was designed to test for effects upon various body parts (e.g., ribs, abdomen, head-neck). However, simulation tools are not capable of testing for all possible effects derived from an NLW. A broken rib, for example, could result in a punctured lung, which may cause death. According to officials at the Human Effects Center of Excellence, other sources used to test and collect data for non-lethal effects are animals that share similar characteristics with humans. For example, they said chinchillas have inner ear structures similar to those of humans and are used to test for ear damage caused by acoustic NLW such as flash bang grenades. 
However, functional and anatomical differences between human and animal subjects may limit the generalizability of test results to human populations. If suitable testing that models NLW effects on humans is not conducted, then it becomes unclear how and when to use non-lethal weapons given the lack of assurance concerning the effects on the targets and bystanders. Current DOD testing policies do not address testing of NLW effects on human targets and bystanders. DOD has started to draft a policy that—once approved—would establish guidance and procedures for the characterization of target human effects in support of the development of NLW acquisition programs, but this policy has not yet been agreed on within DOD, formally approved, or implemented. Therefore, with the exception of laser-related weapons, human effects testing, including the use of simulation tools and other methods, is currently not required. JNLWD officials told us that all NLW programs that receive JNLWD funding must be reviewed by the Human Effects Review Board before every major acquisition milestone. Service-funded programs are not required to undergo the same review; however, JNLWD officials said they encourage it. Army officials told us that they conduct human effects testing for non-urgently fielded NLW. Further limiting the amount of human effects testing being conducted is the fact that, to date, almost all NLW have been fielded using abbreviated processes to meet urgent operational requirements. When there is an established and immediate operational need, an urgent or abbreviated fielding decision allows DOD to bypass most testing, other than user safety testing, that is normally conducted for weapons fielded through the standard process. For example, the Army's Urgent Materiel Release process requires a safety assessment, but Army officials told us that this safety testing is for the user of the weapon only and does not test for the safety of the target or bystanders. Marine Corps policy states that with appropriate commanding general authority, weapons may be fielded in limited quantities to meet urgent operational requirements, even if all safety requirements are not met. A Marine Corps official told us that a commander's willingness to accept the safety risks associated with an NLW's rapid acquisition, given the urgent need for the weapon in the field, is ultimately what drives NLW deployment. He said that the most complicating factor of streamlining and regulating the acquisition process for NLW from beginning to end is managing this balance between urgency and safety. DOD testing for commercial-off-the-shelf items can be even more limited than for those urgently fielded because agency officials can use contractor test data instead of conducting their own tests. The Defense Acquisition Guidebook recognizes the importance of oversight and government involvement in testing performed by contractors. Any time agency officials decide to use contractor test results without adequate oversight or involvement in the testing, there is an increased risk that the testing was biased, the testing environment was not relevant for the weapon's intended operational use, or the test results were inaccurately represented. One example of a commercial item where use of contractor test data had the potential to lead to unpredictable results was the TASER®. In 2002, the Human Effects Review Board reviewed the TASER® M26 model and submitted an approved but limited fielding recommendation. 
The recommendation was largely based on anecdotal data and field experience gathered over the last 20 years from law enforcement activities where TASER® was predominantly used on male targets. The Human Effects Review Board was concerned about the lack of unbiased, peer-reviewed scientific evidence of TASER® effects and effectiveness necessary to support a stronger endorsement. In 2003, the Army's safety office at Picatinny Arsenal issued a safety certification to support the urgent fielding of the TASER® M26 model. However, the Army did not field the M26 model and instead fielded an even more advanced version of TASER®—the X26E model—that produces a 5 percent increase in muscle contraction compared to the approved M26 because it uses a waveform that is different from that of any preexisting models. Although the Human Effects Review Board did evaluate testing results for the TASER® X26 model in 2008 and determined that the human effects research conducted was sufficient, DOD increased the risks of unintended effects by reviewing testing data after the weapon was already being used in the field. Although testing the effects of NLW is technologically challenging, JNLWD recognizes that human effects, effectiveness, and risk must be quantified in order to support legal, treaty, and policy reviews and to ensure warfighter confidence in new technologies. DOD has begun to develop elements of a risk assessment methodology, but the methodology will not be complete until human effects testing requirements are standardized in DOD policy. In its Risk Management Guide for DOD Acquisition, DOD recognizes that risk management is critical to acquisition program success. In particular, DOD notes the need to define a program by satisfying the user's need within acceptable risk. According to this guide, the purpose of addressing risk in programs is to help ensure that program cost, schedule, and performance objectives are achieved at every stage in the program's life cycle and to communicate to all stakeholders the process for uncovering, determining the scope of, and managing program uncertainties. Although DOD policy does state that NLW shall not be required to have a zero probability of producing fatalities or significant injuries, our review found that the policy does not articulate a methodology for what constitutes acceptable risk of fatality and significant injuries across DOD and the services. Without a better understanding of acceptable risk, NLW developers and designers have no way of knowing whether the risk levels associated with the effects produced by NLW are in compliance with standards and whether NLW developments are progressing sufficiently to meet the needs of the warfighter. Although DOD has not established standardized acceptable levels of risk of fatality and significant injuries, DOD has a draft policy in development for a Risk of Significant Injury scale that characterizes the amount of treatment necessary to reverse the effects of an NLW (see fig. 6). The Risk of Significant Injury scale broadly categorizes three levels of health care capabilities required to reverse the effects of NLW once they are used on targets, but it does not take into account the risk probabilities of injury for each category for a given weapon or target. 
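The kind of probability information the scale currently lacks can be pictured with a short sketch that pairs an injury category with an estimated probability and returns a qualitative risk label a commander could weigh. The Python sketch below is illustrative only: the category descriptions, probability bands, and labels are assumptions, not DOD's draft methodology.

# Assumed descriptions of the three levels; the report gives only the 0-2 scoring.
RSI_CATEGORIES = {
    0: "effects reverse without medical treatment",
    1: "effects require basic or field-level care to reverse",
    2: "effects require advanced medical care to reverse",
}

# Assumed probability bands and qualitative labels: (upper bound, label).
RISK_BANDS = [
    (0.05, "low"),
    (0.15, "low-to-moderate"),
    (0.30, "moderate-to-high"),
    (1.00, "high"),
]

def assess_risk(rsi_category, probability):
    """Pair an RSI category with an estimated probability and return a risk label."""
    if rsi_category not in RSI_CATEGORIES:
        raise ValueError("RSI category must be 0, 1, or 2")
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    for upper_bound, label in RISK_BANDS:
        if probability <= upper_bound:
            return label
    return "high"

# Assumed example: an estimated 15 percent chance of an effect in the most serious category.
print(RSI_CATEGORIES[2], "->", assess_risk(2, 0.15))  # "low-to-moderate" under these bands

A lookup of this kind is what would let a commander see both the severity category and how likely it is before deciding whether that risk is acceptable.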
In other words, the Risk of Significant Injury scale does not assess the likelihood that non-lethal effects could cause a score of 0, 1, or 2 within the scale, and thus does not provide information about what probability to expect for each category of injury. The Human Effects Process Action Team concluded in 2000 that the Human Effects Review Board should make adopting a risk assessment approach to evaluating the NLW human effects data a priority because of the uncertainties involved with the science of human effects characterization. The team—directed by a Marine Corps lieutenant general and composed of each service's acquisition executive and surgeon general—stated that a risk assessment methodology would allow the human effects of NLW to be expressed along with a measure of the confidence in the data. Hypothetically, if testing showed that an NLW carries a 15 percent risk of permanent injury, a risk assessment methodology would allow a risk level (e.g., low-to-moderate or moderate-to-high) to be assigned. A commander can then use this as a basis for a decision on whether to accept that risk. DOD has not yet established a risk assessment methodology for human effects testing that is capable of identifying the potential risks associated with the use of NLW. Without a risk assessment methodology, NLW human effects are not fully understood and cannot accurately be predicted, which may result in unexpected effects upon targets and bystanders and cause detrimental political consequences. DOD recognizes that part of a successful risk management strategy includes sufficient testing and evaluation measures, and DOD also recognizes the importance of assessing operational effectiveness. Nevertheless, testing and evaluation that include human effects testing measures could improve planning and commanders' ability to avoid unintended effects of NLW use. DOD's Non-Lethal Weapons Program is intended to provide U.S. armed forces with flexibilities for dealing with the rapidly changing threat environment, especially when using lethal force is undesirable. However, key aspects of this program, such as assessing the extent to which priority capability gaps will be addressed, focusing on supportability and operational utility in the field, and providing oversight and full funding visibility, have been limited. These problems have contributed to the program's overall limited progress in fielding suitable NLW. New weapons requirements and development are often understandably affected by technology hurdles and a preference to field an item that will partially meet needs quickly rather than to wait indefinitely for a perfect solution. However, we note that 12 years have passed since DOD established the JNLWD and that the services were working on NLW development efforts even before that. Better planning, management, and oversight of NLW developmental efforts to incorporate early consideration of technology readiness, suitability, and supportability could improve the rate of progress. While individual services may attempt to satisfy some service-unique gaps on their own, the measure of a successful joint program will be whether it can successfully foster joint development. Without clearer policy on acceptable risk to both warfighters and potential targets, in both overseas and domestic scenarios, doctrine and training for NLW will continue to be limited. 
Finally, conducting suitable testing and evaluation is complicated in an environment where acceptable alternatives to human testing (animal tests and modeling and simulation) are themselves both limited and inherently difficult to extrapolate to humans. Although DOD recognizes that it needs to develop a risk assessment methodology and has taken steps toward that end, it still lacks the means to predict levels of risk concerning non-lethal effects on targets and bystanders. As a result of all these factors, DOD's NLW program has had limited success in planning, developing, overseeing, and testing effective and efficient weapons. Unless these factors are addressed, the ability of U.S. forces to conduct operations across the full range of potential lethality where and when needed will be hindered, and they will continue to lack the means to escalate force while still achieving non-lethal effects. We recommend that the Secretary of Defense take the following eight actions: To help DOD better match program priorities to identified capability gaps, the Secretary of Defense should direct the JNLWD, in consultation with the services and combatant commanders, to assess and document the extent to which NLW efforts at the technology development stage and beyond (including procurement and operations and maintenance) address the highest-priority Joint Staff-validated capability gaps. To help DOD better incorporate logistics and supportability considerations, the Secretary of Defense should direct the JNLWD, in consultation with the services and combatant commanders, to ensure that appropriate logistics and supportability planning is integrated into development efforts at the earliest possible stage, including both DOD-developed and commercial weapons and capabilities. Incorporating changes to—and using information already gathered for—the JNLWD's Investment Decision Support Tool might assist the directorate and DOD in establishing clear criteria and ensuring progress in this area. To help the Under Secretary of Defense for Acquisition, Technology and Logistics oversee DOD's Joint Non-Lethal Weapons Program, the Secretary of Defense should take the following actions: Require the Under Secretary of Defense for Acquisition, Technology and Logistics, in consultation with the Executive Agent, to ensure that NLW strategic guidance that sets out goals, objectives, and a framework for research, development, and acquisition—including science and technology efforts—is established and routinely updated. Require the Under Secretary of Defense for Acquisition, Technology and Logistics to oversee the development of performance evaluation criteria to guide decisions on how and for how long to allocate resources to research and development efforts. In addition to established DOD financial management regulations, DOD could use existing tools, such as the Investment Decision Support Tool, to help develop and implement these measures. Direct the Under Secretary of Defense for Acquisition, Technology and Logistics to develop and execute a methodology for monitoring all NLW-related funding and programs across DOD and designate a central focal point within that office to coordinate the effort with the JNLWD. 
To help DOD more fully incorporate non-lethal concepts and capabilities into its existing and new policy and doctrine for operations overseas and in the homeland, the Secretary of Defense should direct the Under Secretary of Defense for Policy to articulate a methodology and develop a time frame for determining acceptable risk with respect to lethality and permanent injury for operators, targets, and bystanders due to the use of specific types of NLW, and the Secretary of Defense should direct the Joint Staff, in consultation with the Under Secretary of Defense for Policy and the Services, to provide clearer weapons employment guidance that can be used to modify or augment existing rules of engagement or rules for the use of force for both warfighters and domestic responders on how non-lethal weapons should be used under certain conditions, and incorporate this guidance into training curricula. To help DOD conduct more thorough testing and evaluation of non-lethal weapons and aid end users' ability to plan by knowing what to expect from NLW before using the weapon, the Secretary of Defense should direct the JNLWD and the military services to finalize and implement a risk assessment methodology for human effects testing of NLW and develop a timeline for implementing the methodology. In written comments on a draft of this report, DOD concurred with five of our recommendations, partially concurred with the other three, and described actions it is taking or will take to implement all of the recommendations. DOD's comments are reprinted in appendix II. DOD also provided technical comments, which we have incorporated into this report as appropriate. The Departments of Justice and Homeland Security also reviewed a draft of this report and had no comments. DOD concurred with our recommendation that DOD assess and document the extent to which non-lethal weapons efforts at the technology development stage and beyond address the highest-priority capability gaps and stated that it will incorporate a methodology for accomplishing this into the NLW Capabilities Roadmap and into the overall Joint Non-lethal Weapons program management process. With respect to our recommendation that DOD integrate logistics and supportability planning into NLW development efforts at the earliest possible stage, DOD agreed and stated that it would elevate consideration of logistics and supportability during program reviews and through the use of other existing tools, as appropriate. DOD agreed with our recommendation that it ensure that NLW strategic guidance that sets out goals, objectives, and a framework for research, development, and acquisition is established and routinely updated. DOD stated that to implement this recommendation, in addition to completing updates to DOD Directive 3000.3 and the Joint Services Memorandum of Agreement, the Office of the Under Secretary of Defense (AT&L) is working with the JNLWD to develop a new version of the Non-lethal Weapon Capabilities Roadmap and plans to be more active in the NLW Joint Integrated Product Team. DOD agreed with our recommendation that it develop and execute a methodology for monitoring all NLW-related funding and programs across DOD and designate a central focal point to coordinate the effort with the JNLWD. DOD stated that AT&L will coordinate with the directorate to develop and implement a methodology for monitoring NLW funding and progress across the department in order to provide a more effective foundation for decision making. 
DOD also agreed with our recommendation that DOD finalize and implement a risk assessment methodology for human effects testing of NLW and develop a timeline for implementing the methodology. DOD stated that, in addition to implementing a risk assessment framework across technology development programs, the JNLWD has begun to develop a human effects characterization guidance document that will become standard across DOD. We believe that these steps will improve management and operations of the Joint Non-lethal Weapons Program and encourage DOD to fully implement them as soon as possible. DOD partially concurred with our recommendation that the Secretary of Defense require the Under Secretary of Defense for AT&L to oversee the development of performance evaluation criteria to guide decisions on how and for how long to allocate resources to research and development efforts. DOD agreed that enhanced performance evaluation criteria could better guide resource allocation decisions but stated that it does not believe new measures are needed. Nevertheless, DOD stated that the JNLWD, with oversight from AT&L and the Joint Integrated Product Team, would improve existing evaluation criteria to more effectively guide resource allocation decisions. DOD partially concurred with our recommendation that DOD articulate a methodology and develop a time frame for determining acceptable risk with respect to lethality and permanent injury for operators, targets, and bystanders due to the use of specific types of NLW. DOD agreed with the need for a methodology and time frame for assessing the risks inherent in employing non-lethal weapons. DOD stated that, as we mentioned in our draft report, the Risk of Significant Injury methodology will help address our recommendation and that DOD intends for this methodology to be the basis for the human effects characterization guidance document in development. DOD stated that it does not believe that such a methodology should articulate thresholds for acceptable risk and that such determinations should be left to military commanders with the advice of legal advisors. DOD also stated that it does not believe an acceptable risk methodology should include the risk to operators of a weapon because such risks are already addressed in the existing acquisition process. We agree that military commanders (with the appropriate legal advice) should make the determination of acceptable risk when employing any weapon—including NLW. Our intent was to highlight that in order to make these decisions, military commanders require accurate information on what effects an NLW should be expected to have. By finalizing and fully incorporating the Risk of Significant Injury methodology and guidance into NLW efforts and properly implementing them, DOD should be able to arrive at the kind of consistent and accurate information needed. To the extent the acquisition process includes risk to the operator of an NLW, we would expect that this information would be provided to commanders in the same context as risk to targeted individuals and bystanders. DOD partially concurred with our recommendation that DOD provide clearer weapons employment guidance that can be used to modify or augment existing rules of engagement or rules for the use of force for both warfighters and domestic responders on how NLW should be used, and incorporate this guidance into training curricula. 
DOD agreed with the necessity of doctrine that more clearly addresses NLW employment, but stated that such doctrine should be integrated into existing policy documents rather than issued as separate employment guidance. To the extent that clear guidance on the employment of NLW overseas and in the United States can be incorporated into existing or supplemental documents, we agree that this should allow DOD to clarify how NLW are intended to be employed across the wide range of operational circumstances, enhance the broad understanding of the use-of-force continuum, and facilitate the modification of training curricula. We continue to believe that, in whatever form it is presented, DOD should provide the clearest possible guidance. As we discussed in our report, policy and doctrine tend to drive training, and the clearer they are, the better the training that can be provided. Although not addressing a specific recommendation, DOD expressed concern that we did not sufficiently acknowledge the positive steps taken and important contributions made by its investments in NLW, and cited the contribution of the Non-Lethal Capability Sets in Operations Enduring Freedom and Iraqi Freedom, where they provided U.S. forces with valuable escalation-of-force options. We acknowledged the fielding of the Non-Lethal Capability Sets in our draft report. We also acknowledge that DOD has made progress in adapting the way it conducts operations by expanding the use and potential use of NLW, and also that several of the commercial-off-the-shelf items fielded under urgent requests have proven valuable and timely to the warfighter. Moreover, we recognize that the Non-Lethal Capability Sets have been requested by Army units and that the Army Chief of Staff directed a requirement that all brigade combat teams be issued these sets. However, DOD officials, including some who are part of the Joint Non-lethal Weapons Program, have pointed out to us that range limitations of current munitions and other Non-Lethal Capability Set items are important factors driving the JNLWD's and services' current development efforts. We continue to believe that in order to achieve the kinds of operational flexibility DOD seeks, including saving lives, greater effort is required to align policies, doctrine, technology, and logistics. DOD also expressed the view that we had not accurately portrayed its efforts with respect to the Active Denial System. DOD stated that it intended the Active Denial System to be a concept demonstration and did not intend to develop a fully integrated, production-ready system. Although we acknowledged in our draft report that the Active Denial System was a concept demonstration, we observed that the level of resources the JNLWD devoted to the Active Denial System in comparison to its other efforts indicated a significant investment in a capability that was intended to eventually meet warfighters' needs. Our discussion of the Active Denial System in our report was primarily in the context of illustrating specific findings. For example, we noted missteps with regard to the effort to deploy Active Denial System 2 as a way of illustrating gaps in DOD's emphasis on fully developing logistics and supportability plans at the earliest possible stage of development. We are sending copies of this report to the Secretary of Defense, the Secretary of Homeland Security, the Attorney General, and other interested parties. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To identify the extent to which the Department of Defense (DOD) has developed or fielded non-lethal weapons (NLW) or capabilities since the NLW program's inception, we obtained and analyzed the lists of developmental efforts and fielded items from the directorate and the services, and compared these with the lists in the joint services' manual on NLW tactics, techniques, and procedures. We interviewed program management officials from the directorate as well as Marine Corps Systems Command, the Army Program Executive Office for Close Combat Systems, the Office of the Chief of Naval Operations, the Under Secretary of the Air Force for Acquisition, and U.S. Coast Guard headquarters. We attended the spring and fall 2008 meetings of both the Joint Coordination and Integration Group and the Integrated Product Team, and reviewed briefings prepared to support prior years' meetings. We reviewed the products of the capabilities-based assessment conducted under JCIDS (for example, the Functional Area Analysis, Functional Needs Analysis, and Joint Capabilities Document for Non-Lethal Capabilities) as well as the program information worksheets and Investment Decision Support Tool that the directorate uses to help it analyze priorities in light of identified gaps. To identify DOD non-lethal weapon program funding since 1997, we compiled and analyzed non-lethal weapon program budget information from the directorate and the services and reviewed DOD's fiscal year 2009 budget submissions and future years defense program data. We also reviewed DOD and Joint Non-Lethal Weapons Directorate management guidance as well as DOD acquisition management criteria and federal internal control standards. We used the budget data to provide context for our discussion and concluded that the figures were sufficient for that purpose. However, since NLW funding information is not centralized, we were not assured that the identified funding amount allocated to NLW programs was comprehensive. To determine the extent to which DOD has established and implemented policy and doctrine, we reviewed and analyzed joint and service directives and other publications, and conducted interviews with cognizant officials in DOD, including the Office of the Under Secretary of Defense for Policy, Arlington, Virginia (both the Office of the Assistant Secretary of Defense, Homeland Defense and America's Security Affairs, and the Assistant Secretary of Defense for Special Operations/Low Intensity Conflict and Interdependent Capabilities); National Guard Bureau, Operations (J34), Arlington, Virginia; Office of the Under Secretary of Defense, Acquisition, Technology, and Logistics, Arlington, Virginia; Office of the Deputy Commandant of the Marine Corps, Plans, Policies, and Operations, Arlington, Virginia; and Joint Non-Lethal Weapons Directorate, Quantico, Virginia. In addition, we held teleconferences with officials from U.S. Northern Command headquarters and U.S. Army Training and Doctrine Command headquarters. We also interviewed Department of Homeland Security officials with the Science and Technology Division, Customs and Border Protection, and U.S. Coast Guard Headquarters, all in Washington, D.C. 
Also in Washington, we interviewed Department of Justice officials within the National Institute of Justice. To determine the extent to which DOD has established and implemented NLW training, we also met with officials at the Army Non-lethal Scalable Effects Center at the U.S. Army Military Police School at Fort Leonard Wood, Missouri, and the Marine Corps' Interservice Non-Lethal Individual Weapons Instructor Course at Fort Leonard Wood, Missouri, and reviewed training materials, including the training course manual. To determine the extent to which NLW have undergone testing and evaluation, we first reviewed overarching acquisition policy—which includes both DOD Directive 5000.1, The Defense Acquisition System, and DOD Instruction 5000.2, Operation of the Defense Acquisition System—to ascertain test and evaluation guidelines for programs such as weapons that must be procured. We also reviewed the Risk Management Guide for DOD Acquisition and DOD test and evaluation guidance. We then compared the results of independent human effects assessment review panels with DOD test and evaluation guidance, compared DOD's prefielding testing requirements with the documentation that recorded the tests actually performed, and compared Service urgent and standard fielding requirements. Service-specific fielding policies, such as Army Regulation 700-142, Type Classification, Materiel Release, Fielding, and Transfer, and Marine Corps Order 5000.23, Policy for the Fielding of Ground Weapon Systems and Equipment, provided information about what testing was required prior to fielding under various circumstances. For the few NLW fielded, we reviewed the status of test and evaluation master plans and relevant documentation as well as Human Effects Review Board assessments to determine if adequate testing was completed prior to fielding. In addition to meeting with DOD and services' test and evaluation officials, we also interviewed officials at the Human Effects Center of Excellence to discuss NLW testing in detail. Except where noted, we limited our discussion of technology development to those items that were specifically designed to conform to the DOD definition of non-lethal weapons. To conduct our work, we interviewed officials in the following DOD organizations at the stated locations: Office of the Under Secretary of Defense for Policy in Arlington, Virginia; Office of the Under Secretary of Defense, Acquisition, Technology, and Logistics in Arlington, Virginia; Office of the Assistant Secretary of Defense, Homeland Defense and America's Security Affairs in Arlington, Virginia; Office of the Assistant Secretary of Defense for Policy, Special Operations/Low Intensity Conflict in Arlington, Virginia; Office of the Director, Operational Test & Evaluation in Arlington, Virginia; Joint Staff (J8) Force Application Engagement Division in Arlington, Virginia; U.S. Special Operations Command in Tampa, Florida; U.S. Central Command in Tampa, Florida; U.S. 
Northern Command via teleconference; National Guard Bureau, Operations (J34) in Arlington, Virginia; Joint Non-Lethal Weapons Directorate in Quantico, Virginia; Office of the Secretary of the Air Force, Assistant Secretary for Acquisition (Deputy Assistant Secretary for Science, Technology, and Engineering), Science and Technology Division, in Arlington, Virginia; Air Force Security Forces Center in San Antonio, Texas; Human Effects Center of Excellence personnel with the Air Force Research Laboratory in San Antonio, Texas; and Department of the Army Headquarters (G3/G8) in Arlington, Virginia. We conducted our review from March 2008 through April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix IV: Non-Lethal Programs under Development That Were Evaluated by the Investment Decision Support Tool: Mk 19 Short Range Non-Lethal Munition; Airburst Non-Lethal Munition, Low Velocity; Vehicle Lightweight Arresting Device; Single Net Solution/Remote Deployment Device; Optical Warning Distraction and Suppression (OWDS); Mobile Active Denial System (ADS); 66mm Light Vehicle Obscuration Smoke System Grenades; and Human Electro Muscular Incapacitation X-26 (TASER®). ... requirement could be met by leveraging the U.S. Marine Corps' Ocular Interruption and U.S. Navy's Unambiguous Warning Device programs. OI has a schedule to reach a Milestone B decision in 2009. ... requested in the latest Program Objectives Memorandum. In addition to the contact named above, Joseph Kirschbaum, Assistant Director; Sandra Burrell; Scott Clayton; Grace A. Coleman; James Driggins; David F. Keefer; Gregory Marchand; Sally Newman; Rae Ann Sapp; Rebecca Shea; and Jena Whitley made key contributions to this report.
Non-lethal weapons (NLW) provide an alternative when lethal force is undesirable. The Department of Defense (DOD) defines NLW as those that are explicitly designed and primarily employed to incapacitate personnel or materiel, while minimizing fatalities, permanent injury to personnel, and undesired damage to property and the environment. DOD created the Joint Non-Lethal Weapons Program in 1996 to have centralized responsibility for the development of NLW and coordinate requirements among the services. GAO was asked to review the status of NLW programs within DOD and the military services by identifying the extent to which (1) DOD and the Joint Non-Lethal Weapons Program have developed and fielded NLW since the program's inception; (2) DOD has established and implemented policy, doctrine, and training for NLW; and (3) DOD has conducted testing and evaluation prior to fielding NLW. GAO reviewed and analyzed DOD and service plans, guidance, and doctrine and interviewed officials associated with NLW development. The joint non-lethal weapons program has conducted more than 50 research and development efforts and spent at least $386 million since 1997, but it has not developed any new weapons, and the military services have fielded four items stemming from these efforts that only partially fill some capability gaps identified since 1998. Three major factors contribute to the program's limited progress in fully addressing capability gaps. First, DOD did not prioritize departmentwide non-lethal capability gaps until 2007 and still does not fully address these gaps. Second, DOD has not consistently incorporated logistics and supportability considerations early in the development process. As a result, DOD may miss opportunities to allocate resources more effectively. Third, DOD has exercised limited general oversight of the NLW program, which has resulted in gaps in key program guidance as well as limited measurement of progress and performance. For example, DOD's road map of ongoing and projected NLW capabilities and efforts could be used to discharge oversight responsibilities, but the road map lacks guidance about how to allocate resources and evaluate performance. Further, DOD has no single organization with visibility over all spending, and available budget information may not fully capture all spending associated with the development of non-lethal capabilities. DOD has begun to incorporate ideas about non-lethal capabilities into policy, doctrine, and training but has not yet clearly articulated what constitutes acceptable risk for fatality, fully developed weapons employment policies for the use of force in overseas warfighting or homeland applications, or ensured that warfighters and domestic responders are fully trained in NLW use. Until these issues are resolved, doctrine and training for non-lethal weapons may be limited, and the warfighter or domestic responder may have few options other than resorting to lethal force. DOD lacks a clear methodology for estimating the human effects of non-lethal weapons and does not fully test and evaluate many non-lethal weapons because they have been fielded under urgent operational requirements that abbreviate normal DOD testing standards. Testing can be bypassed for commercial items because DOD officials can use contractor test data instead of conducting their own tests. Therefore, when NLW are fielded, commanders are uncertain about acceptable risk to targets and bystanders and cannot accurately predict their effects. 
DOD has begun to develop elements of a risk assessment methodology to address human effects testing; for example, it has drafted a Risk of Significant Injury scale, which broadly categorizes the levels of health care capability required to reverse NLW effects. However, DOD has not completed a risk assessment methodology that would provide commanders the information they need to determine whether a weapon's risk is acceptable in their operating environment.
You are an expert at summarizing long articles. Proceed to summarize the following text: OSHA was established after the passage of the Occupational Safety and Health Act in 1970. In the broadest sense, OSHA was mandated to ensure safe and healthful working conditions for working men and women. The act authorizes OSHA to conduct “reasonable” inspections of any workplace or environment where work is performed by an employee of an employer. The act also requires that OSHA conduct investigations in response to written and signed complaints of employees alleging that a violation of health or safety standards exists that threatens physical harm, or that an imminent danger exists at their worksites, unless OSHA determines that there are no reasonable grounds for the allegations. OSHA inspections fall into two broad categories: those that are “programmed” and those that are “unprogrammed.” Programmed inspections are those the agency plans to conduct because it has targeted certain worksites due to their potential hazards. Unprogrammed inspections are not planned; instead, they are prompted by things such as accidents or complaints. How OSHA responds to complaints has changed over time. In the wake of the Kepone case, OSHA started to inspect virtually any complaint, which led to a backlog of complaint-driven inspections, according to officials we interviewed. In its early response to the backlog, OSHA adopted a complaint process whereby each complaint was categorized based on whether or not it was written and signed by complainants. “Formal” complaints met both conditions, while “nonformal” complaints were oral or unsigned. OSHA further categorized complaints by the seriousness of the hazard alleged. Formal complaints were inspected regardless of whether the hazard alleged was serious, although offices were given longer time frames for responding to those that were other than serious. The agency generally handled nonformal complaints by sending the employer a letter. Agency officials said that as a result of these distinctions, the agency was able to reduce some of its backlog. A new effort to reform the complaint procedures was made through the Complaint Process Improvement Project, which was part of the Department of Labor’s overall reinvention effort from 1994 to 1996. In January 1994, two area offices were selected as pilot sites to develop and test new procedures for handling complaints. Their work focused on an effort to (1) reduce the time needed for handling complaints, (2) speed the abatement of hazards, (3) allow OSHA to focus its inspection resources on workplaces where they were needed most, and (4) ensure consistency. The new procedures placed a greater emphasis on the seriousness of the alleged hazard as a factor for determining how the office would respond to a complaint. In addition, they introduced the use of telephones and fax machines as the means to notify employers of an alleged hazard instead of regular mail and provided specific procedures for following up with employers to make sure hazards were abated. These new policies were adopted and outlined in an OSHA directive dated June 1996. Policies regarding complaints are established by the Office of Enforcement Directorate in Washington, D.C. Regional administrators in each of OSHA’s 10 regional offices oversee the enforcement of these policies within their own regions (see fig. 1). Each region is composed of area offices—there are 80 in total—each under an area director.
The area directors oversee compliance officers—there can be as many as 16 in an office—some of whom play a supervisory role. Compliance officers play a key role in carrying out the directive. At almost all area offices, compliance officers take turns answering the phones, and taking and processing complaints, a collateral responsibility in addition to their duties in the field. OSHA primarily responds to complaints based on the seriousness of the alleged hazard using a priority system that the agency credits with having improved its efficiency. However, its determinations can be affected by inadequate or inaccurate information. OSHA officials usually conduct an on-site inspection if an allegation is of a serious nature. Agency policy also requires on-site inspections in cases where a written and signed complaint from a current employee or their authorized representative provides reasonable grounds to believe that the employer is violating a safety or health standard. In general, OSHA officials conduct an inquiry by phone and fax—referred to as a phone/fax investigation—for complaints of a less serious nature. Many OSHA officials, especially compliance officers, told us this priority-driven system has been more effective in conserving their time and resources. Nevertheless, many of the compliance officers also said that some inspections may occur that are not necessarily warranted because complainants have inadequately or inaccurately characterized the nature of the hazard. On the other hand, almost everyone with whom we spoke said the agency prefers to err on the side of caution so as not to overlook a potential hazard. Many of the OSHA officials we interviewed, as well as officials from states that run their own safety and health programs, suggested approaches to improve the validity of the information accompanying the complaints. According to policy, OSHA initially evaluates all incoming complaints (whether received by fax, e-mail, phone, letter, or in person) to decide whether to conduct an on-site inspection or a phone/fax investigation (see fig. 2). OSHA conducts on-site inspections for alleged serious violations or hazards and makes phone/fax inquiries for allegations of a less serious nature. OSHA considers serious violations or hazards to be those that allege conditions that could result in death or serious physical harm. Specifically, OSHA initiates on-site inspections when the alleged conditions could result in permanent disabilities or illnesses that are chronic or irreversible, such as amputations, blindness, or third-degree burns. As seen in figure 2, though, OSHA will also go on-site when a current employee or his representative provides a written and signed complaint that provides reasonable grounds for believing that a violation of a specific safety and health standard exists. While immediate risks to any employee’s health or safety are the primary factors driving OSHA’s complaint inspections, additional criteria can also prompt an on-site inspection. For example, if an employer fails to provide an adequate response to a phone/fax investigation, OSHA’s policy is to follow up with an on-site inspection. Area office supervisors or compliance officers may call the complainant, if needed, to help understand the nature of the hazard. OSHA officials told us they might ask complainants to estimate the extent of exposure to the hazard and report how long the hazard has existed. 
If an area office supervisor decides that an on-site inspection will be conducted, OSHA’s policy is to limit the inspection to the specific complaint. A violation or another hazard that is in clear sight may be considered, but compliance officers cannot expand the scope of their inspection to look for other violations—a specification that underscores the importance of the complaint’s accuracy. Phone/fax investigations, meanwhile, afford an opportunity to resolve a complaint without requiring a compliance officer to visit the worksite. Instead, the compliance officer contacts the employer by telephone and notifies him or her of the complaint and each allegation. The employer is also advised that he or she must investigate each allegation to determine whether the complaint is valid. The employer can resolve the complaint, without penalty, by providing OSHA with documentation such as invoices, sampling results, photos, or videotape to show that the hazard has been abated. Upon receiving documentation from the employer, the area office supervisor is required to review it and determine whether the response from the employer is adequate. For both on-site inspections and phone/fax investigations, OSHA’s policy is to keep the complainants informed of events by notifying them by letter that an on-site inspection has been scheduled, the outcome of either the inspection or the phone/fax investigation, and the employer’s response. In the case of a phone/fax investigation, the complainant has the right to dispute the employer’s response and request an on-site inspection if the hazard still exists. OSHA can also determine that the employer’s response is inadequate and follow with an on-site inspection. Of the 15 officials who told us they worked for OSHA prior to 1996, and whom we asked about past practices, nearly half said the agency’s current complaint policy has allowed them to better conserve their resources. For example, one 26-year veteran said phone/fax investigations have relieved his compliance officers of traveling to every complaint site for inspections that once averaged as many as 400 per year. Because the employer investigates the allegation first, the phone/fax inquiry is an efficient use of time, according to this supervisor. Of the 20 compliance officers that we asked about this topic, 18 said phone/fax investigations took less time to conduct than on-site inspections. Nearly one-half of these compliance officers told us the phone/fax investigation procedures reduced travel time or eliminated time spent writing inspection reports. The agency handled about two-thirds of all complaints it received in fiscal years 2000 through 2002 through phone/fax investigations. Several OSHA officials we interviewed said OSHA’s phone/fax investigation procedures ease the burden on employers because the employers have an opportunity to resolve the problem. As a result, these officials told us that their interaction with employers has improved. While few of the employers we interviewed had the complaints against them resolved through phone/fax investigations, the three that did expressed satisfaction with the way the allegation was handled. These employers reported that responding to phone/fax investigations required 3 hours, 5 hours, and 2 to 3 days respectively. Only the employer reporting the greatest amount of time believed that the time he invested was inappropriate given the nature of the alleged hazard. 
A 1995 internal OSHA report, which reviewed the new complaint procedures implemented in two area offices as part of a pilot project, also credited phone/fax investigations with improving efficiency, specifically by reducing the time it took to notify employers of alleged hazards and to correct them, as well as with reducing the offices’ complaint backlog. The report found that using phone/fax investigations reduced notification time by at least a week, reduced the average number of days to correct hazards by almost a month in the two offices, and eliminated one office’s backlog and reduced the other’s backlog by almost half during its involvement in the pilot project. The report attributed these gains to compliance officers being able to phone and fax employers to inform them of the allegations instead of relying on mail, promptly contacting employers to clarify allegations and to offer feasible methods for correcting hazardous conditions, and more employees choosing to have their complaints resolved with phone/fax investigations. More than half of the 20 nonsupervisory compliance officers we interviewed told us that complainants’ limited knowledge of workplace hazards and their reasons for filing complaints can affect the quality of the information they provide, which, in turn, can affect OSHA’s determination of the hazard’s severity. They said complainants generally have a limited knowledge of OSHA’s health and safety standards or may not completely understand what constitutes a violation; consequently, they file complaints without knowing whether a violation exists. As a result, the level of hazard can be overstated. For example, one nonsupervisory compliance officer said he received a complaint that alleged a construction company was violating the standards for protecting workers from a potential fall, but found upon arriving at the site that the scaffolding in question was well within OSHA’s safety standard. Over half of the nonsupervisory compliance officers (13 of 20) said that there were “some or great” differences between what complainants allege and what is ultimately found during inspections or investigations, because complainants may not completely understand what constitutes an OSHA violation or they have a limited knowledge of OSHA’s standards. Complainants’ limited knowledge of OSHA’s health and safety standards can also result in compliance officers not knowing which potential hazards to look for when conducting on-site inspections. For example, one compliance officer noted that employees might complain about an insufficient number of toilets but not about machinery on the premises that could potentially cause serious injury. In addition, another compliance officer noted that many times complainants’ descriptions of hazards are too vague, a circumstance that prevents her from locating the equipment that was alleged in the complaint, such as a drill press, and OSHA’s rules preclude her from expanding the scope of the inspection in order to locate the hazard. The quality of the information complainants provide to OSHA can also be influenced by their motives for filing a complaint. For example, half (27 of 52) of the area office directors and compliance officers we interviewed said they have received complaints from employees who filed them as retribution because they were recently terminated from their jobs or were angry with their employers. 
Although this practice was described as infrequent, OSHA officials said that in some instances complainants intentionally exaggerated the seriousness of the hazard or reported they were current employees when in fact they had been fired from their jobs. One official asserted that disgruntled ex-employees have taken advantage of OSHA’s complaint process to harass employers by having OSHA conduct an on-site inspection. Several of the employers we interviewed (4 of the 15) also claimed that disgruntled employees have used the complaint process to harass them. They expressed the view that OSHA should improve its procedures for evaluating the validity of complaints. Some of the compliance officers we interviewed said it is not unusual to experience an increase in the number of complaints during contract negotiations. One official told us that in a region where he once worked, union workers filed multiple complaints in order to gain leverage over the employer. A union official acknowledged that this occurred but noted that it was infrequent. Other OSHA officials told us that competitors of companies sometimes file complaints when they lose a competitive bid for a work contract. One official said that while company representatives do file complaints against each other to disrupt the other company’s work schedule, such tactics are not typical in his region. Despite these problems, several of the OSHA officials we interviewed said OSHA’s obligation is to evaluate whether there are reasonable grounds to believe that a violation or hazard exists, rather than trying to determine a complainant’s motives for filing the complaint. In fact, 34 of the 52 officials we interviewed told us that almost all of the complaints they see warrant an inspection or an investigation, and as a result, many of the area offices inspect or investigate most of the complaints that are filed. One official said he would prefer to conduct an inspection or do a phone/fax investigation for an alleged hazard, rather than not address the complaint and have it result in a fatality. When asked during interviews about ways OSHA could improve its process for handling complaints, officials from OSHA and from states that run their own health and safety programs suggested approaches the agency could take to improve the information they receive from complainants. Although some offices were actively engaging in these practices, others reported that they were being used only to some or little extent. Their recommendations were of three types; the first was in regard to strategies for improving the validity of complaints that OSHA considers. Many OSHA area directors and compliance officers said the agency could warn complainants more explicitly of penalties for providing false information, which could be as much as $10,000 or imprisonment for as long as 6 months, or both. This warning is printed as part of the instructions on the complaint form available on OSHA’s Web site. However, OSHA’s complaint policies and procedures directive states that area offices will not mail the form to complainants; consequently, complainants primarily receive the penalty warning only if they access the Web-based form. In contrast, an official from one of the state programs reported that his state’s program requires complainants to sign a form with penalty information printed in bold above the signature line. According to the state official, this policy has reduced by half the number of invalid complaints. 
Several OSHA supervisors and directors expressed reservations about having compliance officers make verbal warnings to complainants about providing false information while taking their complaints, saying it could prevent some complainants who are already fearful from reporting hazards. Of the 52 OSHA officials we interviewed, 23 said the extent to which they remind complainants of the penalty for providing false information is “little or none at all.” Furthermore, several officials said complainants report hazards based on a perceived violation; therefore, they doubted a hazard that turned out to be invalid would result in a penalty. To further improve the validity of complaints, one official pointed to his state’s practice of generally conducting on-site inspections only for a current employee or an employee’s representative. According to the state health and safety official, this policy improves the validity of information because current employees can more accurately describe the hazard than an ex-employee who has been removed from the environment for some time and whose relationship with the employer may be strained. Another state’s health and safety official said her state has a policy that allows its managers to decline any complaint they determine is intended to willfully harass an employer, which also helps improve the reliability of complaints. According to this official, however, managers seldom find that a complaint was filed to willfully harass an employer. The state also has a policy that allows managers to dismiss any complaint they determine is without any reasonable basis. A second approach suggested by many OSHA officials was to improve complainants’ ability to describe hazards accurately. Of the 52 officials that we interviewed, 14 said OSHA could, for example, conduct more outreach to educate both employees and employers about OSHA’s health and safety standards. Although OSHA area offices already participate in outreach activities, such as conducting speeches at conferences or making presentations at worksites, several of the officials we interviewed said the agency could do more. For example, one compliance officer suggested developing public service announcements to describe potential hazards, such as trenches without escape ladders, and to provide local OSHA contact information for reporting such hazards. One official expressed the opinion that if OSHA were to conduct more outreach to employees, the quality of complaints would likely improve. Another compliance officer suggested that OSHA engage in more preconstruction meetings with employers to discuss OSHA’s regulations and requirements and share ideas for providing safer working environments. One interviewee said if employers were more knowledgeable about hazards, there would be less need for workers to file complaints. Finally, OSHA officials said the agency could take steps to improve the ability of employers and employees to resolve complaints among themselves before going to OSHA. Many of the officials that we interviewed said their offices could encourage employers to form safety committees or other internal mechanisms to address safety concerns. Ten of the 52 officials we interviewed told us the extent to which their offices promote or encourage safety committees was “little to none at all.” Only some of these officials said that this lack of promotion stemmed from the requirements of the National Labor Relations Act (NLRA), which some believe may prohibit or hinder the establishment of safety committees. 
OSHA’s policy for responding to complaints requires compliance officers to address complaints in a systematic and timely manner; however, we found practices used by area offices to respond to complaints varied considerably. While some of these practices involved departures from OSHA policy, others were practices that varied to such a degree that they could result in inconsistent treatment of complainants and employers. In particular, we found several instances where area offices departed from the directive by persuading complainants to choose either an on-site inspection or a phone/fax investigation, and by having nonsupervisory compliance officers evaluate complaints. We also found several instances where practices were inconsistent. Among the 42 offices we contacted, we found that some conducted follow-up inspections on a sample of closed investigation cases to verify employer compliance, and others did not. Since issuing its new directive for handling complaints in 1996, however, OSHA has issued no guidance to reinforce, clarify, or update those procedures. In addition, while OSHA requires its regional administrators to annually audit their area office operations, some administrators do not, and further, for those who do, OSHA does not have a mechanism in place to review the results and address problems on an agencywide level. In our interviews with 52 randomly selected supervisory and nonsupervisory officials in 42 of the 80 area offices, we found practices that appeared to depart from OSHA’s official policies. In particular, agency policy calls for supervisors to evaluate each complaint. However, 22 of the 52 officials to whom we talked said nonsupervisory compliance officers in their offices are sometimes the decision makers for whether complaints are inspected or pursued through phone/fax investigations. In some of these offices, compliance officers make the decision if the complaint is less than serious. In addition, some officials told us that if the case was earmarked for an inspection or was challenging, the supervisor would then review it. While OSHA’s directive addresses supervisory review within the context of inspections, an OSHA national director informed us that it is agency policy to have supervisors review each and every complaint. In addition, agency policy prescribes that compliance officers explain to complainants the relative advantages of both phone/fax investigations and inspections, if appropriate. However, 16 of the 52 officials to whom we spoke said they encourage complainants, in certain circumstances, to seek either an inspection or an investigation. For example, one official said that his office “sells” phone/fax investigations because they are faster to conduct and lead to quicker abatement than on-site inspections. However, an OSHA national director stressed to us that duty officers should not attempt to persuade complainants. Another practice that appeared inconsistent with policy was the treatment of written, signed complaints. Current employees and their representatives have the right to request an inspection by writing and signing a complaint, but before an inspection may take place, OSHA must determine that there are reasonable grounds for believing that a violation of a safety or health standard exists or that a real danger is present. Area office supervisors are to exercise professional judgment in making this determination. Of the 52 officials with whom we spoke, 33 said their offices exercise professional judgment by evaluating written and signed complaints.
However, most of the remainder were about equally split in reporting that they evaluate these complaints “sometimes” (7 of 52) or forgo evaluation altogether and automatically conduct on-site inspections (8 of 52). Finally, while we found that complaint policy was generally followed at the three OSHA offices where we reviewed case files, we did find that one office had not been sending a letter to complainants to notify them of a scheduled inspection. According to the OSHA directive, complainants should be notified of inspections. During telephone interviews, officials described practices that, while they did not depart from agency policy, varied significantly from office to office. For example, offices differed in whether they treated e-mails as phone calls or as written and signed complaints. Of the 52 officials with whom we spoke, 12 said they treated complaints received via e-mail as written and signed complaints, while 34 said they treated them as phone complaints. While agency policy is silent on how to classify e-mail complaints, this inconsistency is important because written and signed complaints are more likely to result in on-site inspections. Offices also differed in whether or not they performed random follow-up inspections for phone/fax investigations. While 10 of the 52 officials said they did not know if their offices conducted follow-up inspections, most of the remainder were about equally split in reporting that either they did (18 of 52) or did not (20 of 52) do them. Although the directive does not require follow-up inspections, the OSHA letters sent to employers say they may be randomly selected for such inspections. This inconsistency in practice across offices is significant insofar as follow-up inspections can be seen either as an added burden to employers or as an important safeguard for ensuring abatement. We also found variation in how offices determined whether a complainant was a current employee. The employment status of a complainant is important, as it is often a factor in evaluating the complaint. Of the 52 OSHA officials with whom we spoke, 30 said their offices determine whether a complainant is a current employee simply by asking the complainant; 11 said they ask probing questions of the complainant, and 5 said they ask the complainant for some type of documentation, such as a pay stub. While the directive does not specify how compliance officers are to verify employment status, the methods used to obtain this information can affect its accuracy. Finally, we found that some area offices differ significantly in how they respond to complaints for which OSHA has no standard, specifically those involving substance abuse in the workplace. For example, during a site visit to one area office, an official explained that his office would not do a phone/fax investigation in response to complaints alleging drug use at a workplace, but would refer them to the police instead. However, another area office conducted a phone/fax investigation for a complaint about workers drinking alcoholic beverages while operating forklifts and mechanical equipment. An official in a third area office told us that his office has sometimes referred complaints about drug use at a workplace to the local police and at other times has responded to similar complaints with a phone/fax investigation. An OSHA national director told us that area offices are obligated to do phone/fax investigations for alleged drug use in the workplace.
OSHA policy requires that regional administrators annually audit their area offices and that audit results be passed on to the Assistant Secretary. However, this is not current practice. Regional administrators are required to focus the audits on programs, policies, and practices that have been identified as vulnerabilities, including the agency’s complaint-processing procedures. However, according to OSHA’s regional administrators, only 5 of the agency’s 10 regions conduct these audits annually, while 3 conduct the audits, but only for a proportion of their area offices each year, and 2 do not conduct the annual audits at all. In addition, according to one national director, all of the regional administrators are to submit the results of their audits to a Program Analyst in the Atlanta area office for review. The results of this review are to be reported to the Deputy Assistant Secretary for Enforcement, as well as to the responsible directorate, and they are responsible for addressing issues of noncompliance and determining what, if any, policy changes are needed. However, the Program Analyst in Atlanta said he does not receive all of the audits from each region as required, and an official from one of OSHA’s directorates told us his office does not receive such reports. The findings from the seven audits we reviewed underscore their value for monitoring consistency. These audits showed that most of the audited offices were (1) not correctly following procedures for meeting the time frames for initiating on-site inspections, (2) closing phone/fax investigation cases without obtaining adequate evidence that hazards had been corrected, and (3) not including all required documentation in the case files. To some extent, complaints have drawn OSHA compliance officers to sites with serious hazards. According to OSHA’s data for fiscal years 2000 and 2001, compliance officers found serious violations at half the worksites inspected in response to complaints, a figure comparable to inspections conducted at worksites targeted for their high injury and illness rates. However, in one of our earlier reports, we expressed concern that for targeted inspections a 50 percent success rate may raise questions about whether inspection resources are being directed at sites with no serious hazards. Complaint-driven inspections shared other similarities with planned inspections; specifically, compliance officers cited similar standards during both types of inspections. On the other hand, complaint inspections often required more time to complete. Finally, we found a correlation between hazardous industries and complaint inspections. Specifically, those industries that, according to BLS data, had more injuries and illnesses also generally had a larger number of complaint inspections according to OSHA data. OSHA compliance officers found serious violations in half of the worksites they inspected when responding to complaints alleging serious hazards, according to OSHA’s data for fiscal years 2000 and 2001 combined. These are hazards that pose a substantial probability of injury or death. During planned inspections—those conducted at worksites targeted for their high injury and illness rates—OSHA compliance officers found serious violations, such as those involving respiratory protection and control of hazardous energy, in a similar percentage of worksites.
Specifically, as shown in table 1, OSHA compliance officers found serious violations in 50 percent of the 17,478 worksites they inspected during complaint-driven inspections. Likewise, they found serious violations in 46 percent of the 41,932 worksites they targeted during planned inspections. In a previous report, we noted that this percentage might indicate that inspection resources are being directed to worksites without serious hazards. According to OSHA, many complaints come from the construction industry, where the work is often dangerous and of a short duration. As a result, even if an inspection begins immediately, “citable” circumstances may no longer exist, a fact that, according to the agency, might explain why the number of serious violations that result from complaints is not higher. We found that, in contrast to planned inspections, complaint-driven inspections require, on average, more hours per case to complete. Table 2 shows that OSHA compliance officers have required about 65 percent more time for complaint-driven inspections than for planned inspections—29.7 hours on average compared with 18.1 hours—suggesting that while outcomes are similar, complaint-driven inspections are more labor intensive than planned inspections. Compared with planned inspections, complaint-driven inspections have a higher rate of health inspections, which, according to an OSHA national director, place extra time demands on compliance officers to obtain samples, test them, and document the results. Phone/fax investigations, by comparison, require far less time on average than either complaint-driven or planned inspections. In terms of the types of hazards they uncover, complaint-driven inspections shared some similarities with planned inspections that target the most hazardous sites. Of the 10 standards OSHA compliance officers cited most frequently for violations during complaint-driven inspections, 7 were also among the 10 most frequently cited during planned inspections. Table 3 shows the rank ordering of hazards cited most frequently during planned inspections and complaint-driven inspections. However, table 3 also shows that there were some differences in the frequency with which compliance officers cited particular hazards during planned inspections, compared with complaint-driven inspections. For example, the standard most frequently cited during planned inspections, general requirements for scaffolds, is the 18th most frequently cited standard during complaint-driven inspections. Likewise, the standard cited with the second highest frequency in planned inspections, “fall protection,” is not within the 10 standards most frequently cited for complaint-driven inspections. Such examples indicate that some differences exist in the type of hazards compliance officers found at worksites about which workers have complained and at those OSHA targeted for inspection. Our analysis found a correlation between injuries and illnesses reported in industries and the rate at which complaints were inspected. As shown in figure 3, industries associated with higher rates of injuries and illnesses also tended to have a higher rate of complaint inspections than did industries with lower injury and illness rates, according to OSHA’s data. For example, one industry, transportation equipment, had 12.6 injuries and illnesses per 100 full-time workers in 2001 and had a relatively high rate of complaint inspections, .016 per 100 full-time workers.
Conversely, the motion picture industry, which had only 2.5 injuries and illnesses per 100 full-time workers in 2001, had a relatively low incidence rate for complaint inspections, .0015 complaint inspections per 100 full-time workers. For a handful of industries, the pattern of high injury and illness rates associated with high complaint inspection rates did not apply. For these industries, the number of complaint inspections per 100 full-time workers was either far higher or far lower than might have been expected given the number of injuries and illnesses per 100 full-time workers. For example, the air transport industry had the highest injury and illness rate for 2001, but its complaint inspection rate was lower than those for all but 1 of the 10 industries with the highest injury and illness rates. In another example, while the general building contractors industry had the highest complaint inspection rate of any industry, over a third of all industries had higher injury and illness rates. Table 4 shows industries that were highest or lowest in terms of injuries and illnesses and their corresponding rates of complaint inspections. Since 1975, OSHA has had to balance two competing demands: the need to use its inspection resources efficiently and the need to respond to complaints about alleged hazards that could seriously threaten workers’ safety and health. In light of this ongoing challenge, OSHA has adopted complaint procedures that, according to agency officials, have helped OSHA conserve its resources and promptly inspect complaints about serious hazards. Nonetheless, in deciding which complaints to inspect, OSHA officials must depend on information provided by complainants whose motives and knowledge of hazards vary. Many OSHA officials do not see the quality of this information as a serious problem. However, considering that serious violations were found in only half of the workplaces OSHA officials inspected when responding to complaints, it seems likely that the agency, employers, and workers could all be better served if OSHA improved the quality of information it receives from complainants. When OSHA conducts inspections of complaints based on incomplete or erroneous information, it potentially depletes inspection resources that could have been used to inspect or investigate other worksites. In addition, employers may be forced to expend resources proving that their worksites are safe when no hazard exists. OSHA should certainly not discourage workers from making complaints or pursuing a request for an OSHA inspection. Indeed, the correlation we found between those industries designated as hazardous and those that generate complaint inspections suggests that using complaints to locate hazardous worksites is a reasonable strategy for the agency to pursue. However, to the extent that OSHA officials could glean more accurate information from complainants, such as by deterring disgruntled employees from misrepresenting hazards or their employment status, the agency could benefit in several ways. With better information, OSHA could better conserve its inspection resources, minimize the burden on employers, and further enhance the agency’s credibility in the eyes of employers. In addition, if the strategies described by OSHA officials as effective means to improve the quality of complaints are not being fully utilized, OSHA may miss opportunities to maximize the efficiency its complaint process might afford.
Some variation in how OSHA officials respond to complaints is inevitable, particularly considering that there are 80 area offices with as many as 16 compliance officers in each office. Nevertheless, the inconsistencies that we found have ramifications when considering the size of the agency and the judgment that comes into play when handling complaints. Moreover, OSHA has much to gain by upholding a reputation for fairness among employers. When employers buy into OSHA’s standards and comply voluntarily, the agency can better use its 1,200 compliance officers to ensure worker safety at the more than 7 million worksites nationwide. However, OSHA’s credibility could be damaged by procedural inconsistencies if, for example, they resulted in different treatment and disposition of similar complaints. While OSHA requires regional audits for monitoring consistency, the failure to maximize the value of this information limits the agency’s ability to ensure one of the underlying principles of its complaint policy. We are making recommendations that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to instruct area offices to pursue practices to improve the quality of information they receive from complainants, such as reminding complainants of the penalties for providing false information, conducting outreach to employees regarding hazards, and encouraging employers to have safety committees that could initially address complaints. We are also recommending that the Secretary direct the Assistant Secretary for Occupational Safety and Health to take steps to ensure that area offices are consistently implementing the agency’s policies and procedures for handling complaints. As a first step, the agency should update and revise the 1996 directive. In revising the directive, the agency should update and clarify how complainants are advised of the process, how written and signed complaints are evaluated, how to verify the employment status of complainants, how to treat e-mail complaints, and how to address complaints involving hazards for which the agency has no specific standard. In addition, we are recommending that the Secretary direct the Assistant Secretary for Occupational Safety and Health to develop a system for ensuring the regions complete audits and develop a system for using the audit results to improve consistency of the complaint process. We received comments on a draft of this report from Labor. These comments are reproduced in appendix II. Labor also provided technical clarifications, which we incorporated where appropriate. Although Labor recognized in its comments that most complaints are anonymous and unsigned—a fact that makes it difficult to find employees to obtain their views about the complaint process—the agency recommended that we acknowledge in the report the limited number of employees we interviewed. At the beginning of the report and again at the end, we acknowledged that we interviewed 6 employees. Further, Labor questioned whether the number of employees we interviewed was an adequate number on which to base the conclusions reached in this report. Our conclusions about OSHA’s complaint process were not based solely on employee interviews but were based on a variety of data, including interviews with 52 OSHA officials. 
In determining which OSHA officials to interview, we deliberately included area directors, assistant area directors, and compliance officers, which allowed us to obtain information from officials at various levels in 42 of OSHA’s 80 area offices. Labor also noted that our findings from OSHA’s database, which showed that only half of complaint inspections result in citations for serious violations, do not recognize that many complaints come from the construction industry, where the work is often dangerous and of a short duration, so that even if an inspection begins immediately, “citable” circumstances may no longer exist. We added language to the body of the report to reflect this information. In responding to our first recommendation about improving the quality of information received through complaints, Labor stated that OSHA has taken many steps, both in its online and office-based complaint-taking procedures, to provide guidance to employees to ensure that all complaints are valid and accurate. We maintain, however, that OSHA can do more to improve the validity and accuracy of the complaints it receives. Labor did not comment on our recommendations that OSHA develop a system for ensuring that the regions complete audits of the complaint process and for using the results of these audits to improve the consistency of the process. We will make copies of this report available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or any of your staff has any questions about this report, please contact me at (202) 512-7215 or Revae Moran, Assistant Director, at (202) 512-3863. Our criteria for selecting our site visits were geographical diversity and volume of complaints. We received data from the Occupational Safety and Health Administration (OSHA) regarding the number of complaints each of its area offices processed in 2000, 2001, and 2002. On the basis of these data, we selected the three sites with the largest number of complaints processed in their respective regions, sites that roughly represented the eastern, southern, and western parts of the country. Those sites were Pittsburgh, Pennsylvania; Austin, Texas; and Denver, Colorado. In each of these offices, we examined a statistical sample of case files. We used a standard set of questions, pretested on case files in the Philadelphia, Pennsylvania office, to conduct the case file reviews. In addition, we interviewed compliance officers—both supervisory and nonsupervisory. We randomly selected 38 cases in Denver, 30 cases in Austin, and 34 cases in Pittsburgh from the available list of complaint files processed by these offices in 2000, 2001, and 2002. Austin and Pittsburgh had disposed of their case files for phone/fax investigations for 2000, according to area directors there, who said this was allowed by agency rules for how long files must be kept. As a result, our random selections for Austin and Pittsburgh were drawn from lists that did not include phone/fax investigations for 2000. In addition to our site visits, using standard sets of questions, we interviewed by telephone randomly selected area directors, assistant area directors, and compliance officers in 42 area offices. We obtained from OSHA a list of area directors, assistant area directors (who are supervisory compliance officers), compliance officers, and regional administrators.
We randomly selected 20 of the agency’s 80 area directors and 32 of its 1,200 compliance officers (12 assistant area directors and 20 nonsupervisory compliance officers). We also interviewed officials in all 10 regional offices. Additionally, we conducted telephone interviews with health and safety officials from 13 states that operate health and safety programs apart from OSHA. We selected these 13 states, in part, based on discussions with OSHA. In addition to OSHA officials, we also interviewed employers whose worksites were the subject of a complaint and employees who had filed complaints. OSHA provided us with a database of all employers who in 2000, 2001, or 2002 had worksites that were the subject of complaints and employees who had filed complaints in the same year. From the database we randomly selected 90 employers and 90 employees. We took steps to make sure that employers’ and employees’ contact information was kept separate from their identities and from any information collected from them during their interviews. We also obtained a guarantee of confidentiality from the report’s requester. Of the 90 employers randomly selected, we succeeded in interviewing 15. Of the 90 employees, we succeeded in interviewing 6. Some of the employee complaints randomly selected had been filed anonymously, so contact information was not available. In most cases, those selected could not be reached. Finally, we examined data for fiscal years 2000 through 2002 related to complaints in OSHA’s Integrated Management Information System (IMIS) and looked at data on injuries and illnesses collected and published by the Bureau of Labor Statistics (BLS) for calendar year 2001 as they related to complaints. In addition, for the IMIS data we obtained and reviewed documentation of internal controls and manually tested the data. We interviewed both OSHA and BLS officials to establish the reliability of the data. We found the data to be reliable for our purposes. The following are GAO comments on Labor’s letter dated May 21, 2004. 1. We rephrased our recommendations to reflect Labor’s administrative procedures. 2. Our conclusions are based on site visits to 3 area offices processing large numbers of complaints, reviews of case files in those offices, interviews with 52 OSHA officials—area directors, assistant area directors, and compliance officers—who represented 42 of OSHA’s 80 area offices, interviews with officials in all 10 of OSHA’s regional offices, interviews with the director of the Office of Enforcement, interviews with officials in 13 states that have their own safety and health programs, analysis of data on complaints from OSHA’s Integrated Management Information System, analysis of BLS data on injuries and illnesses, interviews with 15 employers whose companies were the subject of complaints, interviews with 6 employees who filed complaints, and the review of agency documents related to the complaint process. In the appendix on scope and methodology, we corrected the number of employee interviews, changing it to 6 from 8. 3. We have included the agency’s explanation in the final version of the report. 4. We added a note to table 4 acknowledging that OSHA’s jurisdiction is limited in the transportation area and corrected the source of the data in the table. 5.
On the basis of our interviews with OSHA officials who said the agency could do more to improve the quality of information received from complainants, we continue to believe that adopting our recommendation would help the agency better manage its inspection resources. Moreover, we believe that the agency could take such actions without discouraging employees from filing legitimate complaints. Carl Barden, Sue Bernstein, Karen Brown, Amy Buck, Patrick di Battista, Barbara Hills, Mikki Holmes, Cathy Hurley, Julian Klazkin, Jim Lawrence, Luann Moy, Corinna Nicolaou, Sid Schwartz, and Michelle Zapata made key contributions to this report. Workplace Safety and Health: OSHA's Voluntary Compliance Strategies Show Promising Results, but Should Be Fully Evaluated Before They Are Expanded. GAO-04-378, March 19, 2004. Workplace Safety and Health: OSHA Can Strengthen Enforcement through Improved Program Management. GAO-03-45, November 22, 2002. Worker Protection: Labor's Efforts to Enforce Protections for Day Laborers Could Benefit from Better Data and Guidance. GAO-02-925, September 26, 2002. Workplace Safety and Health: OSHA Should Strengthen the Management of Its Consultation Program. GAO-02-60, October 12, 2001. Worker Protection: OSHA Inspections at Establishments Experiencing Labor Unrest. HEHS-00-144, August 31, 2000. Occupational Safety and Health: Federal Agencies Identified as Promoting Workplace Safety and Health. HEHS-00-45R, January 31, 2000.
Each year, OSHA receives thousands of complaints from employees alleging hazardous conditions at their worksites. How OSHA responds to these complaints--either by inspecting the worksite or through some other means--has important implications for both the agency's resources and worker safety and health. Responding to invalid or erroneous complaints would deplete inspection resources that could be used to inspect or investigate other worksites. Not responding to complaints that warrant action runs counter to the agency's mission to protect worker safety and health. Considering OSHA's limited resources, and the importance of worker safety, GAO was asked: (1) What is OSHA's current policy for responding to complaints in a way that conserves its resources, (2) how consistently is OSHA responding to complaints, and (3) to what extent have complaints led OSHA to identify serious hazards? In general, the Occupational Safety and Health Administration (OSHA) responds to complaints according to the seriousness of the alleged hazard, a practice that agency officials say conserves inspection resources. OSHA officials usually conduct on-site inspections for alleged hazards that could result in death or serious injury. For less serious hazards, OSHA officials generally investigate by phoning employers and faxing them a description of the alleged hazard. Employers are directed to provide the agency with proof of the complaint's resolution. OSHA officials said the availability of both options allows them to manage resources more effectively when responding to complaints. However, many agency officials we interviewed said some complainants provide erroneous information about the alleged hazard, which can affect the agency's determination of the hazard's severity. For example, some complainants lack the expertise to know what is truly hazardous and, as a result, file complaints that overstate the nature of the hazard. Others, particularly disgruntled ex-employees, may have ulterior motives when filing complaints and misrepresent the nature of the hazard. In the 42 area offices where we conducted interviews (there are 80 area offices), OSHA officials described practices for responding to complaints that varied considerably. For example, the degree to which supervisors participated in decisions about which complaints would result in inspections and which would not varied across offices. While OSHA requires annual audits that would identify the extent to which its area offices are correctly employing the complaint policies, some regions are not conducting these audits, and agency officials have told us that OSHA does not have a mechanism in place to address agencywide problems. To some extent complaints direct inspection resources where there are serious hazards. At half the worksites OSHA inspected in response to complaints, compliance officers found serious violations--those that posed a substantial probability of injury or death, according to OSHA's own data for fiscal years 2000-2001.
You are an expert at summarizing long articles. Proceed to summarize the following text: As we reported in 2000, commercial activities in school can generally be classified in four categories—product sales, direct advertising, indirect advertising, and market research—although each category encompasses a wide range of activities. For example, advertising activities could range from selling advertisements for a high school football game to selling naming rights to a school. Although this report synthesizes statutes, regulations, and proposed legislation addressing all four categories, our discussions of school district policies and Education’s activities focus on the fourth category, market research, because of the amendments made by NCLBA that place requirements on districts that deal with the collection, disclosure, and use of student data for marketing and selling. (See table 1.) In recent years, the growth of the Internet has had a large impact on commercial activities, particularly market research, by enabling marketers to elicit aggregated and personally identifiable information directly from large numbers of students. For example, some Web filtering systems used in schools that block student access to certain Web sites also allow the company that maintains that software to measure and analyze how children use the Internet by tracking which Web sites they visit and how long they stay there. Although this information is aggregated and does not identify particular children, this information, especially when used with demographic data, can help businesses develop advertising plans that target particular audiences if districts allow the installation of the software. Also, Web sites directly elicit the participation of students in market research panels by offering them cash or prizes in exchange for information about themselves and their preferences. This makes it possible for companies to engage large-scale customized panels of students to test out marketing strategies and provide data to develop product lines and product loyalty without relying on schools. NCLBA addresses some concerns about commercial activities and student data by amending and expanding certain student data safeguards that were established in PPRA. Prior to NCLBA, PPRA generally prohibited requiring students to submit to a survey concerning certain personal issues without prior written parental consent. As amended, PPRA for the first time requires districts to develop and adopt new policies, in consultation with parents, for collecting, disclosing, and using student data for marketing or selling purposes. Districts are also required to directly notify parents of these policies and provide parents an opportunity to opt their child out of participation in such activities. Furthermore, districts are required to notify parents of specific activities involving the collection, disclosure, and use of student information for marketing or selling purposes and to provide parents with an opportunity to review the collection instruments. PPRA did not contain deadlines for districts to develop policies. Also, PPRA requires Education to annually inform each state education agency and local school districts of their new obligations under PPRA. Finally, PPRA continues to require Education to investigate, process, and adjudicate violations of the section. 
For the past 30 years, student and parent privacy rights related to students’ education records have been protected primarily under the Family Educational Rights and Privacy Act (FERPA), which was passed in 1974. FERPA protects the privacy of students’ education records by generally requiring written permission from parents before records are released. FERPA also allows districts to classify categories of information as publicly releasable directory information so long as the district has provided public notice of what will constitute directory information items and has allowed parents a reasonable period of time to advise the district that directory information pertaining to their child cannot be released without consent. Under FERPA, directory information may include a student’s name, telephone number, place and date of birth, honors and awards, and athletic statistics. Unlike PPRA, FERPA does not address the participation of students in surveys or the collection, disclosure, or use of student data for marketing or selling purposes. (See table 2.) As a result of the NCLBA amendments, Education is required to annually inform each state and local education agency of the educational agency’s obligations on PPRA and FERPA. State education laws are enacted by state legislatures and administered by each state’s department of education, which is led by the state’s chief state school officer. The Council of Chief State School Officers represents states’ education interests in Washington, D.C., and acts as a conduit of information between the federal government and the states regarding federal education laws. Each state department of education provides guidance and regulations on state education laws to each school district. School district policies are generally set by local school boards according to the authority granted to them by state legislatures. The policies are then administered by the school district’s superintendent and other school district staff. Local school boards in each state have come together to form a state school boards association. They provide a variety of services to their members including help on keeping their local school board policies current. For example, a partial list of services offered by one school board association includes policy development services, advocacy, legislative updates, legal services, executive search services, conferences and training, and business and risk management services. Since 2000, 13 states have established statutes, regulations, or both that address one or several categories of commercial activities in schools. Six of these states established provisions addressing market research by restricting the use of student data for commercial activities and for surveys. Other states passed statutes or issued regulations addressing product sales and advertising. In addition, as of February 2004, at least 25 states are considering proposed legislation that would affect commercial activities. Most of these proposals would affect product sales, particularly the sale of food and beverages. Prior to 2000, 28 states had passed provisions addressing commercial activities. At that time, most provisions addressed direct advertising and product sales. The seven districts we visited in 2000 continued to conduct a variety of commercial activities, particularly product sales, and three districts reported that they have increased the level of activities with local businesses. 
However, the types of activities in these districts have not substantially changed since our visit. Since our previous report in 2000, 13 states have enacted 15 statutory provisions and issued 3 regulatory provisions addressing one or more types of commercial activities in schools. Six states passed legislation affecting market research. Three of these 6 passed laws restricting the disclosure or use of student data for commercial purposes, and another 3 placed restrictions on students' participation in surveys. For example, an Illinois statute prohibited the disclosure of student data to businesses issuing credit or debit cards, and a New Mexico regulation prohibited the sale of student data for commercial reasons without the consent of the student's parent. Laws in Arizona, Arkansas, and Colorado prohibited student participation in surveys without the consent of their parents. Five states passed new provisions affecting product sales. In most cases, these laws targeted the sale of soft drinks and snack food. Other new provisions addressed direct and indirect advertising. Prior to 2000, 28 states had established one or more statutes or regulations that affected commercial activities in schools. Twenty-five states established provisions addressing advertising—in 19 states, measures affected direct advertising and in 6, indirect advertising. Sixteen states established provisions addressing product sales. Only 1 state established a measure that addressed market research. See appendix II for a state-by-state listing of provisions addressing commercial activities. Legislatures in 25 states have recently considered one or more bills that affect commercial activities in schools, with most having a particular focus on child nutrition. These bills are intended to improve child nutrition and reduce obesity and, to achieve this, place limitations, restrictions, or disincentives on the sale of beverages and food of limited nutritional value. Legislatures in 24 of the 25 states recently considered bills that restrict or ban the sale of beverages and food of limited nutritional value in schools. For example, a bill in New York would prohibit vending machines from selling food and drinks of minimal nutritional value. Additionally, legislatures in several states have considered bills that restrict the hours when students can buy products of limited nutritional value. For example, bills in Alaska and Ohio would restrict the sale of soft drinks during certain hours. Finally, pending legislation in Maryland would require schools to sell food of limited nutritional value at higher prices than nutritious food. Legislatures in seven states have recently proposed bills that focus on other aspects of commercial activities in schools. In three states—Connecticut, Minnesota, and North Carolina—bills would restrict the ability of schools to enter into exclusive contracts with beverage and food vendors. In two states—New Jersey and North Carolina—bills would place limits on the ability of schools to release or collect personal information about students, such as prohibiting the release of data from the student-testing program to any marketing organization without the written permission of the parent or guardian. 
Other proposed bills addressed a variety of issues, such as allowing schools to sell advertising and accept supplies bearing logos or other corporate images or requiring school boards to disclose the portion of proceeds from fundraising activities that is contributed to the school activity fund. See appendix III for a state-by- state listing of legislative proposals. In updating the site visit information we collected in 2000, we found only slight changes in commercial activities in all seven school districts. All districts reported they continued to engage in product sales and display advertising. As we found earlier, most commercial activities, particularly product sales and advertising, occurred in high schools. All the high schools we visited in 2000 still sold soft drinks, and most sold snack or fast food. To varying degrees, all displayed corporate advertising. High schools continued to report the receipt of unsolicited samples, such as toiletries, gum, razors, and candy, that they did not distribute to students. In contrast, the elementary schools we contacted did not sell carbonated soft drinks to students or display corporate advertising. Grocery and department store rebate programs continued to operate in almost all schools, but coupon redemption programs were largely an elementary school enterprise. As we found before, none of the districts reported using corporate-sponsored educational materials or engaging in market research for commercial purposes. Officials did report some changes in commercial activities. Three of these districts reported stronger ties with local businesses, and three schools in two districts reported they now sell healthier soft drinks. One district reported a new relationship with a computer firm headquartered in its area that provided tutors as well as cash donations to schools in the district. Under this relationship, company employees tutored students who were at risk of failing, and the company donated $20 to schools for each 10 hours of tutoring that its employees provided. A principal in this district reported that many students in her school benefit substantially from this relationship and her school earned between $6,000 and $9,000 per year in donations. Another district reported it had entered into a new contract with a local advertising agency to raise revenue to renovate sport concession stands, and a third had organized a new effort to sell advertisements to fund construction on the district’s baseball field. Three principals told us that vending machines in their schools now offer a different mix of beverages—for example, more juice, milk, and water and fewer carbonated beverages—than they did when we visited in 2000. We estimate that about two-thirds of the districts in the nation were either developing or had developed policies addressing the new provisions on the use of student data for commercial purposes. However, only 19 of the 61 districts that provided us copies of their policies specifically addressed these provisions. Very few school districts reported releasing student data for marketing and selling, and all these releases were for student-related purposes. Of the seven districts we visited in 2000, three adopted new policies on the use of student data since our visit, and only one released data and that was for graduation pictures. Although districts reported they had developed policies, many of the policies they sent us did not fully address PPRA requirements. 
On the basis of the results of our surveys, we estimate about a third of districts were developing policies regarding the use of student data for commercial purposes; another third had developed policies; and about another third had not yet developed policies. However, when we analyzed policies that 61 districts sent to us, we found only 19 had policies that specifically addressed marketing and selling of student information. Of these, 11 policies addressed the collection, release, and use of student information for commercial purposes. Eight policies partially addressed the provisions by prohibiting the release of student data for these purposes. Policies in the 42 remaining districts did not address the new PPRA provisions. Many of these districts provided us policies concerning FERPA requirements. We telephoned all districts in our sample that reported they release data for commercial purposes and a subsample of districts that reported they had not. Of the 17 districts that released data for commercial purposes, all reported that they released data only for school-related purposes. For example, all 17 released students’ names to photographers for graduation or class pictures. Two of these districts also released student data to vendors who supplied graduation announcements, class rings, and other graduation-related products, and another two districts released student information to parent-teacher organization officials who produced school directories that they sold to students’ parents. Of the 16 districts that reported they did not release student data, one actually did release student data. As in the other cases, that district released it to a school photographer. Of the seven districts we visited in 2000, three adopted new policies on the use of student data. One of the districts we visited adopted new policies that incorporated PPRA provisions on the use of student data for commercial purposes. Two adopted policies with blanket prohibitions against some uses of student data for marketing and selling. In one of these districts, policies prohibited the release of students’ data for any survey, marketing activity, or solicitation, and policies in the other banned the use of students to support any commercial activity. Officials in all seven districts reported that their district did not collect student data for marketing or selling purposes, and several expressed surprise or disbelief that this practice did in fact occur. However, a high school in one district reported that it disclosed information on seniors to vendors selected by the district to sell senior pictures, school rings, and graduation announcements. As required by NCLBA, Education has developed guidance and notified every school district superintendent and chief state school officer in the country of the new required student information protections and policies, and has charged the Family Policy Compliance Office to hear complaints on PPRA. Education issued guidance about the collection, disclosure, and use of student data for commercial purposes as part of its general guidance on FERPA and PPRA in 2003 and 2004. In addition, although not required by statute to do so, Education provided superintendents with model notification information that districts could use to inform parents of their rights, included information about PPRA in some of its training activities, and posted its guidance and other PPRA-related material prominently on its Web site. 
Education has charged its Family Policy Compliance Office to hear complaints and otherwise help districts implement the new student data requirements. Although the office has received some complaints about other provisions related to student privacy, as of June 2004, officials from that office reported they have received no complaints regarding the commercial uses of student data. Many districts did not appear to understand the new requirements, as shown by our analysis of the 61 policies sent to us by districts in our sample. Although we asked districts to send us their policies that addressed these new provisions, only 11 districts sent policies that addressed the provisions comprehensively, and 8 sent policies that covered them partially. The 42 remaining districts sent policies that did not contain specific language addressing the collection, release, or use of student data for commercial purposes, although districts sent them to us as documentation that the districts had developed such policies. Most of these policies contained only general prohibitions about the release of student records and concerned FERPA. Although Education is not required to issue its guidance to state school boards associations, four districts in two states in our survey offered unsolicited information that they relied on state school boards associations to develop policies for their consideration and adoption. Two districts in a third state that sent us policies used policies developed by their state school boards association to address commercial activities in schools. However, Education did not distribute its guidance to these associations. Although state laws both limit and support commercial activities in schools, many state legislatures have chosen to pass laws addressing only specific activities, such as permitting or restricting advertising on school buses. In addition, many states have not enacted legislation concerning commercial activities or have passed the authority to regulate these activities to local districts, thus allowing district school boards, superintendents, or principals to determine the nature and extent of commercial activities at the local level. Not only do commercial activities—product sales, direct advertising, indirect advertising, and market research—encompass a broad spectrum of activities, but also the levels of these activities and the levels of controversy attached to them vary substantially. For example, few would equate selling advertisements for a high school football program with selling the naming rights to a school, although both are examples of direct advertising. Because of these differences, as well as philosophical differences among districts and communities, it is probably not surprising that state legislatures have taken various approaches toward the regulation of commercial activities. Perhaps because providing student information for commercial purposes may have serious implications, few districts do so. In fact, some school officials said they were skeptical that schools would allow the use of student data for this purpose. In the past, marketers may have approached schools to survey students about commercial products or services. Today, however, technology, particularly the proliferation and availability of the Internet, provides marketers with quick and inexpensive access to very large numbers of children without involving the cooperation of schools. 
As Internet users, children often submit information about themselves and their personal product preferences in exchange for cash or prizes. Because of the disinclination of school officials to sell student data and the ability of marketers to get data directly from students without involving schools, it may be understandable that relatively few districts as yet have actually adopted policies that specifically address the selling and marketing provisions of PPRA. On the other hand, few would argue against the need to protect students’ personal information. Many businesses, particularly local businesses catering to youth markets, might still profit from acquiring student information from schools. Although we found districts did not use student data for purposes generally viewed as offensive, this does not mean such use would not happen in the future in the absence of safeguards. It appears that some superintendents may not be aware of the new PPRA requirements or have not understood Education’s guidance because many thought their district’s policies reflected the latest federal requirements on use of student data when, in fact, they did not. Also, several districts told us that they relied on state school boards associations to develop policies. Unlike models or guidance that reflect only federal law, policies developed by these groups may be most useful to districts because they correspond to both federal and state requirements. These associations are not on Education’s guidance dissemination list. We recommend that the Secretary of Education take additional action to assist districts in understanding that they are required to have specific policies in place for the collection, disclosure, and use of student information for marketing and selling purposes by disseminating its guidance to state school boards associations. We provided a draft of this report to the Department of Education for review and comment. Education concurred with our recommendation. Education’s comments are reproduced in appendix V. Education also provided technical comments, which were incorporated as appropriate. Unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Education, appropriate congressional committees, and others who are interested. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, or wish to discuss this material further, please call me on (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix VI. To conduct our work, we reviewed state statutes, regulations, and proposed legislation; received mail questionnaires from 219 school districts selected on the basis of a national stratified probability sample design; conducted additional brief telephone interviews with 36 of these districts; analyzed policies voluntarily provided by 61 districts; interviewed officials at Education; and examined guidance issued by the department. In addition, we conducted telephone interviews with district and school officials in the 7 districts that we visited in 2000 for our previous study on commercial activities in schools to update our previous findings. 
To update our compilation of state statutes and regulation contained in our 2000 report, we researched legal databases, including Westlaw and Lexis, to identify laws passed between January 2000 and May 2004. To identify pending laws, we researched information available on databases maintained by state legislatures or followed links provided by these databases to identify bills introduced between January 2003 and February 2004. However, there are inherent limitations in any global legal search, particularly when—as is the case here—different states use different terms or classifications to refer to commercial activities in schools. We selected a national probability sample of districts, taken from school districts contained in the Department of Education’s Common Core of Data (CCD) Local Education Agency (LEA) file for the 2000-2001 school year. After removing districts from this list that were administered by state or federal authorities, we identified a population of 14,553 school districts. In the course of our study, we learned that some special education and other units in this list do not have legal authority to establish formal policies. As a result, we estimate that our study population consists of 13,866 districts in the 50 states and the District of Columbia. The sample design for the survey consisted of a stratified random probability sample design: 271 districts were drawn from the three strata shown in table 3. The strata were designed to draw relatively large numbers of districts from states likely to include districts that had engaged in or planned to engage in one or more specific activities involving the collection, disclosure, or use of student information for the purposes of marketing or selling or providing information to others for these purposes. Because we thought the activities of interest were low incidence activities, we wanted to maximize our ability to examine situations involving the use of student data for commercial purposes. The expected high-activity strata were defined as states that we identified as having laws that permitted commercial activities when we performed our work in 2000. As shown in table 4, the response rates were 76 percent, 83 percent, and 88 percent in the three sampling strata. The overall estimated response rate was 87 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample might have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 9 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. The practical difficulties of conducting any survey introduce nonsampling errors. For example, errors could be made in keying questionnaire data, some people may be more likely than others to respond, or questions may be misinterpreted. To minimize data-handling errors, data entry and programs were independently verified. To reduce the possibility of misinterpreting questions, we pretested the questionnaire in four districts. The full questionnaire is reproduced in appendix IV. 
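To make the sampling arithmetic concrete, the following sketch is a minimal illustration, not GAO's actual computation; the stratum sizes, response counts, and "yes" counts are hypothetical placeholders. It shows how a stratum-weighted proportion and its 95 percent confidence interval can be calculated for a stratified random sample like the one described above, using a normal approximation with a finite-population correction.

# Minimal illustration with hypothetical numbers, not the survey's actual data:
# estimate a population proportion from a stratified random sample and attach
# a 95 percent confidence interval (normal approximation with a
# finite-population correction for each stratum).
import math

# (districts in stratum, completed responses, responses answering "yes")
strata = [
    (2000, 90, 30),   # hypothetical high-activity stratum
    (3000, 80, 25),   # hypothetical high-activity stratum
    (9000, 60, 15),   # hypothetical remainder stratum
]

N = sum(pop for pop, _, _ in strata)   # total districts in the study population
estimate = 0.0
variance = 0.0
for pop, n, yes in strata:
    weight = pop / N                   # stratum's share of the population
    p = yes / n                        # stratum sample proportion
    estimate += weight * p
    # stratum contribution to the variance of the weighted estimate
    variance += (weight ** 2) * (1 - n / pop) * p * (1 - p) / (n - 1)

margin = 1.96 * math.sqrt(variance)    # half-width of the 95 percent interval
print(f"weighted estimate: {estimate:.1%}, 95% confidence interval: +/- {margin:.1%}")

With inputs of roughly this size, the half-width works out to several percentage points, which is the order of precision the report describes (for example, plus or minus 9 percentage points).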
We took additional steps to check answers for a subsample of respondents because of concerns about misinterpretation. We were concerned about possible misinterpretation of a question about implementing the law (question 1) because we discovered during our pretest that there was confusion between a narrow and (probably) little used portion of a student privacy law (PPRA) regarding selling and marketing of student data and a more familiar older law (FERPA) concerning student records. We were also concerned that there could be an underreporting of commercial activities because our question did not specify the activities (“During school year 2003-2004, did your LEA, or any school in your LEA, engage or plan to engage in one or more activity regarding the collection, disclosure, or use of student information for the purposes of marketing or selling or providing the information to others for these purposes?”) and because very few schools reported using student information for commercial purposes. Further, our pretesting indicated a potential problem with respondents suggesting they would be hesitant to report commercial activities. Moreover, because 86 percent of our respondents answered “no” to question 2, a higher rate than expected, there was some question in our minds whether our respondents who answered no had really considered all the possible ways in which student information could be used for commercial purposes when formulating their answers. We attempted to verify the answers to our question about commercial practices by telephoning 36 districts, the 20 districts that reported using student data for commercial purposes, and the 16 randomly selected districts from among the districts that reported not using student data for commercial purposes. Three of the 20 districts originally reporting the use of student data were found not to be using the data for commercial purposes because they were supplying the data to military organizations or for scholarships, allowed uses. For the 16 sampled districts reporting not using students’ data for commercial purposes, we asked the superintendent or other knowledgeable person in the district detailed questions about 20 possible commercial activities. We asked whether the school provides student addresses or phone numbers for specific activities (student pictures, letter jackets, any types of school uniforms, yearbooks, class rings, tuxedo rentals for prom, corsages for prom, musical instrument rentals, caps and gowns for commencement, preparation for Scholastic Aptitude Test or other tests, any other type of tutoring, transportation for school field trips, or travel for other trips such as spring break or ski trips), or to outside organizations for students to serve on an Internet or study panel, answer questionnaires, test or try out a product, or receive mailings from or talk with representatives of outside organizations who are selling services or products. One of these 16 districts was found to be using the data for commercial purposes in connection with photographers for school pictures. As a result of these telephone calls, we corrected the three incorrect records but did not make further adjustments for districts that were not contacted. We attempted to verify the answers about the development of policies by examining policies that were voluntarily submitted by districts in our sample. Sixty-one districts complied with our request to submit a copy of their policy. 
We analyzed these policies to determine if they specifically referred to marketing or commercial activities. We found 11 districts submitted policies that addressed these provisions and an additional 8 submitted policies that partially addressed the provisions in that they prohibited the release of student data for commercial purposes but did not address the collection or use of such data. Therefore, our questionnaire gathered relevant data about districts' perceptions of the extent to which they thought they were implementing the provision, rather than the extent to which these policies actually did so. This probably reflects confusion in interpreting the provision or lack of awareness of its existence. We interviewed officials at Education's Family Policy Compliance Office and the Office of the General Counsel. We examined in detail the guidance issued by the department to assist schools in implementing PPRA provisions on use of student data for marketing and selling purposes. We conducted telephone interviews with district and school officials in the seven districts we visited to collect information for Public Education: Commercial Activities in Schools (GAO-00-156), a report we issued in 2000, to discern changes in commercial activities in these districts. (See table 5.) We selected these districts because they engaged in a variety of commercial activities, served diverse populations—ranging from large numbers of poor students to children from affluent families—and varied in terms of geography and urbanicity. In updating that information for this report, we interviewed district-level officials, including superintendents and business managers, and elementary, middle school, and high school principals. This appendix lists state statutory and regulatory provisions relating to commercial activities as of May 2004. Shaded entries were enacted since 2000. This updated table identifies 15 new statutes and 3 new regulations and also includes state laws pertaining to the sale of competitive food in schools. In addition, the table includes several laws that were not identified in our previous report. (Individual table entries describe specific provisions; for example, one requires a school committee to establish a travel policy that addresses expectations for fundraising by students.) This appendix lists proposals addressing commercial activities in schools that have been introduced by some state legislatures between January 2003 and February 2004. The data are taken from the Web sites maintained by state legislatures. Many of these sites are revised only periodically, and information on some is limited to the current legislative session. Therefore, this information should be viewed as a rough snapshot, rather than a comprehensive analysis. (Examples of the table's entries include proposals that would establish nutrition standards for beverages sold to students in public schools, require food and drink items in public school vending machines to comply with board of education standards, and prohibit candy from being dispensed to students by school vending machines.) In addition to those names above, the following people made significant contributions to this report: Susan Bernstein, Carolyn Blocker, Richard Burkard, Jim Fields, Behn Kelly, James Rebbe, and Jay Smale.
Congress has continuing interest in commercial activities in U.S. public schools. These include product sales, advertising, market research, and the commercial use of personal data about students (such as names, addresses, and telephone numbers) by schools. To update information about commercial activities in schools, Congress asked us to answer the following questions: (1) Since 2000, what statutes and regulations have states enacted and proposed to govern commercial activities in schools? (2) To what extent have districts developed policies implementing amended provisions of the Protection of Pupil Rights Amendment (PPRA) in the No Child Left Behind Act on the use of student data for commercial purposes? (3) What guidance has the Department of Education (Education) disseminated? To answer these questions, we researched state laws, surveyed a national sample of school districts, analyzed policies provided by districts, interviewed officials at Education, and examined its guidance. In addition, we updated findings from the districts we visited in 2000. Since we reported on commercial activities in 2000, 13 states have established laws addressing commercial activities in public schools, and at least 25 states are considering such legislation. Of the states establishing new laws, 6 established laws affecting market research by addressing the use of student data for commercial activities. Almost all of the proposed bills target the sale of food and beverages. Prior to 2000, 28 states established laws addressing commercial activities, particularly product sales and advertising. At that time, only 1 state passed a provision affecting market research. PPRA provisions required districts to implement policies on the collection, disclosure, or use of student data for marketing and selling purposes, and we estimate that about two-thirds of the districts in the nation believe they are developing or have developed such policies. However, of the 61 districts that sent us policies, only 19 policies addressed these issues. No district reported having collected student data for commercial purposes. Only a few reported disclosing student information for these purposes, and all had done so for school-related purposes such as graduation pictures. Education has undertaken several activities, such as sending guidance to state education agencies and school district superintendents and posting information on its Web page, to inform districts about the student information provisions of PPRA, but many districts appear not to understand the new requirements. Some districts told us that they relied on their state school boards association to develop policies for them because state school boards associations address federal and state laws. School districts in one state sent us policies that addressed commercial activities that had been developed by their state school boards association. Education was not required to disseminate guidance to associations of local school boards in each state and has not done so.
You are an expert at summarizing long articles. Proceed to summarize the following text: FAA has authority to authorize all UAS operations in the national airspace—military; public (academic institutions and federal, state, and local governments including law enforcement organizations); and civil (non-government including commercial). Currently, since a final rulemaking is not completed, FAA only allows UAS access to the national airspace on a case-by-case basis. FAA provides access to the airspace through three different means: Certificates of Waiver or Authorization (COA): Public entities, including FAA-designated test sites, may apply for a COA. A COA is an authorization, generally for up to 2 years, issued by the FAA to a public operator for a specific UAS activity. Between January 1, 2014, and March 19, 2015, FAA had approved 674 public COAs. Special Airworthiness Certificates in the Experimental Category (Experimental Certificate): Civil entities, including commercial interests, may apply for experimental certificates, which may be used for research and development, training, or demonstrations by manufacturers. Section 333 exemptions: Since September 2014, commercial entities may apply to FAA for exemptions issued under section 333 of the 2012 Act, Special Rules for Certain Unmanned Aircraft Systems. Section 333 requires the Secretary of Transportation to determine whether certain UASs may operate safely in the national airspace system prior to the completion of UAS rulemakings. FAA has granted such exemptions to 48 of 684 total applications (7 percent) from companies or other entities applying under section 333. These companies may apply to fly at their own designated sites or the test sites. While limited operations continue through these means of FAA approval, FAA has been planning for further integration. In response to requirements of the 2012 Act, FAA issued the UAS Comprehensive Plan and the UAS Integration Roadmap, which broadly map the responsibilities and plans for the introduction of UAS into the national airspace system. These plans provide a broad framework to guide UAS integration efforts. The UAS Comprehensive Plan described the overarching interagency goals and approach and identified six high-level strategic goals for integrating UAS into the national airspace. The FAA Roadmap identified a broad three-phase approach to FAA's UAS integration plans—Accommodation, Integration, and Evolution—with associated priorities for each phase that provide additional insight into how FAA plans to integrate UAS into the national airspace system. This phased approach has been supported by both academics and industry. FAA plans to use this approach to facilitate further incremental steps toward its goal of seamlessly integrating UAS flight into the national airspace. Accommodation phase: According to the Roadmap, in the accommodation phase, FAA will apply special mitigations and procedures to safely facilitate limited UAS access to the national airspace system in the near-term. Accommodation is to predominate in the near-term with appropriate restrictions and constraints to mitigate any performance shortfalls. UAS operations in the national airspace system are considered on a case-by-case basis. During the near-term, R&D is to continue to identify challenges, validate advanced mitigation strategies, and explore opportunities to progress UAS integration into the national airspace system. 
Integration phase: The primary objective of the integration phase is establishing performance requirements for UAS that would increase access to the NAS. During the mid- to far-term, FAA is to establish new or revised regulations, policies, procedures, guidance material, training, and understanding of systems and operations to support routine NAS operations. FAA plans for the integration phase to begin in the near- to mid-term with the implementation of the small UAS rule and is to expand the phase further over time (mid- and far-term) to consider wider integration of a broader field of UASs. Evolution phase: In the evolution phase, FAA is to work to routinely update all required policy, regulations, procedures, guidance material, technologies, and training to support UAS operations in the NAS operational environment as it evolves over time. According to the Roadmap, it is important that the UAS community maintain the understanding that the NAS environment is not static and that many improvements are planned for the NAS over the next 13 to 15 years. To avoid obsolescence, UAS developers are to maintain a dual focus: integration into today's NAS while maintaining cognizance of how the NAS is evolving. In February 2015, FAA issued a Notice of Proposed Rulemaking (NPRM) for the operations of small UASs—those weighing less than 55 pounds—that could, once finalized, allow greater access to the national airspace. To mitigate risk, the proposed rule would limit small UASs to daylight-only operations, confined areas of operation, and visual-line-of-sight operations. FAA's release of this proposed rule for small UAS operations started the process of addressing additional requirements of the 2012 Act. See table 1 for a summary of the rule's major provisions. FAA has also met additional requirements outlined in the 2012 Act pertaining to the creation of UAS test sites. In December 2013, FAA selected six UAS test ranges. According to FAA, these sites were chosen based on a number of factors, including geography, climate, airspace use, and a proposed research portfolio that was part of the application. All UAS operations at a test site must be authorized by FAA through either the use of a COA or an experimental certificate. In addition, there is no funding from FAA to support the test sites. Thus, these sites rely upon revenue generated from entities, such as those in the UAS industry, using the sites for UAS flights. Foreign countries are also experiencing an increase in UAS use, and some have begun to allow commercial entities to fly UASs under limited circumstances. According to industry stakeholders, easier access to testing in these countries' airspace has drawn the attention of some U.S. companies that wish to test their UASs without needing to adhere to FAA's administrative requirements for flying UASs at one of the domestically located test sites or obtaining an FAA COA. It has also led at least one test site to partner with a foreign country where, according to the test site operator, UAS test flights can be approved in 10 days. Since being named in December 2013, the six designated test sites have become operational, applying for and receiving authorization from FAA to conduct test flights. From April 2014 through August 2014, as we were conducting our ongoing work, each of the six test sites became operational and signed an Other Transaction Agreement with FAA. 
All flights at a test site must be authorized under either a COA or an experimental certificate approved by FAA. From the time they became operational in 2014 through March 2015, five of the six test sites received 48 COAs and one experimental certificate in support of UAS operations, resulting in over 195 UAS flights across the five test sites. These flights provide operations and safety data to FAA in support of UAS integration. While there are only a few contracts with industry thus far, according to test site operators, these are important if the test sites are to remain operational. Table 2 provides an overview of test-site activity since the sites became operational. FAA officials and some test sites told us that progress has been made in part because of FAA's and the sites' efforts to work together. Test site officials meet every two weeks with FAA officials to discuss current issues, challenges, and progress. According to meeting minutes, these meetings have been used to discuss many issues, from training for designated airworthiness representatives to processing of COAs. In addition, test sites have developed operational and safety processes that have been reviewed by FAA. Thus, while FAA has no funding directed to the test sites to specifically support research and development activities, FAA dedicates time and resources to supporting the test sites, and FAA staff we spoke to believe test sites are a benefit to the integration process and worth this investment. According to FAA, its role is to ensure each test site sets up a safe testing environment and to provide oversight that guarantees each test site operates under strict safety standards. FAA views the test sites as a location for industry to safely access the airspace. FAA told us it expects to collect data obtained from the users of the test ranges that will contribute to the continued development of standards for the safe and routine integration of UASs. The Other Transaction Agreement between FAA and the test sites defines the purpose of the test sites as research and testing in support of safe UAS integration into the national airspace. As FAA and the test sites work together to define the role of the test sites and ensure that both are effectively supporting each other and the goal of the test sites, we will continue to examine this progress and will report our final results late this year. As part of our ongoing work, we identified a number of countries that allow commercial UAS operations and have done so for years. In Canada and Australia, regulations pertaining to UAS have been in place since 1996 and 2002, respectively. According to a MITRE study, the types of commercial operations allowed vary by country. For example, as of December 2014, Australia had issued over 180 UAS operating certificates to businesses engaged in aerial surveying, photography, and other lines of business. In Japan, the agriculture industry has used UASs to apply fertilizer and pesticide for over 10 years. Furthermore, several European countries have granted operating licenses to more than 1,000 operators to use UASs for safety inspections of infrastructure, such as rail tracks, or to support the agriculture industry. The MITRE study reported that the speed of change can vary based on a number of factors, including the complexity and size of the airspace and the supporting infrastructure. 
In addition, according to FAA, the legal and regulatory structures are different and may allow easier access to the airspace in other countries for UAS operations. While UAS commercial operations can occur in some countries, there are restrictions controlling their use. We studied the UAS regulations of Australia, Canada, France, and the United Kingdom and found these countries impose similar types of requirements and restrictions on commercial UAS operations. For example, all these countries except Canada require government-issued certification documents before UASs can operate commercially. In November 2014, Canada issued new rules creating exemptions for commercial use of small UASs weighing 4.4 pounds or less and from 4.4 pounds to 55 pounds. UASs in these categories can operate commercially without a government-issued certification but must still follow operational restrictions, such as a height restriction and a requirement to operate within line of sight. Transport Canada officials told us this arrangement allows them to use scarce resources to regulate situations of relatively high risk. In addition, each country requires that UAS operators document how they ensure safety during flights, and their UAS regulations go into significant detail on subjects such as remote pilot training and licensing requirements. For example, the United Kingdom has established "national qualified entities" that conduct assessments of operators and make recommendations to the Civil Aviation Authority as to whether to approve that operator. If UASs were to begin flying today in the national airspace system under the provisions of FAA's proposed rules, their operating restrictions would be similar to regulations in these other four countries. However, there would be some differences in the details. For example, FAA proposes altitude restrictions of below 500 feet, while Australia, Canada, and the United Kingdom restrict operations to similar altitudes. Other proposed regulations require that FAA certify UAS pilots prior to commencing operations, while Canada and France do not require pilot certification. Table 3 shows how FAA's proposed rules compare with the regulations of Australia, Canada, France, and the United Kingdom. While regulations in these countries require that UAS operations remain within the pilot's visual line of sight, some countries are moving toward allowing limited operations beyond the pilot's visual line of sight. For example, according to Australian civil aviation officials, they are developing a new UAS regulation that would allow operators to request a certificate allowing beyond line-of-sight operations. However, use would be very limited and allowed only on a case-by-case basis. Similarly, according to a French civil aviation official, France approves, on a case-by-case basis, very limited beyond line-of-sight operations. Finally, in the United States, there have been beyond line-of-sight operations in the Arctic, and NASA, FAA, and industry have successfully demonstrated detect-and-avoid technology, which is necessary for beyond line-of-sight operations. In March 2015, the European Aviation Safety Agency (EASA) issued a proposal for UAS regulations that creates three categories of UAS operations—open, specific, and certified. Generally, the open category would not require authorization from an aviation authority but would have basic restrictions, including altitude and distance from people. 
The specific category would require a risk assessment of the proposed operation and an approval to operate under restrictions specific to the operation. The final proposed category, certified operations, would be required for those higher-risk operations, specifically when the risk rises to a level comparable to manned operations. This category goes beyond FAA's proposed rules by proposing regulations for large UAS operations and operations beyond the pilot's visual line-of-sight. As other countries work toward integration, standards organizations from Europe and the United States are coordinating to try to ensure harmonized standards. Specifically, RTCA and the European Organization for Civil Aviation Equipment (EUROCAE) have joint committees focused on harmonization of UAS standards. We found during our ongoing work that FAA faces some critical steps to keep the UAS integration process moving forward, as described below: Issue final rule for small UASs: As we previously discussed, the NPRM for small UAS was issued in February 2015. However, FAA plans to process comments it receives on the NPRM and then issue a final rule for small UAS operations. FAA told us that it is expecting to receive tens of thousands of comments on the NPRM. Responding to these comments could extend the time to issue a final rule. According to FAA, its goal is to issue the final rule 16 months after the NPRM, but it may take longer. If this goal is met, the final rule would be issued in late 2016 or early 2017, about 2 years after the 2012 Act required. FAA officials told us that the agency has taken a number of steps to develop a framework to efficiently process the comments it expects to receive. Specifically, the officials said that FAA has a team of employees assigned to lead the effort, with contractor support to track and categorize the comments as soon as they are received. According to FAA officials, the challenge of addressing comments could be somewhat mitigated if industry groups consolidated comments, thus reducing the total number of comments that FAA must address. Implementation plan: The Comprehensive Plan and Roadmap provide broad plans for integration, but some have pointed out that FAA needs a detailed implementation plan to predict with any certainty when full integration will occur and what resources will be needed. The UAS Aviation Rulemaking Committee developed a detailed implementation plan to help FAA and others focus on the tasks needed to integrate UAS into the national airspace, recognizing the need for a plan that would identify the means, necessary resources, and schedule to safely and expeditiously integrate civil UASs into the national airspace. The proposed implementation plan contains several hundred tasks and other activities needed to complete the UAS integration process. FAA stated it used this proposed plan and the associated tasks and activities when developing its Roadmap. However, unlike the Roadmap, an implementation plan would include specific resources and time frames to meet the near-term goals that FAA has outlined in its Roadmap. An internal FAA report from August 2014 discussed the importance of incremental expansion of UAS operations. While this report did not specifically propose an implementation plan, it suggested that for each incremental expansion of operations, FAA identify the necessary tasks, responsibilities, resources, and expected time frames. Thus, the internal report suggested FAA develop plans to account for all the key components of an implementation plan. 
The Department of Transportation's Inspector General issued a report in June 2014 that contained a recommendation that FAA develop such a plan. The FAA mentioned concerns regarding the augmentation of appropriations and limitations on accepting voluntary services. As a general proposition, an agency may not augment its appropriations from outside sources without specific statutory authority. The Antideficiency Act prohibits federal officers and employees from, among other things, accepting voluntary services except for emergencies involving the safety of human life or the protection of property. 31 U.S.C. § 1342. Operations conducted by the test sites must have a COA, and FAA requires the test sites to provide safety and operations data collected for each flight. Test site operators have told us incentives are needed to encourage greater UAS operations at the test sites. The operators explained that industry has been reluctant to operate at the test sites because, under the current COA process, a UAS operator has to lease its UAS to the test site, thus potentially exposing proprietary technology. With a special airworthiness certificate in the experimental category, the UAS operator would not have to lease its UAS to the test site, therefore protecting any proprietary technology. FAA is, however, working on providing additional flexibility to the test sites to encourage greater use by industry. Specifically, FAA is willing to train designated airworthiness representatives for each test site. These individuals could then approve UASs for a special airworthiness certificate in the experimental category for operation at a test site. As previously indicated, three test sites had designated airworthiness representatives aligned with the test site, but only one experimental certificate had been approved. More broadly, we were told that FAA could do more to make the test sites accessible. According to FAA and some test site operators, FAA is working on creating a broad area COA that would allow easier access to the test site's airspace for research and development. Such a COA would allow the test sites to conduct the airworthiness certification, typically performed by FAA, and then allow access to the test site's airspace. As previously stated, one test site received 4 broad area COAs that were aircraft-specific. Officials from test sites we spoke with during our ongoing work were seeking broad area COAs that were aircraft "agnostic"—meaning any aircraft could operate under the authority of that COA. According to FAA officials, in an effort to make test sites more accessible, they are working to expand the number of test ranges associated with the test sites, but not increasing the number of test sites. Currently, test sites have ranges in 14 states. Public education program: UAS industry stakeholders and FAA have begun an educational campaign that provides prospective users with information and guidance on flying safely and responsibly. The public education campaign on allowed and safe UAS operations in the national airspace may ease public concerns about privacy and support a safer national airspace in the future. UASs operating without FAA approval or model aircraft operating outside of the safety code established by the Academy of Model Aeronautics potentially present a danger to others operating in the national airspace. To address these safety issues, FAA has teamed up with industry to increase public awareness and inform those wishing to operate UASs how to do so safely. 
For example, three UAS industry stakeholders and FAA teamed up to launch an informational website for UAS operators. UASs are increasingly available online and on store shelves. Prospective operators—from consumers to businesses—want to fly and fly safely, but many do not realize that just because you can easily acquire a UAS does not mean you can fly it anywhere, or for any purpose. "Know Before You Fly" is an educational campaign that provides prospective users with information and guidance on flying safely and responsibly (see table 4). UAS and air traffic management: As FAA and others continue to address the challenges to UAS integration, they are confronted with accounting for expected changes to the operations of the national airspace system as a part of the Next Generation Air Transportation System (NextGen). FAA has stated that the safe integration of UAS into the national airspace will be facilitated by new technologies being deployed. However, according to one stakeholder, UASs present a number of challenges that the existing national airspace is not set up to accommodate. For example, unlike manned aircraft, UASs that currently operate under COAs do not typically follow a civil aircraft flight plan where an aircraft takes off, flies to a destination, and then lands. Such flights require special accommodation by air-traffic controllers. Additionally, the air-traffic-control system uses navigational waypoints for manned aircraft, while UASs use Global Positioning System coordinates. Finally, if a UAS loses contact with its ground-control station, the air traffic controller might not know what the UAS will do to recover and how that may affect other aircraft in the vicinity. NextGen technologies, according to FAA, are continually being developed, tested, and deployed at the FAA Technical Center, and FAA officials are working closely with MITRE to leverage all available technology for UAS integration. Chairman Ayotte, Ranking Member Cantwell, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Brandon Haller, Assistant Director; Daniel Hoy; Eric Hudson; Bonnie Pignatiello Leer; and Amy Rosewarne. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
UAS—often called drones—are aircraft that do not carry a pilot but instead operate on pre-programmed routes or are manually controlled. Currently, UAS only operate in the United States with FAA approval on a case-by-case basis. However, in the absence of regulations, unauthorized UAS operations have, in some instances, compromised safety. The FAA Modernization and Reform Act of 2012 emphasized the need to integrate UAS into the national airspace by requiring that FAA establish requirements governing them. In response, FAA has taken a number of steps, most notably, issuing an NPRM for small UAS operations, and designating six UAS test sites which became operational in 2014 and have begun to conduct test flights. Other countries have started to integrate UAS as well, and many currently allow commercial operations. This testimony provides preliminary observations on 1) status of FAA's test sites, 2) how other countries have progressed integrating UAS for commercial purposes, and 3) critical steps for FAA going forward. This testimony is based on GAO's ongoing study examining issues related to UAS integration into the national airspace system for UAS operations. To conduct this work, GAO reviewed documents and met with officials from test sites, FAA, and industry stakeholders. Since becoming operational in 2014, the Federal Aviation Administration's (FAA) unmanned aerial systems (UAS) test sites have conducted over 195 flights across five of the six test sites. These flights provide operations and safety data that FAA can use in support of integrating UAS into the national airspace. FAA has not provided funding to the test sites in support of research and development activities but has provided staff time through, for example bi-weekly meetings to discuss ongoing issues with test site officials. FAA staff said that the sites are a benefit to the integration process and worth this investment. GAO's preliminary observations found that other countries have progressed toward UAS integration and allow commercial use. GAO studied the UAS regulations in Australia, Canada, France, and the United Kingdom and found these countries have similar rules and restrictions on commercial UAS operations, such as allowing line of sight operations only. In November 2014, Canada issued new rules creating exemptions for UAS operations based on size and relative risk. In addition, as of December 2014, Australia had issued over 180 UAS operating certificates to businesses engaged in aerial surveying, photography, and other lines of business. Under the provisions of FAA's proposed rules, operating restrictions would be similar to regulations in these other four countries. For example, all countries have UAS altitude restrictions of 500 feet or below.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Results Act requires annual performance plans to cover each program activity set out in the agencies’ budgets. The act requires the plans to (1) establish performance goals to define the level of performance to be achieved by a program activity; (2) express such goals in an objective, quantifiable, and measurable form; (3) briefly describe the strategies and resources required to meet performance goals; (4) establish performance indicators to be used in measuring or assessing the relevant outputs, service levels, and outcomes of each program activity; (5) provide a basis for comparing actual results with the performance goals; and (6) describe the means to verify and validate information used to report on performance. DOT submitted to the Congress performance plans for fiscal years 1999 and 2000. DOT’s performance plan provides a clear statement of the performance goals and measures that address program results. Program goals and measures are expressed in a quantifiable and measurable manner and define the levels of performance. However, the plan could be improved by consistently linking the performance goals and strategic outcomes and consistently describing interagency coordination for crosscutting programs and the Department’s contribution to these programs. In addition, the plan could be improved by consistently describing how the management challenges facing the Department will be addressed, including how the Department will address certain financial management challenges identified by its OIG. DOT’s plan includes performance goals and measures that address program results and the important dimensions of program performance. The goals and measures define the level of performance and activities for specific programs. For example, the performance goal for reducing recreational boating fatalities from 819 in fiscal year 1997 to 720 or fewer in fiscal year 2000 will be accomplished by the core activities of several U.S. Coast Guard programs—boating safety grants provided to the states, regulations developed by the Recreational Boating Safety program, and boat inspections conducted by the Coast Guard auxiliary. The plan’s goals and measures are objective, quantifiable, and measurable. For all except a few performance goals, DOT’s plan includes projected target levels of performance for fiscal year 2000; for several goals, the plan includes multiyear targets. For goals that have no targets, an appendix to the plan explains why a target was not included. For nearly all of the goals and measures, the plan includes graphs that show baseline and trend data as well as the targets for fiscal years 1999 and 2000. The graphs clearly indicate trends and provide a basis for comparing actual program results with the established performance goals. For example, the performance goal for hazardous materials incidents has a graph that shows the number of serious hazardous materials incidents in transportation from 1985 through 1997. The graph also includes target levels for fiscal years 1999 and 2000 so a reader can conclude that this goal is not new in the fiscal year 2000 plan. If only a fiscal year 2000 target is indicated on a graph, the reader can assume that this is a new goal; however, this point is not explicit. The plan could be improved by indicating new goals that do not have a counterpart in the previous version. 
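As a worked illustration of how a quantifiable goal supports comparison of actual results with targets, the short sketch below uses the recreational boating fatalities goal cited above (819 fatalities in fiscal year 1997, with a target of 720 or fewer in fiscal year 2000); the "actual" figure in the example is hypothetical, not reported data.

```python
# Worked illustration of comparing results with a quantified target, using the
# boating fatalities goal cited above (819 in fiscal year 1997; 720 or fewer
# targeted for fiscal year 2000). The "actual" value is hypothetical.
baseline_fy1997 = 819
target_fy2000 = 720
hypothetical_actual = 735  # illustrative result, not reported data

required_reduction_pct = (baseline_fy1997 - target_fy2000) / baseline_fy1997 * 100
target_met = hypothetical_actual <= target_fy2000

print(f"reduction needed from baseline to target: {required_reduction_pct:.1f}%")
print(f"target met: {target_met} (actual {hypothetical_actual} vs. target {target_fy2000})")
```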
In addition, the plan includes performance goals to resolve a few mission-critical management challenges identified by us and/or DOT's OIG. (See app. I.) For example, we reported that the Federal Aviation Administration (FAA) had encountered delays in implementing security initiatives at airports. The plan includes a performance goal to increase the detection rate of explosive devices and weapons that may be brought aboard aircraft, which will help measure progress in implementing the security initiatives. However, for the majority of the management challenges that have been identified, the plan does not include goals and measures. For example, the plan lists several activities to address problems with FAA's $41 billion air traffic control modernization program, which since 1995 we have identified as a high-risk information technology initiative. The plan could be improved by consistently including goals and specific measures for addressing the challenges. In addition, the plan could be improved by more fully explaining how the Department will address certain financial management challenges identified by the OIG. For example, the OIG reported that the Department's accounting system could not be used as the only source of financial information to prepare its financial statements. The fiscal year 2000 plan does not address this issue. Additionally, we question whether the plan includes the most current or complete milestones for solving long-standing financial management weaknesses. For example, the plan states that in fiscal year 1999, FAA's new cost accounting system will capture financial information by project and activity for all of FAA's projects. However, according to FAA's fiscal year 1998 audit report, the cost accounting system that was scheduled to be operational by October 1, 1998, will not be fully implemented until March 31, 2001. DOT's plan includes strategic outcomes for each of the Department's five strategic goals. For example, for the strategic goal of safety, the Department aims to achieve six strategic outcomes—such as reducing the number of transportation-related deaths, the number and severity of transportation-related injuries, and the number of reportable transportation incidents and their related economic costs. The plan then lists specific annual performance goals that the Department will use to gauge its progress. However, in a few cases, the strategic outcomes have no related annual performance goals. For example, a strategic outcome related to mobility—to provide preventative measures and expeditious responses to natural and man-made disasters in partnership with other agencies to ensure that the Department provides for the rapid recovery of the transportation system—cannot be logically linked to any annual performance goals. The plan could be improved by including at least one annual performance goal for each strategic outcome. For each performance goal, the plan typically mentions those federal agencies that have outcomes in common with the Department. The plan also indicates goals and measures that are being mutually undertaken to support crosscutting programs. For example, the plan states that both FAA and the National Aeronautics and Space Administration (NASA) have complementary performance goals to decrease by 80 percent the rate of aviation fatalities by the year 2007. However, the plan could be improved by describing the nature of the coordination and consistently discussing the Department's contribution to the crosscutting programs.
The plan does not discuss the roles played by FAA and NASA and how their partnership will help reduce the rate of aviation fatalities. The discussion of performance goals and measures in DOT’s fiscal year 2000 performance plan is a moderate improvement over the discussion in the fiscal year 1999 performance plan and shows some degree of progress in addressing the weaknesses that we identified in the fiscal year 1999 plan. We observed that the fiscal year 1999 plan could have been improved by (1) explaining how the management challenges are related to the rest of the performance plan and by including goals and specific measures for addressing the challenges; (2) consistently linking strategic goals, program activities, and performance goals; and (3) indicating interagency coordination for the crosscutting programs and consistently discussing the Department’s contribution to these programs. Among the improvements, the fiscal year 2000 plan describes the management challenges facing the Department, explains activities that will be undertaken to address them, and provides page citations for specific performance goals that address the challenges discussed elsewhere in the plan. DOT’s plan provides a specific discussion of the strategies and resources that the Department will use to achieve its performance goals. The plan covers each program activity in the Department’s $51 billion proposed budget for fiscal year 2000. An appendix to the performance plan lists the Department’s program activities and proposed funding levels by strategic goal. These funds are also mentioned in the discussions of strategic goals in the body of the plan. For each performance goal, the plan lists an overall strategy for achieving it, as well as specific activities and initiatives. For example, DOT expects to increase transit ridership through investments in transit infrastructure, financial assistance to metropolitan planning organizations and state departments of transportation for planning activities, research on improving train control systems, and fleet management to provide more customer service. However, our work has identified problems associated with some strategies. The plan identifies the rehabilitation of approximately 200 airport runways in the year 2000 as one of the activities contributing to the performance goal concerning the condition of runway pavement. We reported that there is a lack of information identifying the point at which rehabilitation or maintenance of pavement can be done before relatively rapid deterioration sets in. As a result, FAA is not in a position to determine which projects are being proposed at the most economical time. We have also reported on strategies for addressing the performance goal of reducing the rate of crashes at rail-grade crossings, some of which are included in the performance plan. For example, the plan addresses two strategies noted in our report—closing more railroad crossings and developing education and law enforcement programs—but does not address the installation of new technologies. For each performance goal, the plan also describes external factors, called special challenges, that can affect the Department’s ability to accomplish the goal. 
For example, the performance goal for passenger vessel safety includes the external factors of (1) the remote and unforgiving environment at sea and human factors, which play an important role in maritime accidents; (2) the complexity of the operation and maintenance of passenger vessels; and (3) foreign and international standards that apply to vessels. The plan describes how particular programs, such as the marine safety program, will contribute to reducing the number of casualties associated with high-risk passenger vessels. The plan also indicates activities to address the external factors, including conducting oversight of technologically advanced vessels, such as high-speed ferries, and implementing and marketing the International Safety Management Code. In discussing corporate management strategies, the plan briefly describes how the Department plans to build, maintain, and marshal the resources, such as human capital, needed to achieve results and greater efficiency in departmental operations. The corporate strategies are broadly linked to the strategic goals. For example, the plan states that the human resource management strategy supports the strategic goals by ensuring that DOT’s workforce has the required skills and competencies to support program challenges. The plan lists four key factors that will contribute to this corporate strategy: workforce planning that will identify the need for key occupations; managing diversity; learning and development activities to support employees’ professional growth; and redesigning human resource management programs, such as personnel and payroll processing. In some cases, the plan lists specific programs under the corporate strategies but does not consistently identify the resources associated with them. For example, the plan discusses the completion of all remediation or appropriate contingency plans to make the computer systems ready for the year 2000 so that there are no critical system disruptions. However, there is no discussion of the resources needed to support this strategy. The discussion of strategies and resources in DOT’s fiscal year 2000 performance plan is much improved over the fiscal year 1999 plan. We observed that the fiscal year 1999 plan generally did a good job of discussing the Department’s strategies and resources for accomplishing its goals. However, we noted that the plan could have been improved in several ways, such as by more clearly describing the processes and resources required to meet the performance goals and recognizing additional external factors—such as demographic and economic trends that could affect the Department’s ability to meet its goals. DOT’s fiscal year 2000 plan contains such information. The Department’s fiscal year 2000 performance plan generally provides a clear and comprehensive discussion of the performance information. The plan discusses the quality control procedures for verifying and validating data, which, it says, DOT managers follow as part of their daily activities, as well as an overall limitation to DOT’s data—a lack of timeliness—and how the Department plans to compensate for this problem. In addition, for each performance measure, the plan provides a definition of the measure, data limitations and their implications for assessing performance, procedures to verify and validate data, the source database, and the baseline measure—or a reason why such information is missing. 
For example, the plan defines the performance measure for maritime oil spills—the gallons spilled per million gallons shipped—as counting only spills of less than 1 million gallons from regulated vessels and waterfront facilities and not counting other spills. The plan further explains that a limitation to the data is that they may underreport the amount spilled because they exclude nonregulated sources and major oil spills. However, the plan explains that large oil spills are excluded because they occur rarely, and, when they do occur, they would have an inordinate influence on statistical trends. The plan also explains that measuring only spills from regulated sources is more meaningful for program management. However, in some cases, we found additional problems with DOT’s data systems that could limit the Department’s ability to assess performance. For example, the performance measure for runway pavement condition—the percentage of runway pavements in good or fair condition—is collected under FAA’s Airport Safety Data Program. We reported that this information provides only a general pavement assessment for all runways. This information is designed to inform airport users of the overall conditions of the airports, not to serve as a pavement management tool. We further noted that these assessments are made by safety inspectors who receive little training in how to examine pavement conditions. The performance plan acknowledges our concerns and states that FAA will update its guidance for inspecting and reporting the condition of runway pavement and will ensure that inspectors are aware of the guidance. However, as of March 1999, FAA had not updated its guidance for inspectors. According to the National Association of State Aviation Officials, which is under contract to FAA to conduct inspections and provide data on runway conditions, new guidance would require additional training for all inspectors, which is not provided for in the contract. In addition, we discuss problems with DOT’s financial management information later in this report. The discussion of data issues in DOT’s fiscal year 2000 performance plan is much improved over that in the fiscal year 1999 plan and is well on its way to addressing the weaknesses that we identified in the fiscal year 1999 plan. We observed that the fiscal year 1999 plan provided a general discussion of procedures to verify and validate data, which was not linked to specific measures in the plan. For most measures, information about the data’s quality was lacking. Among the improvements in the fiscal year 2000 plan is detailed information about each performance measure, which includes information on verification, validation, and limitations. DOT is making good progress in setting results-oriented goals, developing measures to show progress, and establishing strategies to achieve those goals. However, the Department’s progress in implementing performance-based management is impeded primarily by the lack of adequate financial management information. DOT has clearly made good progress in implementing performance-based management. The Department’s September 1997 strategic plan and performance plan for fiscal year 1999 were both considered among the best in the federal government. And, as discussed in this report, DOT’s fiscal year 2000 performance plan improves upon the fiscal year 1999 plan. Furthermore, our work has shown that prior to these Department-wide efforts, several of DOT’s agencies made notable efforts in becoming performance-based. 
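A brief sketch of the maritime oil spill measure described above (gallons spilled per million gallons shipped, counting only spills of less than 1 million gallons from regulated sources) may help show how the exclusions affect the computed rate; the spill records and shipping volume below are hypothetical, not DOT data.

```python
# Sketch of the oil spill measure described above: gallons spilled per million
# gallons shipped, counting only spills of less than 1 million gallons from
# regulated sources. The spill records and shipping volume are hypothetical.
spills = [
    {"gallons": 12_000, "regulated_source": True},
    {"gallons": 2_500_000, "regulated_source": True},   # excluded: 1 million gallons or more
    {"gallons": 800, "regulated_source": False},        # excluded: nonregulated source
]
gallons_shipped = 650_000_000  # hypothetical total volume shipped

counted = sum(
    s["gallons"]
    for s in spills
    if s["regulated_source"] and s["gallons"] < 1_000_000
)
rate = counted / (gallons_shipped / 1_000_000)
print(f"{rate:.2f} gallons spilled per million gallons shipped")
```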
For example, in reviewing programs designated as pilots under the Results Act, we noted the successful progress of the Coast Guard’s marine safety program. We reported that the Coast Guard’s pilot program became more performance-based, changing its focus from outputs (such as the number of vessel inspections) to outcomes (saving lives). The Coast Guard’s data on marine casualties indicated that accidents were often caused by human error—not by deficiencies in the vessels. Putting this information to use, the Coast Guard shifted its resources and realigned its processes away from inspections and toward other efforts to reduce marine casualties. We reported that the marine safety program not only improved its mission effectiveness—for example, the fatality rate in the towing industry declined significantly—but did so with fewer people and at lower cost. Additionally, in 1997, we cited the National Highway Traffic Safety Administration (NHTSA) as a good example of an agency that was improving the usefulness of performance information. The agency’s fiscal year 1994 pilot performance report provided useful information by discussing the sources and, in some cases, the limitations of its performance data. In 1998, we again cited NHTSA as a good example of an agency that was developing performance measures for outcome goals that are influenced by external factors. Additionally, in 1997, we reported that the Federal Railroad Administration had shifted its safety program to focus on results—reducing railroad accidents, fatalities, and injuries—rather than the number of inspections and enforcement actions. The fiscal year 2000 performance plan indicates that the Department is taking further steps to instill performance-based management into its daily operations. According to the plan, DOT has incorporated all of its fiscal year 1999 performance goals into performance agreements between the administrators of DOT’s agencies and the Secretary. At monthly meetings with the Deputy Secretary, the administrators are expected to report progress toward meeting these goals and program adjustments that may be undertaken throughout the year. Finally, some individual agencies in DOT have developed performance information that includes leading indicators associated with the Department-wide goals. For example, the Department’s fiscal year 2000 budget submission for FAA’s facilities and equipment includes 10 performance goals—such as reducing the rate of accidents or incidents in which an aircraft leaves the pavement—related to reducing the fatal accident rate for commercial air carriers. According to DOT’s performance plan, such indicators will be used to help assess the results of DOT’s programs and provide a basis for redirecting them. A key challenge that DOT faces in implementing performance-based management is the lack of accountability for its financial activities. In fact, serious accounting and financial reporting weaknesses at FAA led us to designate FAA’s financial management as a high-risk area. From an overall perspective, DOT’s accounting information system does not provide reliable information about the Department’s financial performance. DOT’s OIG has consistently reported that it has been unable to express an opinion on the reliability of DOT’s financial statements because of, among other things, problems in the Department’s accounting system. 
Although the fiscal year 1998 audit report stated that FAA is making significant progress, it cited deficiencies that include inaccurate general ledger balances and unreconciled discrepancies between the general ledger balances maintained in FAA’s accounting system and subsidiary records. The OIG also cited problems with the Department’s accounting systems that prevented the systems from complying with the requirements of the Federal Financial Management Improvement Act of 1996. The OIG concluded that for the Department’s systems to comply with the requirements of the act, the Department needs, among other things, to modify its accounting system so that it is the only source of financial information for the consolidated financial statements. Concerns have also been expressed by the OIG about the number and total dollar amount of adjusting entries made outside the accounting system to prepare the financial statements. For example, FAA made 349 adjustments to its accounting records, which totaled $51 billion, in the process of manually preparing its fiscal year 1998 financial statement. DOT is taking actions to correct the financial reporting deficiencies that were identified by the OIG. On September 30, 1998, the Department submitted to the Office of Management and Budget (OMB) a plan that identified actions by DOT, especially FAA and the Coast Guard, to correct the weaknesses reported in the OIG’s audits. For example, the plan called for DOT to complete physical counts of and develop appropriate support for the valuation of property, plant, equipment, and inventory at FAA and the Coast Guard. Furthermore, the Department’s ability to implement performance management is limited by the lack of a reliable cost accounting system or an alternative means to accumulate costs. As a result, DOT’s financial reports (1) may not be capturing the full cost of specific projects and activities and (2) may lack a reliable “Statement of Net Cost,” which includes functional cost allocations. The lack of cost accounting information also limits the Department’s ability to make effective decisions about resource needs and to adequately control the costs of major projects, such as FAA’s $41 billion air traffic control modernization program. For example, without good cost accounting information, FAA cannot reliably measure the actual costs of its modernization program against established baselines, which impedes its ability to effectively estimate future costs. Finally, the lack of reliable cost information limits DOT’s ability to evaluate performance in terms of efficiency and effectiveness, as called for by the Results Act. We provided the Department of Transportation (DOT) with the information contained in this report for review and comment. The Department stated that it appreciated our favorable review of its fiscal year 2000 performance plan and indicated that it had put much work into improving on the fiscal year 1999 plan by addressing our comments on that plan. DOT made several suggestions to clarify the discussion of its financial accounting system, which we incorporated. The Department acknowledged that work remains to be done to improve its financial accounting system and stated that it has established plans to do this. DOT also acknowledged the more general need for good data systems to implement the Results Act and indicated that it is working to enhance those systems. 
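The unreconciled discrepancies cited above refer to differences between general ledger balances and the supporting detail in subsidiary records; the minimal sketch below illustrates that kind of reconciliation check using hypothetical account names and amounts, not FAA figures.

```python
# Minimal sketch of the reconciliation at issue: a general ledger balance
# should equal the total of the detail in its subsidiary records, and any
# difference is an unreconciled discrepancy that would require investigation
# or an adjusting entry. Account names and amounts are hypothetical.
general_ledger = {"equipment": 4_200_000.00}
subsidiary_records = {"equipment": [1_500_000.00, 1_250_000.00, 1_375_000.00]}

for account, gl_balance in general_ledger.items():
    detail_total = sum(subsidiary_records.get(account, []))
    difference = gl_balance - detail_total
    if abs(difference) > 0.005:
        print(f"{account}: unreconciled difference of {difference:,.2f}")
    else:
        print(f"{account}: reconciled")
```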
To assess the plan's usefulness for decisionmakers and maintain consistency with our approach in reviewing the fiscal year 1999 performance plan, we used criteria from our guide on performance goals and measures, strategies and resources, and verification and validation. This guide was developed from the Results Act's requirements for annual performance plans; guidelines contained in OMB Circular No. A-11, part 2; and other relevant documents. The criteria were supplemented by our report entitled Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers (GAO/GGD/AIMD-99-69, Feb. 26, 1999), which builds on the opportunities for improvement that we identified in the fiscal year 1999 performance plans. In addition, we relied on our knowledge of DOT's operations and programs from our numerous reviews of the Department. To determine whether the performance plan covered the program activities set out in DOT's budget, we compared the plan with the President's fiscal year 2000 budget request for DOT. To determine whether the plan covered mission-critical management issues, we assessed whether the plan included goals, measures, or strategies to address major management challenges identified by us or the OIG. To identify the degree of improvement over the fiscal year 1999 plan, we compared the fiscal year 2000 plan with our observations on the previous plan. We performed our review in accordance with generally accepted government auditing standards from February through April 1999. We are providing the Honorable Rodney E. Slater, Secretary of Transportation, and the Honorable Jacob J. Lew, Director, OMB, with copies of this report. We will make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Major contributors to this report are listed in appendix II. In January 1999, we reported on major performance and management challenges that have limited the effectiveness of the Department of Transportation (DOT) in carrying out its mission. In December 1998, the Department's Office of Inspector General (OIG) issued a similar report on the Department. Table I.1 lists the issues covered in those two reports and the applicable goals and measures in the fiscal year 2000 performance plan. Acquisition of major aviation and U.S. Coast Guard systems lacks adequate management and planning. The Federal Aviation Administration's (FAA) $41 billion air traffic control modernization program has experienced cost overruns, delays, and performance shortfalls, and the Coast Guard needs to more thoroughly address the justification and affordability of its $9.8 billion project to replace/modernize its ships and aircraft. (DOT's OIG also identified air traffic control modernization as a top priority management issue.) None. The plan, however, acknowledges that air traffic control modernization is a management issue that needs to be addressed. Furthermore, the plan states that DOT has formulated activities to address this issue. The plan also identifies the Coast Guard's acquisition project as a management issue and describes activities to address it. Important challenges remain in resolving FAA's Year 2000 risks. (The OIG also identified this area as a management issue.) None. However, the plan's corporate management strategies include an objective to complete all Year 2000 remediation or contingency plans so that there are no critical system disruptions.
In addition, the plan states that the Year 2000 issue is a management challenge that needs to be addressed and identifies activities and milestones for addressing it. FAA and the nation's airports face funding uncertainties. DOT and the Congress face a challenge in reaching agreement on the amount and source of long-term financing for FAA and airports. (The OIG also identified this area as a management issue.) None. However, the plan identifies financing for FAA's activities as a major issue that the Department, the Congress, and the aviation community need to address. The plan also lists activities that FAA is undertaking to develop the information needed to make financing decisions. Aviation safety and security programs need strengthening. Shortcomings in aviation safety programs include the need for FAA to improve its oversight of the aviation industry, record complete information on inspections and enforcement actions, provide consistent information and adequate training for users of weather information, and resolve data protection issues to enhance the proactive use of recorded flight data to prevent accidents. FAA has encountered delays in implementing security initiatives at airports; completing the initiatives will require additional funding and sustained commitment from FAA and the aviation industry. The computer security of FAA's air traffic control systems is weak. (The OIG also identified aviation safety and transportation security as management issues.) The plan includes performance goals to reduce the fatal aviation accident rate for commercial air carriers and general aviation, reduce the number of runway incursions, reduce the rate of operational errors and deviations, increase the detection rate for explosive devices and weapons that may be brought aboard aircraft, and get threat information to those who need to act within 24 hours. In addition, the plan's corporate management strategies include objectives to conduct vulnerability assessments on all new information technology systems to be deployed in fiscal year 2001 that fall under the purview of Presidential Decision Directive 63 and ensure that all DOT employees receive or have received general security awareness training in fiscal years 1999 or 2000 and that 60 percent of the systems administrators receive specialized security training by September 30, 2000. The plan also identifies computer security as a management challenge that needs to be addressed. A lack of aviation competition contributes to high fares and poor service for some communities. Increasing competition and improving air service will entail a range of solutions by DOT, the Congress, and the private sector. None. The plan identifies airline competition as a management challenge. DOT has submitted to the Congress a number of legislative proposals to address the issue. DOT needs to continue improving oversight of surface transportation projects. Many highway and transit projects continue to incur cost increases, experience delays, and have difficulties acquiring needed funding. None. The plan identifies surface transportation infrastructure needs as a management challenge and identifies activities to address the issue. (The OIG also identified this area as a management issue.) Amtrak's financial condition is tenuous. Since it began operations in 1971, Amtrak has received $22 billion in federal subsidies.
Because there is no clear public policy that defines the role of passenger rail in the national transportation system and because Amtrak is likely to remain heavily dependent on federal assistance, the Congress needs to decide on the nation's expectations for intercity rail and the scope of Amtrak's mission in providing that service. None. The plan identifies the financial viability of Amtrak as a management challenge and states that, as a member of Amtrak's Board, DOT will work to address the issue. (The OIG also identified this area as a management issue.) DOT's lack of accountability for its financial activities impairs its ability to manage programs and exposes the Department to potential waste, fraud, mismanagement, and abuse. Since 1993, the OIG has been unable to express an opinion on the reliability of the financial statements of certain agencies within the Department. DOT also lacks a cost accounting system or alternative means of accumulating the full costs of specific projects or activities. (The OIG also identified this area as a management issue.) None. However, the plan's corporate management strategies include objectives to receive an "unqualified," or "clean," audit opinion on the Department's fiscal year 2000 consolidated financial statement and stand-alone financial statements; enhance the efficiency of the accounting operation in a manner consistent with increased accountability and reliable reporting; and implement a pilot of the improved financial systems environment in at least one operating administration. The plan identifies financial accounting as a management challenge facing the Department and addresses key weaknesses that should be resolved before DOT can obtain a "clean" opinion in fiscal year 2000. Other identified challenges include improving the Department's motor carrier safety program and taking prompt and meaningful enforcement actions for noncompliance; increasing the level of safety of commercial trucks and drivers entering the United States from Mexico; reducing railroad grade-crossing and trespasser accidents; improving compliance with safety regulations by entities responsible for transporting hazardous materials; and enhancing the effectiveness of the Federal Railroad Administration's Safety Assurance and Compliance Program. DOT's plan includes performance goals to reduce the rate of fatalities involving large trucks, increase seat belt usage nationwide, reduce the rate of grade-crossing crashes, reduce the rate of rail-related fatalities for trespassers, reduce the number of serious hazardous materials incidents in transportation, and reduce the rate of rail-related crashes and fatalities. DOT needs to provide leadership to maintain, improve, and develop the port, waterway, and intermodal infrastructure to meet current and future needs. There is also a need to identify funding mechanisms to maintain and improve the harbor infrastructure of the United States. DOT's plan includes performance goals to reduce the percentage of ports reporting landside impediments to the flow of commerce and ensure the availability and long-term reliability of the St. Lawrence Seaway's locks and related navigation facilities in the St. Lawrence River. DOT faces several challenges in implementing the Government Performance and Results Act. Many of DOT's performance outcomes, such as improved safety, a reduction in fatalities and injuries, and well-maintained highways, depend in large part on actions by other federal agencies, states, and the transportation industry.
Their assistance will be critical in meeting DOT's goals, which were developed under the Results Act. DOT's ability to achieve its goals will also be influenced by the effective utilization of human resources. None. The plan identifies the Department's implementation of the Results Act as a management challenge and mentions activities to address the issue. Major contributor to this report: Helen Desaulniers.
Pursuant to a congressional request, GAO provided information on the Department of Transportation's (DOT) performance plan for fiscal year (FY) 2000, focusing on: (1) the usefulness of DOT's plan in providing a clear picture of intended performance across the Department; (2) the strategies and resources that DOT will use to achieve its goals; and (3) whether DOT's performance information will be credible. GAO noted that: (1) overall, DOT's performance plan for FY 2000 should be a useful tool for decisionmakers; (2) it provides a clear picture of intended performance across the Department, a specific discussion of the strategies and resources that the Department will use to achieve its goals, and general confidence that the Department's performance information will be credible; (3) DOT's FY 2000 performance plan represents a moderate improvement over the FY 1999 plan in that it indicates some degree of progress in addressing the weaknesses that GAO identified in an assessment of the FY 1999 plan; (4) GAO observed that the FY 1999 plan did not: (a) sufficiently address management challenges facing the Department; (b) consistently link strategic goals, program activities, and performance goals; (c) indicate interagency coordination for crosscutting areas; or (d) provide sufficient information on external factors, the processes and resources for achieving the goals, and the performance data; (5) among the improvements in the FY 2000 plan are more consistent linkages among the program activities and performance goals, additional information on external factors and strategies for achieving the goals, and a more comprehensive discussion of the data's quality; (6) these improvements and other activities indicate that DOT has clearly made good progress in implementing performance-based management; (7) for example, the plan indicates that the Department is incorporating the performance goals into performance agreements between the administrators of DOT's agencies and the Secretary; (8) however, the plan still needs further improvement, especially in explaining how certain management challenges, such as financial management weaknesses, will be addressed; (9) for example, DOT's Office of Inspector General (OIG) reported that the Department's accounting system could not be used as the only source of financial information to prepare its financial statements; (10) while the FY 2000 plan does not address this issue, the Department has recognized the financial reporting deficiencies identified by the OIG and is taking actions to correct them; and (11) the lack of accountability for financial activities is a key challenge that DOT faces in implementing performance-based management.
You are an expert at summarizing long articles. Proceed to summarize the following text: In 1986, the President signed a joint resolution of Congress that directed the Secretary of Defense to establish a unified combatant command for special operations forces. In April 1987, the Secretary of Defense established USSOCOM with the mission to provide trained and combat-ready special operations forces to DOD's geographic combatant commands. Since 2003, DOD has further expanded the role of USSOCOM to include greater responsibility for planning and leading the department's efforts in the war on terrorism. In addition to training, organizing, equipping, and deploying combat-ready special operations forces to the geographic combatant commands, USSOCOM has the mission to lead, plan, synchronize, and, as directed, execute global operations against terrorist networks. DOD doctrine describes the characteristics of special operations forces, and provides joint force commanders with the guidance and information necessary to identify, nominate, and select missions appropriate for special operations forces. According to doctrine, special operations forces perform two types of activities: special operations forces perform tasks that no other forces in DOD conduct, and they perform tasks that other DOD forces conduct but do so according to a unique set of conditions and standards. In particular, special operations forces are specifically organized, trained, and equipped to accomplish nine core tasks, which represent the collective capabilities of all special operations forces rather than those of any one unit. Table 1 defines these core tasks. Since 1987, the Marine Corps and USSOCOM have taken several steps to expand the relationship between the two organizations. For example, beginning in 1993, the Marine Corps and USSOCOM established a working group to discuss efforts to improve communication, cooperation, and interoperability. These efforts received a renewed emphasis with the onset of the war on terrorism. In 2002, the Secretary of Defense requested the military services to increase their support to USSOCOM. In 2003, the Marine Corps established a specially trained and equipped unit as a concept to demonstrate the Marine Corps' ability to conduct special operations missions under the operational control of USSOCOM. This unit deployed to Iraq in April 2004 to perform selected special operations missions. The Secretary of Defense approved the establishment of a Marine Corps service component to USSOCOM in October 2005. In February 2006, the Marine Corps activated its special operations command. Since August 2006, the Marine Corps special operations command has deployed its forces to perform special operations missions to support the geographic combatant commanders' requirements. Figure 1 provides a timeline. The Marine Corps Forces Special Operations Command is the Marine Corps service component to USSOCOM. The Command is headquartered on Marine Corps Base Camp Lejeune, North Carolina. The Marine Corps special operations command has five major subordinate units. These units include two Marine Special Operations Battalions, the Marine Special Operations Advisor Group, the Marine Special Operations Support Group, and the Marine Special Operations School. Table 2 provides a description of each unit. By fiscal year 2011, the Command will be authorized 2,516 personnel—2,483 military personnel and 33 civilians.
With the exception of one Marine Corps reserve position, all of the authorized military personnel will be drawn from the military services’ active components. The Marine Corps special operations component will be the smallest service component under USSOCOM. The other military services’ special operations components include the following. The Army component is the U.S. Army Special Operations Command. Army special operations forces include Special Forces, Rangers, Special Operations Aviation, Civil Affairs, and Psychological Operations units. The Navy component is the Naval Special Warfare Command. Naval Special Warfare forces include SEAL Teams, SEAL Delivery Vehicle Teams, and Special Boat Teams. The Air Force component is the Air Force Special Operations Command. Air Force special operations forces include fixed and rotary wing aviation squadrons, a combat aviation advisory squadron, special tactics squadrons, and an unmanned aerial vehicle squadron. Figure 2 shows the number of military personnel positions in fiscal year 2007 authorized for DOD’s special operations forces in the active component and reserve component. The authorizations include positions in special operations forces warfighter units, support units, and headquarters units such as USSOCOM and its service component commands. Since fiscal year 2006, the Marine Corps and USSOCOM have requested baseline and supplemental funding for the Marine Corps special operations command. In fiscal year 2006, the Marine Corps and USSOCOM received $109.3 million in supplemental funds to establish the Marine Corps special operations command. In fiscal year 2007, the Marine Corps and USSOCOM received an additional $368.2 million in baseline funds for the Command, and $32 million in supplemental funding. As shown in table 3, the Marine Corps and USSOCOM have projected military construction, operation and maintenance, and procurement funding for the Command for fiscal years 2008 through 2013. Although the Marine Corps has made progress in establishing its special operations command, the Command has not fully identified the force structure needed to enable it to perform its assigned missions. The Marine Corps has taken several steps to establish its special operations command, such as activating the Command’s headquarters, establishing Marine Corps special operations forces units, and deploying these units to conduct special operations missions; however, DOD did not use critical practices of effective strategic planning when developing the initial force structure plans for the Command. As a result of limitations in the strategic planning process, the Marine Corps special operations command has identified several force structure challenges that will likely affect the Command’s ability to perform its full range of responsibilities, and is working to revise its force structure to address these challenges. The Marine Corps has taken several steps to establish the Marine Corps special operations command. For example, the Marine Corps has activated the headquarters of its special operations command, established some of its special operations forces units—including 4 special operations companies and 12 foreign military training teams to date—and deployed these units to conduct special operations missions. However, the initial force structure plans for the Command were not developed using critical practices of effective strategic planning. 
According to officials from the Office of the Secretary of Defense, USSOCOM, and the Marine Corps, the Secretary of Defense directed that the Marine Corps establish a special operations command to meet the growing demand for special operations forces in the war on terrorism. The Secretary of Defense, with input from the Marine Corps, determined that 2,516 personnel was an appropriate size for the Command based on the assumptions that the Command was to be staffed within the existing Marine Corps end-strength, and the establishment of the Command could not significantly affect the Marine Corps budget. Marine Corps planners then based the composition and number of Marine Corps special operations forces units on existing units within the service that had trained to perform similar missions in the past. For example, Marine Corps officials told us that the force structure plans for its special operations companies were modeled after a Maritime Special Purpose Force, which had previously trained to conduct some special operations missions for conventional Marine Corps units. Additionally, Marine Corps officials told us the initial force structure plan to establish nine special operations companies was based on the need to accommodate the deployment schedule of its Marine Expeditionary Units. The initial force structure plan also included the transfer of the Foreign Military Training Unit from the conventional force to its special operations command. Using this existing force structure, the Marine Corps planned to establish 24 foreign military training teams under its special operations command. DOD did not fully incorporate critical practices of effective strategic planning when it developed these initial force structure plans for the Marine Corps special operations command. We have previously reported that strategic planning is important to ensure that an organization’s activities support its strategic goals. Effective planning principles, such as those embodied in the Government Performance and Results Act of 1993 and used by leading organizations, require federal agencies to set strategic goals and develop strategic plans to accomplish those goals. Our prior work has identified several critical practices for effective strategic planning, including the alignment of activities and resources to meet organizational missions and stakeholder involvement. Our prior work has shown that leading organizations recognize that an organization’s activities, core processes, and resources must be aligned to support its mission and help it achieve its goals. Organizations should assess the extent to which their programs and activities contribute to meeting their mission and desired outcomes. In addition, successful organizations base their strategic planning, to a large extent, on the interests and expectations of their stakeholders. Stakeholder involvement is important to help agencies ensure that their efforts and resources are targeted at the highest priorities. Just as important, involving stakeholders in strategic planning efforts can help create a basic understanding among the stakeholders of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. However, in our review of the planning process that preceded the establishment of the Marine Corps special operations command, we found the Command’s activities and resources were not fully aligned with the organization’s mission. 
For example, although the alignment of activities and resources to meet organizational missions, a critical practice of effective strategic planning, should include an analysis of the number of personnel required for an organization to accomplish its missions, Marine Corps officials stated that the size of the Marine Corps special operations command (2,516 personnel) was not determined through an analysis of the Command's assigned missions. Specifically, neither the Office of the Secretary of Defense nor the Marine Corps conducted a comprehensive, data-driven analysis to determine the number of personnel needed to meet the Marine Corps special operations command's mission requirements that directly tied the number of personnel authorized for the Command with its assigned missions. USSOCOM did not provide official mission guidance to the Marine Corps until October 2006, almost 1 year after the Command's personnel authorizations had been determined. In the absence of specific guidance, Marine Corps planners did not conduct a comprehensive, data-driven analysis to determine the number of personnel needed to meet the Marine Corps special operations command's full range of mission requirements. Our prior work has shown that valid and reliable data on the number of employees required to meet an agency's needs are critical because human capital shortfalls can threaten an agency's ability to perform its missions efficiently and effectively. The alignment of activities and resources should also include an analysis of the number and composition of Marine Corps special operations forces units. However, the Marine Corps did not determine the number and composition of its special operations forces units based on specific guidance from USSOCOM. Although the Marine Corps special operations command was established as the Marine Corps service component under USSOCOM, USSOCOM did not provide guidance to Marine Corps planners on the full range of missions assigned to the Command, or on the number of special operations forces that the Marine Corps needed to provide. Both USSOCOM and Marine Corps officials reported that USSOCOM provided only informal guidance to Marine Corps planners on the core tasks that would be assigned to Marine Corps special operations forces units. According to Marine Corps officials involved in the planning for the Marine Corps special operations command, the informal guidance did not prioritize the core tasks to focus Marine Corps planning efforts, and the guidance did not identify the required capacity for specific capabilities within the Command. The official guidance that USSOCOM provided to the Marine Corps special operations command in October 2006 contained a complete list of missions the Command would be expected to perform. However, the guidance did not prioritize these missions to focus the Command's planning efforts. Additionally, the guidance did not establish milestones and benchmarks that the Command could use to determine when, and to what level of proficiency, Marine Corps special operations forces units should be able to perform all of their assigned missions. In the absence of specific guidance, Marine Corps officials told us the initial force structure plan to establish nine special operations companies was not based on a USSOCOM requirement for the number of these companies.
Moreover, while the decision to transfer the foreign military training teams to the Marine Corps special operations command met the Command’s mission to provide USSOCOM with a foreign internal defense capability, the decision on the number of teams needed by the Command to meet USSOCOM’s mission requirements was left to the Marine Corps. Marine Corps officials also told us that in the absence of clear guidance on the required capacity for support personnel within the Command, Marine Corps planners prioritized the assignment of personnel in warfighter positions in special operations forces units over positions in support units. Specifically, because planners were basing the Command’s force structure decisions on the personnel limit established by DOD, the Marine Corps exchanged positions related to support functions within the Command for positions in its warfighter units. Support functions such as vehicle maintenance, motor transportation, intelligence operations, communication support, and engineering support provide important and necessary support to Marine Corps special operations forces units, as well as other special operations forces units in USSOCOM’s other service components. Furthermore, we found a lack of involvement by some key stakeholders in the establishment of the Marine Corps special operations command. For example, the special operations components with the department’s geographic combatant commands—which are responsible for commanding special operations forces around the world—were not involved in the process to establish the Marine Corps special operations command or in the decisions to target the service’s resources to their highest priorities and mission requirements. Officials with the U.S. Pacific Command’s special operations command who are responsible for functions such as operations and planning told us they provided little input into the planning process to help determine how Marine Corps special operations forces units should be organized and what capabilities were needed in these units to meet the mission requirements of the geographic combatant commands. Similarly, officials from the U.S. Central Command’s special operations command who were responsible for operations and planning in that command told us they were not included in the planning process that preceded the establishment of the Marine Corps special operations command. In particular, officials told us they were not involved in the decisions regarding the types of missions that Marine Corps special operations forces units would need to perform, although as we noted in our July 2006 report on special operations forces deployment trends, 85 percent of all fiscal year 2005 special operations forces deployments were to the U.S. Central Command’s area of responsibility. The Marine Corps special operations command has identified several force structure challenges that stem from limitations in DOD’s strategic planning process that will likely affect its ability to perform its full range of responsibilities, and the Command is revising its force structure plans to address these challenges. For example, the Command has determined that the number and composition of its special operations forces units are not aligned with the Command’s mission requirements. In particular, the Command has identified shortages in positions such as authorized intelligence personnel, which will affect the Command’s ability to simultaneously provide intelligence support to Marine Corps special operations forces and USSOCOM. 
Moreover, according to Marine Corps special operations command officials, the limited number of personnel available to perform support functions will prevent the Command from effectively performing all of its mission requirements. To illustrate this point, Marine Corps special operations command officials told us that the initial force structure plans for the Command call for less than one support person available for every person assigned to a warfighter position. According to Command officials, this ratio is less than what would be expected for a command of similar size and assigned missions. Officials said an expected ratio for a command such as theirs would be at least two support personnel to one warfighter, and therefore their goal is to adjust the force structure to meet this ratio. In addition, Marine Corps special operations command officials reported that the number of positions authorized for support personnel will also affect the Command’s ability to meet its responsibilities to organize, train, and equip Marine Corps special operations forces. Officials stated the number of personnel assigned to its command elements, such as the headquarters and the staffs of the subordinate units, is insufficient to effectively accomplish these responsibilities. Current force structure plans authorize approximately 780 military personnel and 33 civilian personnel for the Command’s headquarters and the staffs of its major subordinate units. At the time of our work, the Marine Corps special operations command was developing several proposals to significantly revise its force structure to address the challenges stemming from the limitations in the planning process and to better align the Command to meet USSOCOM’s mission guidance. These revisions would adjust the number and size of the Command’s warfighter units to better meet mission requirements. Additionally, if approved, some of the positions made available through the revisions could be used to remedy shortfalls in personnel who perform support functions such as personnel management, training, logistics, intelligence, and budget-related activities. Command officials told us these proposals would likely mitigate many of the challenges that have resulted from the lack of a comprehensive strategic planning process, but they acknowledged that many of the decisions that are needed to implement the force structure changes will be made by Headquarters, Marine Corps. In order to move forward with its proposals, the Command is working to complete several analyses of the personnel and funding requirements that are tied to these proposed force structure changes. It has set milestones for when these analyses should be completed in order to determine whether any additional funding or personnel would be required. However, the Command expects to be able to implement these proposals within the funding levels already identified and planned for future fiscal years. Until the analyses are completed, the Command will be unable to determine whether the approved plans for its personnel and funding should be adjusted in order for the Command to perform all of its assigned missions. Although preliminary steps have been taken, the Marine Corps has not developed a strategic human capital approach to manage the critical skills and competencies required of personnel in its special operations command. 
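The support-to-warfighter ratio that Command officials describe can be illustrated with simple arithmetic; the personnel counts below are hypothetical and are not the Command's actual force structure.

```python
# Simple arithmetic behind the ratio officials describe: with fewer support
# personnel than warfighters, the support-to-warfighter ratio falls below the
# roughly 2-to-1 level officials said they would expect. The counts below are
# hypothetical, not the Command's actual force structure.
warfighters = 1_000
support_personnel = 900

current_ratio = support_personnel / warfighters
target_ratio = 2.0
additional_support_needed = int(target_ratio * warfighters) - support_personnel

print(f"current ratio: {current_ratio:.2f} support personnel per warfighter")
print(f"additional support personnel needed for {target_ratio:.0f}:1 coverage: {additional_support_needed}")
```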
While the Marine Corps special operations command has identified some skills that are needed to perform special operations missions, it has not conducted a comprehensive analysis of the critical skills and incremental training required of personnel in its special operations forces units. Such analyses are critical to the Marine Corps’ efforts to develop a strategic human capital approach for the management of personnel in its special operations forces units. Without the benefit of these analyses, the Marine Corps has developed an interim policy to assign some personnel to special operations forces units for extended tour lengths to account for the additional training and skills needed by these personnel. However, this interim policy is inconsistent with the Marine Corps special operations command’s goal for the permanent assignment of some personnel within the special operations community. While the Marine Corps special operations command has identified some critical skills and competencies that are needed to perform special operations missions, it has not fully identified these requirements because it has not yet conducted a comprehensive analysis to determine all the critical skills and additional training required of personnel in its units. We have previously reported that strategic human capital planning is essential to federal agencies’ efforts to transform their organizations to meet the challenges of the 21st century. Generally, strategic human capital planning addresses two needs: (1) aligning an agency’s human capital program with its current and emerging mission and programmatic goals, and (2) developing long-term strategies for acquiring, developing, motivating, and retaining staff to achieve programmatic goals. Our prior work has shown that the analysis of critical skill and competency gaps between current and future workforce needs is an important step in strategic human capital planning. We have also reported that it is essential that long-term strategies include implementation goals and timelines to demonstrate that progress is being made. As part of the effort to identify these critical skills, the Marine Special Operations School is developing a training course that will provide baseline training to newly assigned personnel to prepare them for positions in warfighter units. For example, the Command plans to provide these personnel with training on advanced survival skills and foreign language in order to prepare them to perform special operations missions. However, the Marine Corps special operations command has not fully identified and documented the critical skills and training that are required for personnel to effectively perform special operations missions, and that build on the skills that are developed in conventional Marine Corps units. Officials told us the Command had not yet identified the full range of training that will be provided in this course in order to establish a minimum level of special operations skills for the Command’s warfighters. Additionally, the Marine Corps special operations command has not fully identified the advanced skills and training necessary to support some of the Command’s more complex special operations missions, such as counterterrorism, information operations, and unconventional warfare. 
While the Marine Corps special operations command has established a time frame for when it wants to conduct the training course under development, it has not set milestones for when it will complete its analysis of the critical skills and competencies required of its personnel. Moreover, the Marine Corps special operations command has not yet fully determined which positions should be filled by specially trained personnel who are strategically managed to meet the Command’s missions. Officials told us there is broad agreement within the Command that personnel assigned to operational positions in its warfighter units require specialized training in critical skills needed to perform special operations missions, and should therefore be strategically managed to meet the Command’s mission requirements. These personnel include enlisted reconnaissance and communications Marines assigned to the Marine Special Operations Battalions and infantry Marines assigned to the Marine Special Operations Advisor Group, as well as some officers assigned to these units. At the time of our review, however, we found that the Command had not yet determined which additional positions should also be filled by personnel who are strategically managed. In particular, we were told by officials from the Command’s headquarters that a determination has not yet been made as to whether personnel who deploy with warfighter units to provide critical combat support, such as intelligence personnel, require specialized skills and training that are incremental to the training provided in conventional force units. For example, officials have not yet decided whether intelligence personnel should attend the initial training course that is under development. However, the Marine Special Operations School plans to provide these personnel with specialized intelligence training to enable them to support certain sensitive special operations missions in support of deploying units. Officials acknowledge that until the Command determines the extent to which support personnel require specialized skills and training to perform their missions, the Command cannot fully identify which positions should be filled by personnel who are strategically managed. To address the personnel needs of the Marine Corps special operations command, Headquarters, Marine Corps, has established an interim policy that provides for extended assignments of some personnel in special operations forces units; however, the absence of a comprehensive analysis of the critical skills and training required of personnel in special operations forces units has contributed to a lack of consensus within the Marine Corps on a strategic human capital approach to manage these personnel. The extended assignments apply to Marines who are beyond their first term of enlistment, which is typically 3 to 5 years, and who are assigned to one of the Marine Corps special operations command’s warfighter, training, or intelligence units. The policy directs that these personnel will be assigned to the Command for 48 months, in part, to account for the additional training provided to personnel in these units. According to officials at Headquarters, Marine Corps, and the Marine Corps special operations command, the 48-month assignment policy is designed to retain designated personnel within special operations forces units long enough to complete at least two deployments. 
All other Marines will be assigned to the Command for approximately 36 months, which is a typical tour length for Marines in conventional force units. The interim policy also addresses a concern that personnel assigned to special operations forces units will have opportunities for career progression. In general, Marines are managed according to established career progression models for their respective career fields. These career progression models identify the experiences, skills, and professional military education necessary for personnel to be competitive for promotion to the next grade. For example, as personnel are promoted to a higher grade, they are typically placed in positions with increased responsibilities that are consistent with their career progression models in order to remain competitive for further promotion. The Marine Corps has not established a separate career field for special operations forces personnel; instead, the Marine Corps is assigning personnel from a variety of career fields, such as reconnaissance, to its special operations forces units. However, the current structure of the Marine Corps special operations command cannot support long-term assignments of personnel within the Command, in some cases, due to limited opportunities for progression into positions with increased responsibilities. For example, our analysis of the Marine Corps special operations command’s force structure shows that the Command is authorized 76 percent fewer reconnaissance positions for personnel in the grade of E-7 as compared to the number of reconnaissance positions for personnel in the grade of E-6. The Marine Corps has established targets for the promotion of reconnaissance personnel to the grade of E-7 after they have spent approximately 5 years in the grade of E-6. As a result, many reconnaissance personnel who are promoted to E-7 while assigned to a special operations forces unit will need to be reassigned to the conventional force in order to move into an E-7 position and remain competitive for further promotion. The interim policy is also consistent with the approved plan to increase the authorized end-strength of the Marine Corps. In January 2007, the President approved plans to increase the active duty end-strength of the Marine Corps from 179,000 in fiscal year 2006 to 202,000 by fiscal year 2011. This plan includes growth in the number and size of conventional force units and is intended to reduce the stress on frequently deployed units, such as intelligence units, by achieving a 1 to 2 deployment to home station ratio for these units. Marine Corps officials associated with units that will be affected by these increases, such as reconnaissance and intelligence units, told us that the rotation of personnel from Marine Corps special operations units back into the conventional force is important to help ensure that conventional force units are staffed with experienced and mature personnel. For example, our analysis of Marine Corps data shows that by fiscal year 2009, the Marine Corps will increase the servicewide requirement for enlisted counterintelligence/human intelligence personnel by 50 percent above fiscal year 2006 levels. 
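The grade-structure and requirement-growth comparisons described above rest on simple percentage arithmetic over authorized-position and requirement counts. The following minimal Python sketch is illustrative only: the counts and function names are hypothetical assumptions chosen to reproduce the reported percentages, not actual Marine Corps data.

# Illustrative only: hypothetical counts, not actual Marine Corps data.

def percent_fewer(senior_positions, junior_positions):
    # Percentage by which senior-grade authorized positions fall short of junior-grade positions.
    return 100.0 * (junior_positions - senior_positions) / junior_positions

def percent_growth(new_requirement, baseline_requirement):
    # Percentage growth of a servicewide personnel requirement over a baseline year.
    return 100.0 * (new_requirement - baseline_requirement) / baseline_requirement

# Hypothetical authorized reconnaissance positions in the Command, by grade.
e6_recon_positions = 100
e7_recon_positions = 24

# Hypothetical servicewide counterintelligence/human intelligence requirements.
fy2006_ci_humint_requirement = 400
fy2009_ci_humint_requirement = 600

print(f"E-7 reconnaissance positions are {percent_fewer(e7_recon_positions, e6_recon_positions):.0f} percent fewer than E-6 positions.")
print(f"The CI/HUMINT requirement grows {percent_growth(fy2009_ci_humint_requirement, fy2006_ci_humint_requirement):.0f} percent over the fiscal year 2006 baseline.")

With these hypothetical figures the sketch reproduces the 76 percent and 50 percent figures cited above; the point is only to show the kind of comparison of authorized positions and requirements that underlies the analysis.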
Although the Marine Corps is adjusting its accession, training, and retention strategies to meet the increased requirement for enlisted counterintelligence/human intelligence personnel, officials stated the rotation of these experienced personnel from the Marine Corps special operations command back into the conventional force can help meet the increased personnel needs of conventional intelligence units, while also ensuring that conventional force units have an understanding of special operations tactics, techniques, and procedures. Additionally, officials told us the rotation of personnel from special operations forces units to conventional force units supports the Marine Corps’ process for prioritizing the assignment of personnel to units that are preparing for deployments to Iraq and other war on terrorism requirements. Notwithstanding the intended outcome of the interim policy, Marine Corps special operations command officials told us that the policy might impact the Command’s ability to prepare its forces to conduct the full range of its assigned missions and that the policy is inconsistent with the Command’s stated goal for the permanent assignment of personnel in its special operations forces units. In congressional testimony, the Commander of the Marine Corps special operations command specified his goal to develop a personnel management strategy that would retain some personnel within the special operations community for the duration of their careers. Officials from the Command told us that a substantial investment of time and resources is required to train personnel in special operations forces units on the critical skills needed to perform special operations missions. For example, Marine Corps special operations forces personnel will receive in-depth training to develop foreign language proficiency and cultural awareness, which is consistent with DOD’s requirement to increase the capacity of special operations forces to perform more demanding and specialized tasks during long-duration, indirect, and clandestine operations in politically sensitive environments. However, these officials believe that the Command’s ability to develop and sustain these skills over time will be hampered if its special operations forces units experience high personnel turnover. In addition, according to USSOCOM doctrine, personnel must be assigned to a special operations forces unit for at least 4 years in order to be fully trained in some advanced special operations skills. Consequently, officials from the Command have determined that limited duration assignments would challenge the Command’s ability to develop the capability to conduct more complex special operations core tasks, and to retain fully trained personnel long enough to use their skills during deployments. The Marine Corps special operations command has determined that to achieve its goal of permanent personnel assignments within the special operations community, it requires a separate career field for its warfighter personnel. According to officials from the Command, a separate career field would allow the Marine Corps to manage these personnel based on a career progression model that reflects the experiences, skills, and professional military education that are relevant to special operations missions. 
Moreover, according to officials from the Command, the establishment of a special operations forces career field would allow the Marine Corps to develop and sustain a population of trained and qualified personnel, while providing the Command and USSOCOM with a more appropriate return on the investment in training personnel to perform special operations missions. The Command’s goal for the permanent assignment of some special operations forces personnel is also consistent with USSOCOM’s current and projected needs for special operations forces personnel. USSOCOM has identified the retention of experienced personnel who possess specialized skills and training as a key component in its strategy to support the war on terrorism. In its vision of how special operations forces will meet long-term national strategic and military objectives, USSOCOM has identified the need for a comprehensive special operations forces career management system to facilitate the progression of these personnel through increasing levels of responsibility within the special operations community. In addition, senior USSOCOM officials have expressed support for an assignment policy that allows Marine Corps personnel to remain within the special operations community for the duration of their careers. Headquarters, Marine Corps, plans to review its interim policy for assigning personnel to its special operations command annually to determine whether it meets the mission requirements of the Command. Additionally, the Commandant of the Marine Corps recently directed Headquarters, Marine Corps, to study the assignment policies for personnel in certain Army special operations forces units who rotate between conventional Army units and special operations forces units. According to a Headquarters, Marine Corps, official, one purpose of this study is to evaluate whether a similar management strategy may be applied to personnel in Marine Corps special operations forces units. Notwithstanding these efforts, officials with Headquarters, Marine Corps, and the Marine Corps special operations command acknowledge that the analysis of the critical skills and training required of personnel in the Command’s special operations forces units is a necessary step in the development of a strategic human capital approach to the management of these personnel. Until the Marine Corps special operations command completes a comprehensive analysis to identify and document the critical skills and additional training needed by its future workforce to perform the Command’s full range of assigned special operations missions, the Marine Corps will not have a sound basis for developing or evaluating alternative strategic human capital approaches for the management of personnel assigned to its special operations forces units. USSOCOM does not have a sound basis for determining whether Marine Corps special operations forces training programs are preparing units for their missions because it has not established common training standards for many special operations skills and it has not formally evaluated whether these programs will prepare units to be fully interoperable with DOD’s other special operations forces. The Marine Corps special operations command has provided training for its forces that is based on training that was provided to conventional units that were assigned some special operations missions prior to the activation of the Command, and by selectively incorporating the training that USSOCOM’s other service components provide to their forces. 
However, USSOCOM has not formally validated that the training used to prepare Marine Corps special operations forces meets special operations standards and is effective in training Marine Corps special operations forces to be fully interoperable with the department’s other special operations forces. The Marine Corps special operations command has taken several actions to implement programs to fulfill its responsibility for training personnel to perform special operations missions. For example, the Command operates the Marine Special Operations School, which has recently finalized plans for a training pipeline to initially screen all of the Marines and Sailors identified for assignment to the Command to determine their suitability for such assignments. Once the initial screening is completed, personnel who volunteer for assignments in one of the Command’s warfighter units— such as the Marine Special Operations Battalions and the Marine Special Operations Advisor Group—will undergo an additional assessment that measures mental and physical qualifications. As indicated by the Command’s plans, personnel who successfully complete this assessment will be provided with additional baseline special operations training prior to being assigned to one of the Command’s warfighter units. The Marine Special Operations School also provides training to personnel in special operations companies. This training consists of both classroom instruction and the practical application of specialized skills. For example, the school has provided training to personnel in skills such as precision shooting, close quarters battle, and special reconnaissance techniques. In addition, the school’s instructors conduct exercises to train the special operations companies on the unit’s tactics, techniques, and procedures, as well as predeployment training events, to certify the companies are capable of performing the primary special operations missions assigned to these units. The Command’s Marine Special Operations Advisor Group has also developed a comprehensive training program designed to build the individual and collective skills required to perform the unit’s mission to provide military training and advisor support to foreign forces. The program includes individual training for skills such as light infantry tactics and cultural and language training, as well as training for advanced skills in functional areas such as communications, intelligence, and medical training. The training program culminates with a capstone training event that evaluates the proficiency of personnel in mission-essential skills. The training event is used as a means of certifying that these units are trained to perform their assigned missions. In addition, Marine Corps special operations companies and Marine Special Operations Advisor Group teams conduct unit training to prepare for the missions that will be performed during deployments. According to officials with these units, this training is tailored to prepare personnel for the specific tasks that will likely be performed during the deployment. For example, officials stated that unit training may include enhanced language and cultural awareness training for specific countries and training in environmental terrains where these units will be deployed. Marine Corps special operations forces have used conventional Marine Corps training standards to prepare personnel and units to conduct some special operations missions. 
Officials with the Marine Corps special operations command and its subordinate units told us that its special operations forces units have trained personnel in some skills based on the training programs for conventional units that were assigned some special operations missions prior to the activation of the Command. For example, according to Marine Corps policy, the service formerly deployed specially organized, trained, and equipped forces as part of the Marine Expeditionary Units that were capable of conducting some special operations missions, such as direct action operations. Officials with the Marine Corps special operations command and the Marine Corps Special Operations Battalions told us that the special operations companies have been provided with training for skills such as urban sniper, specialized demolitions, and dynamic assault that is based largely on the training and standards for these skills that were established for conventional Marine Corps forces. For other skills, Marine Corps special operations forces personnel have reviewed and incorporated the training plans that USSOCOM’s Army, Navy, and Air Force service components use to prepare their special operations forces. Marine Corps special operations command officials told us that conventional Marine Corps units are not typically trained in many of the advanced skills required to perform some special operations missions, such as counterterrorism and unconventional warfare. To develop programs to train personnel on the skills required to perform these and other special operations missions, Marine Corps special operations forces have incorporated the training and standards from the training publications of the U.S. Army Special Operations Command, the Naval Special Warfare Command, and the Air Force Special Operations Command. However, according to a senior USSOCOM official, Marine Corps special operations forces have had the discretion to select the standards to use when training forces to perform special operations skills. During our review, we met with servicemembers who had recently completed deployments with Marine Corps special operations forces units as well as with servicemembers who were preparing for planned deployments. In general, these servicemembers told us that they believed they were adequately trained and prepared to perform their assigned missions. Team leaders with the Marine Special Operations Advisor Group, for example, stated that they received sufficient guidance to properly plan and execute special operations missions during deployments to train and advise foreign military forces. However, at the time of our work, the Marine Corps special operations companies that participated in the first deployments of these units had not yet completed their deployments. As a result, we were unable to discuss whether the training that was provided was adequate to fully meet their mission requirements. USSOCOM has not formally validated that the training used to prepare Marine Corps forces meets special operations standards and prepares forces to be fully interoperable with the department’s other special operations forces. The Marine Corps special operations command has made progress in developing and implementing training programs for Marine Corps special operations forces. However, the Command has not used common training standards for special operations skills because USSOCOM has not developed common training standards for many skills, although work to establish common standards is ongoing. 
USSOCOM officials stated the headquarters and the service components are working to develop common training standards, where appropriate, because USSOCOM recognizes that the service-specific training conducted for advanced special operations skills may not optimize opportunities for commonality, jointness, or efficiency. In addition, USSOCOM officials told us that common training standards would further promote departmentwide interoperability goals, address potential safety concerns, and provide greater assurances to future joint force commanders that special operations forces are trained to similar standards. Our prior work has shown that the lack of commonality in training standards for joint operations creates potentially hazardous conditions on the battlefield. For example, we reported in 2003 that the military services and the special operations community did not use common standards to train personnel to control air support of ground forces. In particular, we found that the standards for these personnel in special operations units differed among the Army, Navy, and Air Force because personnel were required to meet their service-specific training requirements, which led to hesitation by commanders in Afghanistan to employ some special operations forces personnel to direct air support of ground forces. In 2005, USSOCOM established minimum standards for training, qualifying, evaluating, and certifying special operations forces personnel who control air support of ground forces. USSOCOM formalized a process in 2006 to establish and validate common training standards for special operations skills. As part of this process, USSOCOM established a working group comprised of representatives from USSOCOM and each service component to determine the baseline tasks that define the training standard and the service component training requirements for special operations skills. According to a USSOCOM official, the working group first identified the common training requirements and standards for the skills of military free fall and combat dive. In addition, USSOCOM and its service components are working incrementally to identify common training standards for other special operations skills, such as the training required for personnel assigned to combined joint special operations task forces. However, officials with USSOCOM and the Marine Corps special operations command told us the process to establish common training standards for applicable special operations skills will likely take a considerable amount of time to complete due to the number of advanced special operations skills and the challenge of building consensus among the service components on what constitutes a common training standard. Furthermore, USSOCOM has not formally validated whether the training used to prepare Marine Corps forces meets special operations standards and prepares forces to be fully interoperable with the department’s other special operations forces. USSOCOM has taken some limited steps to evaluate the training provided to Marine Corps special operations forces. In November 2006, for example, USSOCOM representatives attended a training exercise on Marine Corps Base Camp Pendleton for a Marine special operations company that was preparing for an upcoming deployment. In addition, USSOCOM representatives observed training exercises in February 2007 for Marine Special Operations Advisor Group teams that were preparing to deploy. 
A USSOCOM official told us that the purpose of these evaluations was to observe some of the planned training tasks and focus on areas where USSOCOM could assist the Marine Corps special operations command in future training exercises. However, USSOCOM has not formally assessed the training programs used by the Marine Corps special operations command to prepare its forces for deployments, despite the fact that USSOCOM is responsible for evaluating the effectiveness of all training programs and ensuring the interoperability of all of DOD’s special operations forces. Our review of the reports prepared for USSOCOM leadership and provided to Marine Corps personnel showed that they did not contain a formal evaluation of the training content and they did not provide an assessment of the standards used during the training to determine whether the training was in accordance with special operations forces standards. Officials with the Marine Corps special operations command and its subordinate units told us that USSOCOM has not been extensively involved in the development of Marine Corps special operations forces training programs and the performance standards used to train Marine Corps special operations forces. In addition, USSOCOM officials told us that a formal assessment of Marine Corps training programs has not occurred, and will likely not occur, because the management of the Marine Corps special operations command’s training programs is, like the other service components, a responsibility delegated to the Marine Corps component commander. These officials told us the service component commander has the primary responsibility for establishing training programs and certifying that special operations forces are capable of performing special operations missions prior to deployments. In addition, a USSOCOM official stated that any training-related issues affecting the readiness of special operations forces are identified in readiness reports and are discussed during monthly meetings between senior USSOCOM leadership and the service component commanders. However, without common training standards for special operations skills or a formal validation of the training used to prepare Marine Corps special operations forces for planned deployments in the near term, USSOCOM cannot demonstrate the needed assurances to the geographic combatant commanders that Marine Corps special operations forces are trained to special operations forces standards and that these forces meet departmentwide interoperability goals for special operations forces, thereby potentially affecting the success of future joint operations. Since activating a Marine Corps component to USSOCOM, the Marine Corps has made considerable progress integrating into the special operations force structure, and several Marine Corps units have successfully completed deployments to train foreign military forces—a key focus area in DOD’s strategy for the war on terrorism. The Marine Corps has also taken an initial step to meet the unique personnel needs of its special operations command. However, it does not have complete information on all of the critical skills and additional training required of its personnel in special operations forces units. This information would enable the Marine Corps to assess the effectiveness of its human capital planning to date and build consensus on the development of alternative approaches for the management of its personnel assigned to special operations forces units. 
Until the Marine Corps develops a strategic human capital approach that is based on an analysis of the critical skills and training required of personnel in Marine Corps special operations forces units, it may be unable to align its personnel with the Marine Corps special operations command’s actual workforce requirements, which could jeopardize the long-term success of this new Command. The Marine Corps special operations command faces an additional challenge in training its forces to special operations forces standards and meeting DOD interoperability goals because USSOCOM has not yet established common training standards for many advanced skills. In the absence of common training standards, the Marine Corps special operations command is training its newly established special operations forces units in some skills that were not previously trained in conventional Marine Corps units. Unless USSOCOM validates that the training currently being used to prepare Marine Corps special operations forces is effective and meets DOD’s interoperability goals, it will be unable to ensure that Marine Corps special operations forces are interoperable with other special operations forces in the department, thereby potentially affecting the success of future joint operations. To facilitate the development of a strategic human capital approach for the management of personnel assigned to the Marine Corps special operations command and to validate that Marine Corps special operations forces are trained to be fully interoperable with DOD’s other special operations forces, we recommend that the Secretary of Defense take the following two actions. Direct the Commandant of the Marine Corps to direct the Commander, Marine Corps Forces Special Operations Command, to conduct an analysis of the critical skills and competencies required of personnel in Marine Corps special operations forces units and establish milestones for conducting this analysis. This analysis should be used to assess the effectiveness of current assignment policies and to develop a strategic human capital approach for the management of these personnel. Direct the Commander, USSOCOM, to establish a framework for evaluating Marine Corps special operations forces training programs, including their content and standards, to ensure the programs are sufficient to prepare Marine Corps forces to be fully interoperable with DOD’s other special operations forces. In written comments on a draft of this report, DOD generally concurred with our recommendations and noted that actions consistent with the recommendations are underway. DOD’s comments are reprinted in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD partially concurred with our recommendation to require the Commandant of the Marine Corps to direct the Commander, Marine Corps Forces Special Operations Command, to establish milestones for conducting an analysis of the critical skills and competencies required in Marine Corps special operations forces units and, once completed, use this analysis to assess the effectiveness of current assignment policies and develop a strategic human capital approach for the management of these personnel. DOD stated that the Marine Corps special operations command is currently conducting a detailed analysis of the critical skills and competencies required to conduct the missions assigned to the Command. 
The department further noted that the Command will also fully develop mission-essential task lists, and individual and collective training standards in order to clearly state the requirements for training and personnel. DOD also stated that USSOCOM is providing assistance so that these processes are integrated with USSOCOM’s development of the Joint Training System, which is mandated by the Chairman of the Joint Chiefs of Staff. We believe these are important steps if fully implemented. We note, however, DOD’s response does not address the issue of milestones and gives no indication when the ongoing analysis will be completed. We believe milestones are important because they serve as a means of holding people accountable. Furthermore, DOD did not address the need for the Marine Corps to use the analysis being conducted by the Command to assess the effectiveness of the current assignment policy. Without such an assessment, neither the Marine Corps nor DOD will have needed assurances that the current Marine Corps policy for assigning personnel to its special operations command is providing DOD with an appropriate return on the investment the department is making to train Marine Corps special operations forces personnel. Moreover, without a strategic human capital approach that is based on the comprehensive analysis of the critical skills and training required of its special operations forces personnel, the Marine Corps may be unable to effectively align its personnel with the Marine Corps special operations command’s workforce requirements. DOD partially concurred with our recommendation to require the Commander, USSOCOM, to establish a framework for evaluating Marine Corps special operations forces training programs to ensure the programs are sufficient to prepare Marine Corps forces to be fully interoperable with DOD’s other special operations forces. DOD stated that USSOCOM is currently implementing the Joint Training System that is mandated by the Chairman of the Joint Chiefs of Staff Instruction 3500.01D. According to DOD, the Joint Training System will provide the framework for USSOCOM to evaluate component training programs to ensure special operations forces operational capabilities are achieved. DOD also stated that Headquarters, USSOCOM, established the Training Standards and Requirements Integrated Process Team to complement the Joint Training System, which is focusing on standardizing training for individual skills across USSOCOM, and ensuring increased efficiency and interoperability. DOD stated that USSOCOM delegates many authorities to its service component commanders, including training their service-provided forces. DOD further stated that the Marine Corps special operations command has established the Marine Corps Special Operations School, which is tasked with evaluating all unit training programs to assess their combat capability and interoperability with special operations forces. While we agree that implementing the Joint Training System and standardizing training through the integrated process team will help ensure the interoperability of Marine Corps special operations forces, according to USSOCOM officials, these efforts will likely take several years to complete. We continue to believe that in the near term, USSOCOM needs to evaluate the Marine Corps special operations forces training programs that are currently being conducted. 
While the Marine Corps has trained its conventional forces in skills related to the special operations forces’ core tasks of direct action and special reconnaissance, it has not traditionally trained its forces in other special operations forces core tasks, such as unconventional warfare. For this reason, it is incumbent on USSOCOM to validate the ongoing training to ensure these new Marine Corps special operations forces units are adequately prepared to perform all of their assigned missions and are interoperable with DOD’s other special operations forces. We are sending a copy of this report to the Secretary of Defense, the Secretary of the Navy, the Commandant of the Marine Corps, and the Commander, United States Special Operations Command. We will also make copies available to other interested parties upon request. In addition, this report will be made available at no charge on the GAO Web site at www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To assess the extent to which the Marine Corps special operations command (Command) has identified the force structure needed to perform its mission, we identified and reviewed Department of Defense (DOD) reports related to the department’s efforts to increase the size of special operations forces by integrating Marine Corps forces into the U.S. Special Operations Command (USSOCOM). These documents included the 2002 Special Operations Forces Realignment Study, the 2006 Operational Availability Study, the 2006 Quadrennial Defense Review Report, and the 2006 Unified Command Plan. We analyzed available internal DOD documentation such as briefings, guidance, and memoranda that identified DOD’s plans and time frames for establishing the Marine Corps special operations command. We discussed with officials at DOD organizations the processes that DOD utilized to determine and implement the plans for the new Command. These organizations include, but are not limited to, the Office of the Secretary of Defense, Assistant Secretary of Defense for Special Operations and Low Intensity Conflict; the Joint Staff, Force Structure, Resources, and Assessment Directorate; Marine Corps Plans, Policies, and Operations; Marine Corps Combat Development Command; and Marine Corps Manpower and Reserve Affairs. We also interviewed officials with USSOCOM and the special operations components of the U.S. Central Command and U.S. Pacific Command to determine the role of these commands in the decision-making processes. We reviewed prior GAO reports and the Government Performance and Results Act of 1993 that discuss key elements of effective strategic planning. We interviewed officials from the Marine Corps special operations command to determine the status of the Command’s efforts to activate Marine Corps special operations forces units and discussed the challenges the Command has identified that may affect the Command’s ability to meet its full range of responsibilities. We analyzed documents that describe the Marine Corps special operations command’s proposals to readjust its force structure to overcome its identified challenges. We discussed the status of these proposals with officials from the Marine Corps special operations command and Headquarters, Marine Corps. 
However, at the time of our review, the Marine Corps special operations command had not finalized decisions on proposed changes to its force structure and concepts of employment for its special operations forces units. As a result, we were unable to assess the extent to which any proposed changes to the Command’s force structure would mitigate identified challenges and specified personnel shortfalls. To assess the extent to which the Marine Corps has determined a strategic human capital approach to manage the critical skills and competencies required of personnel in its special operations command, we examined relevant Marine Corps policies for assigning personnel to conventional force units and the service’s interim policy for assigning personnel to special operations forces units. We interviewed officials from the Marine Corps special operations command and Headquarters, Marine Corps, to discuss the service’s career progression models for personnel assigned to Marine Corps special operations forces units. We also reviewed DOD plans to increase the active duty end-strength of the Marine Corps, and interviewed officials from Headquarters, Marine Corps, to discuss the service’s strategy to meet the personnel needs of its special operations forces units and its conventional force units. We analyzed the Marine Corps special operations command’s planned force structure and interviewed officials with Headquarters, Marine Corps, and the Marine Corps special operations command to determine the challenges the Marine Corps may face in developing a long-term plan to assign personnel to its special operations forces units. To better understand the unique personnel needs of the Marine Corps special operations command, we interviewed officials from the Command to discuss the specialized skills and training that are required by personnel who are assigned to special operations forces units to perform the Command’s assigned missions. We reviewed available documentation on the current and proposed training plans that identify the critical skills and training that will be provided to Marine Corps special operations forces personnel, and we interviewed officials with the Command to discuss the status of their efforts to fully identify all special operations critical skills and training requirements. We reviewed congressional testimony by the Commander of the Marine Corps special operations command and relevant Command planning documents to identify the Marine Corps special operations command’s goal for a human capital plan that supports its assigned missions. We examined USSOCOM annual reports and strategic planning documents relevant to the Marine Corps special operations command, and interviewed USSOCOM officials to discuss the management of special operations forces personnel. We also reviewed our past reports that discuss effective strategies for workforce planning. To assess the extent to which USSOCOM has determined whether Marine Corps special operations training programs are preparing these forces for assigned missions, we examined relevant laws and DOD doctrine related to the responsibilities of the Marine Corps and USSOCOM for training special operations forces personnel. We analyzed Marine Corps special operations command and USSOCOM training guidance for special operations forces. 
We examined USSOCOM documents related to the processes in place to establish common training standards for advanced special operations skills, and interviewed officials to discuss the status of USSOCOM’s efforts to establish common training standards for special operations skills. We examined available documents that detail training programs for Marine Corps special operations forces. We interviewed officials from the Marine Corps special operations command and USSOCOM to discuss the processes used to identify and select training standards for special operations skills. We collected and analyzed documents related to USSOCOM’s evaluations of Marine Corps special operations forces training, and we discussed the efforts that have been taken by the Marine Corps special operations command and USSOCOM to assess the effectiveness of these training programs. We conducted our work from August 2006 through July 2007 in accordance with generally accepted government auditing standards. Using our assessment of data reliability, we concluded that the data used to support this review were sufficiently reliable to answer our objectives. We interviewed the source of these data to determine how data accuracy was ensured, and we discussed their data collection methods, standard operating procedures, and other internal control measures. We interviewed officials and obtained documentation at the following locations:

Office of the Secretary of Defense, Office of the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict
Joint Staff, Force Structure, Resources, and Assessment Directorate, J8
U.S. Marine Corps Headquarters (Combat Development Command)
U.S. Marine Corps Headquarters (Installations and Logistics Department)
U.S. Marine Corps Headquarters (Intelligence Department)
U.S. Marine Corps Headquarters (Manpower and Reserve Affairs)
U.S. Marine Corps Headquarters (Plans, Policies, and Operations)
U.S. Marine Corps Headquarters (Programs and Resources)
U.S. Marine Corps Headquarters (Training and Education Command)

In addition to the contact named above, Carole Coffey, Assistant Director; Renee Brown; Jason Jackson; David Malkin; Karen Thornton; and Matthew Ullengren also made key contributions to this report.
The Department of Defense (DOD) has relied on special operations forces to conduct military operations in Afghanistan and Iraq and to perform other tasks such as training foreign military forces. To meet the demand for these forces, DOD established a Marine Corps service component under the U.S. Special Operations Command (USSOCOM) to integrate Marine Corps forces. Under the authority of the Comptroller General, GAO assessed the extent to which (1) the Marine Corps special operations command has identified its force structure requirements, (2) the Marine Corps has developed a strategic human capital approach to manage personnel in its special operations command, and (3) USSOCOM has determined whether Marine Corps training programs are preparing its forces for assigned missions. GAO performed its work with the Marine Corps and USSOCOM and analyzed DOD plans for this new command. While the Marine Corps has made progress in establishing its special operations command (Command), the Command has not yet fully identified the force structure needed to perform its assigned missions. DOD developed initial force structure plans to establish the Command; however, it did not use critical practices of strategic planning, such as the alignment of activities and resources and the involvement of stakeholders in decision-making processes when developing these plans. As a result of limitations in the strategic planning process, the Command has identified several force structure challenges that will likely affect the Command's ability to perform its full range of responsibilities, and is working to revise its force structure. Although preliminary steps have been taken, the Marine Corps has not developed a strategic human capital approach to manage the critical skills and competencies required of personnel in its special operations command. While the Command has identified some skills needed to perform special operations missions, it has not conducted a comprehensive analysis to determine all of the critical skills and incremental training required of personnel in its special operations forces units. These analyses are critical to the Marine Corps' efforts to develop a strategic human capital approach for the management of personnel in its special operations forces units. Without the benefit of these analyses, the Marine Corps has developed an interim policy to assign some personnel to special operations forces units for extended tour lengths to account for the additional training and skills; however, the policy is inconsistent with the Command's goal for the permanent assignment of some personnel within the special operations community. Until the Command completes an analysis to identify and document the critical skills and competencies needed by its future workforce to perform its full range of special operations missions, the Marine Corps will not have a sound basis for developing or evaluating alternative strategic human capital approaches for managing personnel assigned to its special operations forces units. USSOCOM does not have a sound basis for determining whether the Command's training programs are preparing units for their missions because it has not established common training standards for many special operations skills and it has not formally evaluated whether these programs prepare units to be fully interoperable with other special operations forces. 
The Command is providing training to its forces that is based on training programs for conventional units that were assigned some special operations missions prior to the Command's activation and incorporates the training that USSOCOM's other service components provide to their forces. However, USSOCOM has not validated that the training for Marine Corps forces prepares them to be fully interoperable with DOD's other special operations forces. Without an evaluation, USSOCOM cannot demonstrate the needed assurances that Marine Corps forces are fully interoperable with its other forces, which may jeopardize the success of future joint missions.
The Homeland Security Act, as well as other statutes, provides legal authority for both cross-sector and sector-specific protection and resiliency programs. For example, the purpose of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 is to improve the ability of the United States to prevent, prepare for, and respond to acts of bioterrorism and other public health emergencies, and the Pandemic and All-Hazards Preparedness Act of 2006 addresses public health security and all-hazards preparedness and response. Also, the Cyber Security Research and Development Act of 2002 authorized funding for the National Institute of Standards and Technology and the National Science Foundation to facilitate increased research and development for computer and network security and to support research fellowships and training. Critical infrastructure and key resources (CIKR) protection issues are also covered under various homeland security presidential directives (HSPD), including HSPD-5 and HSPD-8. HSPD-5 calls for coordination among all levels of government as well as between the government and the private sector for domestic incident management, and HSPD-8 establishes policies to strengthen national preparedness to prevent, detect, respond to, and recover from threatened domestic terrorist attacks and other emergencies. These separate authorities and directives are tied together as part of the national approach for CIKR protection through the unifying framework established in HSPD-7. The National Infrastructure Protection Plan (NIPP) outlines the roles and responsibilities of the Department of Homeland Security (DHS) and its partners—including other federal agencies; state, local, territorial, and tribal governments; and private companies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection across 18 CIKR sectors. HSPD-7 and the NIPP assign responsibility for CIKR sectors to sector-specific agencies (SSAs). As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 11 of the 18 CIKR sectors. The remaining sectors are coordinated by 8 other federal agencies. Table 1 lists the SSAs and their sectors. DHS’s Office of Infrastructure Protection (IP), located in the National Protection and Programs Directorate, is responsible for working with public- and private-sector CIKR partners and leads the coordinated national effort to mitigate risk to the nation’s CIKR through the development and implementation of the CIKR protection program. Using a sector partnership model, IP’s Partnership and Outreach Division (POD) works with owners and operators of the nation’s CIKR to develop, facilitate, and sustain strategic relationships and information sharing, including the sharing of best practices. The POD also works with public and private partners to coordinate efforts to establish and operate various councils intended to protect CIKR and provide CIKR functions to strengthen incident response. These councils include sector coordinating councils (SCCs), which coordinate sectorwide CIKR activities and initiatives among private sector owners, operators, and trade associations in each of the 18 sectors, and government coordinating councils (GCCs), which represent federal, state, and local government and tribal interests to support the effort of SCCs to develop collaborative strategies for CIKR protection for each of the 18 sectors. 
The partnership model also includes various cross-sector councils, including the CIKR Cross-Sector Council, which addresses cross-sector issues and interdependencies among SCCs; the NIPP Federal Senior Leadership Council, which focuses on enhanced communication and coordination between and among federal departments and agencies responsible for implementing the NIPP and HSPD-7; and the State, Local, Tribal, and Territorial Government Coordinating Council, which promotes coordination across state and local jurisdictions. The model also includes a Regional Consortium Coordinating Council, which brings together representatives of regional partnerships, groupings, and governance bodies to foster coordination among CIKR partners within and across geographical areas and sectors. Figure 1 illustrates the sector partnership model and the interrelationships among the various councils, sectors, and asset owners and operators. IP’s Protective Security Coordination Division (PSCD) also operates the Protective Security Advisor Program, which deploys critical infrastructure protection and security specialists, called PSAs, to local communities throughout the country. Established in 2004, the program has 93 PSAs serving in 74 districts in 50 states and Puerto Rico, with deployment locations based on population density and major concentrations of CIKR throughout the United States. PSAs lead IP’s efforts in these locations and act as the link between state, local, tribal, and territorial organizations and DHS infrastructure mission partners. PSAs are to assist with ongoing state and local CIKR security efforts by establishing and maintaining relationships with state Homeland Security Advisors, State Critical Infrastructure Protection stakeholders, and other state, local, tribal, territorial, and private-sector organizations. PSAs are to support the development of the national risk picture by conducting vulnerability and security assessments to identify security gaps and potential vulnerabilities in the nation’s most critical infrastructures. PSAs also are to share vulnerability information and protective measure suggestions with local partners and asset owners and operators. In addition, PSAs are to coordinate training for private- and public-sector officials in the communities in which they are located; support incident management; and serve as a channel of communication for state, local, tribal, and territorial officials and asset owners and operators seeking to communicate with DHS. “Despite ongoing vigilance and efforts to protect this country and its citizens, major accidents and disasters, as well as deliberate attacks, will occur. The challenge is to build the capacity of American society to be resilient in the face of disruptions, disasters, and other crises. 
Our vision is a Nation that understands the hazards and risks we face; is prepared for disasters; can withstand the disruptions disasters may cause; can sustain social trust, economic, and other functions under adverse conditions; can manage itself effectively during a crisis; can recover quickly and effectively; and can adapt to conditions that have changed as a result of the event.” The report also articulates that one of the goals for this mission is to “Rapidly Recover.” The two objectives associated with this goal are to (1) enhance recovery capabilities: establish and maintain nationwide capabilities for recovery from major disasters and (2) ensure continuity of essential services and functions: improve capabilities of families, communities, private-sector organizations, and all levels of government to sustain essential services and functions. Consistent with recent changes to the NIPP, DHS has begun to increase its emphasis on resiliency in the various programs it uses to assess vulnerability and risk at and among CIKR facilities so that it can help asset owners and operators identify resiliency characteristics of their facilities and provide suggested actions, called options for consideration, to help them mitigate gaps that have been identified. However, DHS has not developed an approach to measure owners’ and operators’ actions to address resiliency gaps identified as a result of these assessments. DHS has also begun to train PSAs about resiliency and how it applies to asset owners and operators, but it has not updated guidance that discusses PSAs’ roles and responsibilities to explicitly include resiliency and resiliency strategies. In March 2010 we reported that DHS has increased its emphasis on resiliency in the 2009 NIPP by, among other things, generally pairing it with the concept of protection. We further stated that DHS has encouraged SSAs to emphasize resiliency in guidance provided to them in updating their sector-specific plans. Consistent with these efforts, DHS has also taken action to develop or enhance the programs it uses to work with asset owners and operators to bring a stronger focus to resiliency. In 2009 DHS developed the RRAP to assess vulnerability and risk associated with resiliency. The RRAP is an analysis of groups of related infrastructure, regions, and systems in major metropolitan areas. The RRAP evaluates CIKR on a regional level to examine vulnerabilities, threats, and potential consequences from an all-hazards perspective to identify dependencies, interdependencies, cascading effects, resiliency characteristics, and gaps. In conducting the RRAP, DHS does an analysis of a region’s CIKR and protection and prevention capabilities and focuses on (1) integrating vulnerability and capability assessments and infrastructure protection planning efforts; (2) identifying security gaps and corresponding options for considerations to improve prevention, protection, and resiliency; (3) analyzing system recovery capabilities and providing options to secure operability during long-term recovery; and (4) assessing state and regional resiliency, mutual aid, coordination, and interoperable communication capabilities. RRAP assessments are to be conducted by DHS officials, including PSAs, in collaboration with SSAs: other federal officials; state, local, tribal, and territorial officials; and the private sector depending upon the sectors and facilities selected as well as a resiliency subject matter expert(s) deployed by the state’s homeland security agency. 
The results of the RRAP are to be used to enhance the overall security posture of the facilities, surrounding communities, and the geographic region covered by the project and are shared with the state. According to DHS officials, the results of specific asset-level assessments conducted as part of the RRAP are made available to asset owners and operators and other partners (as appropriate), but the final analysis and report is delivered to the state where the RRAP was conducted. One of the assessment tools DHS developed for the RRAP analysis is a “resiliency assessment builder,” which contains a series of questions designed to help officials identify resiliency issues associated with facilities included in the RRAP. The resiliency assessment builder, among other things, focuses on: the impact of loss associated with the facility, including any national security, sociopolitical, and economic impacts; interdependencies between the facility under review and other infrastructure—such as electrical power or natural gas suppliers, water, and supply chain systems—that if disrupted, could cause deterioration or cessation of facility operations; the impact of the loss of significant assets—such as an electrical substation to provide power or a rail spur to transport supplies— critical to the operation of the facility and backup systems available to maintain operations if losses occur; and specific vulnerabilities, unusual conditions, threats, or events—such as hurricanes, transportation chokepoints, or hazardous materials issues—that could disrupt operations and whether the facility is prepared to address the situation via specific capabilities or an action plan. Senior IP officials told us that they believe the RRAP has been successful in helping DHS understand resiliency in the context of interdependencies among individual assets. For example, while the focus of the Tennessee Valley Authority RRAP was energy sector sites and resources, DHS and its partners examined sites and resources in those sectors, like water and dams, which appeared to be obvious interdependencies. However, they also found that they needed to examine sites and resources in those sectors that appeared less obvious but were interdependent because they were intricately connected to the Tennessee Valley Authority operations, like sites and resources in the transportation sector. Also, in fiscal year 2010, DHS started an RRAP in Atlanta that focused primarily on commercial facilities. DHS’s related vulnerability assessment of sites (see the discussion below for additional details of these assessments) and resources associated with the water sector in Atlanta showed that an accident or attack involving one component of the water sector could disrupt the operations of sites or resources of other sectors in the geographic area covered by the RRAP. By discovering this vulnerability, and taking steps to address it, asset owners and operators in various sectors that were provided this information were better positioned to be able to work together to mitigate this potential problem. Senior IP officials said that the overall RRAP effort was piloted in five projects, but they no longer consider it a pilot program. They added that they plan to conduct five other RRAPs in 2010 in addition to the one already started in Atlanta. 
They further stated that because the program focuses only on areas with a high density of critical assets, they plan to develop a new “mini-RAP.” According to these officials, the mini-RAP is intended to provide assessments similar to those provided during an RRAP (but on a reduced scale) to groups of related infrastructure or assets that are not selected to receive an RRAP. An IP official stated that he anticipates that the mini- RAP, which is under development, will be finalized in October 2010. DHS is also revising another vulnerability assessment called the SAV to foster greater emphasis on resiliency at individual CIKR sites. The SAV, which is a facility-specific “inside-the-fence” vulnerability assessment conducted at the request of asset owners and operators, is intended to identify security gaps and provide options for consideration to mitigate these identified gaps. SAVs are conducted at individual facilities or as part of an RRAP and are conducted by IP assessment teams in coordination with PSAs, SSAs, state and local government organizations (including law enforcement and emergency management officials), asset owners and operators, and the National Guard, which is engaged as part of a joint initiative between DHS and the National Guard Bureau. The National Guard provides teams of subject matter experts experienced in conducting vulnerability assessments. The private sector asset owners and operators that volunteer for the SAV are the primary recipient of the SAV analysis, which produces options for consideration to increase their ability to detect and prevent terrorist attacks. In addition, it provides mitigating options that address the identified vulnerabilities of the facility. The SAV is developed using a questionnaire that focuses on various aspects of the security of a facility, such as vulnerabilities associated with access to facility air handling systems; physical security; and the ability to deter or withstand a blast or explosion. Our review of the SAV questionnaire showed that it focuses primarily on vulnerability issues related to the protection of the facility. The SAV questionnaire also contains some questions that focus on resiliency issues because it asks questions about backup systems or contingencies for key systems, such as electrical power, transportation, natural gas, water, and telecommunications systems. Officials with IP’s PSCD said that they are working with IP’s Field Operations Branch to update the SAV to include more questions intended to capture the resiliency of a facility, especially since the SAV is used during the RRAP. They said that the effort is ongoing and, as of June 8, 2010, DHS had developed a time line showing the revised SAV is to be introduced in October or November 2010. DHS is also revising its ECIP security survey to further focus on resiliency at individual facilities. Under the ECIP survey, PSAs meet with facility owners and operators in order to provide awareness of the many programs, assessments, and training opportunities available to the private sector; educate owners and operators on security; and promote communication and information sharing among asset owners and operators, DHS, and state governments. ECIP visits are also used to conduct security surveys using the ECIP security survey, a Web-based tool developed by DHS to collect, process, and analyze vulnerability and protective measures information during the course of a survey. 
The ECIP security survey is also used to develop metrics; conduct sector-by-sector and cross-sector vulnerability comparisons; identify security gaps and trends across CIKR sectors and sub-sectors; establish sector baseline security survey scores; and track progress toward improving CIKR security through activities, programs, outreach, and training. Our review of the ECIP security survey showed that the original version of the survey made references to resiliency-related concepts—business continuity plans and continuity of operations. The newest version of the survey, published in June 2009, contains additional references to resiliency and resiliency- related concepts, including identifying whether or not a facility has backup plans for key resources such as electrical power, natural gas, telecommunications, and information technology systems. It is also used to identify key dependencies critical to the operation of the facility, such as water and wastewater, and to state whether backup plans exist for service or access to these dependencies in the event of an interruption. Further, senior IP officials told us that in addition to the updates on resiliency in the latest version of the ECIP security survey, they plan to incorporate 22 additional questions to a subsequent update of the survey that will focus on determining the level of resiliency of a facility. According to these officials, DHS also intends to use the updated survey to develop a resiliency “dashboard” for CIKR owners and operators that is intended to provide them a computerized tool that shows how the resiliency of their facility compares with other similar facilities (see the discussion below for a more detailed discussion of DHS’s ECIP dashboard). A DHS document on revisions to the SAV showed that the revised ECIP security survey is to be introduced at the same time as the revised SAV (October or November 2010) so that data collection associated with each remains compatible. DHS’s current projected release of the updated ECIP security survey is planned for October 2010. DHS intends to take further actions to enhance the programs and tools it uses to work with asset owners and operators when assessing resiliency, but it has not developed an approach to measure its effectiveness in working with asset owners and operators in their efforts to adopt measures to mitigate resiliency gaps identified during the various vulnerability assessments. According to the NIPP, the use of performance measures is a critical step in the NIPP risk management process to enable DHS and the SSAs to objectively and quantitatively assess improvement in CIKR protection and resiliency at the sector and national levels. The NIPP states that while the results of risk analyses help sectors set priorities, performance metrics allow NIPP partners to track progress against these priorities and provide a basis for DHS and the SSAs to establish accountability, document actual performance, facilitate diagnoses, promote effective management, and provide a feedback mechanism to decision makers. Consistent with the NIPP, senior DHS officials told us that they have recently begun to measure the rate of asset owner and operator implementation of protective measures following the conduct of the ECIP security survey. 
Specifically, in a June 2010 memorandum to the Assistant Secretary for the National Protection and Programs Directorate (NPPD), the Acting Director of PSCD stated that 234 (49 percent) of 473 sites where the ECIP security survey had been conducted implemented protective measures during the 180-day period following the conduct of the ECIP survey. The Acting Director reported that the 234 sites made a total of 497 improvements across the various categories covered by the ECIP security survey, including information sharing, security management, security force, physical security, and dependencies, while 239 sites reported no improvements during the period. The Acting Director stated that the metrics were the first that were produced demonstrating the impact of the ECIP program, but noted that PSCD is reexamining the collection process to determine whether additional details should be gathered during the update to the ECIP security survey planned for October 2010. However, because DHS has not completed its efforts to include resiliency material as part of its vulnerability assessment programs, it does not currently have performance metrics of resiliency measures taken by asset owners and operators. Moving forward, as DHS’s efforts to emphasize resiliency evolve through the introduction of new or revised assessment programs and tools, it has the opportunity to consider including additional metrics of resiliency measures adopted at the facilities it assesses for vulnerability and risk, particularly as it revises the ECIP security survey and develops the resiliency dashboard. Moreover, DHS could consider developing similar metrics for the SAV at individual facilities and for the RRAP and mini-RAP in the areas covered by RRAPs and mini-RAPs. By doing so, DHS would be able to demonstrate its effectiveness in promoting resiliency among the asset owners and operators it works with and would have a basis for analyzing performance gaps. Regarding the latter, DHS managers would have a valuable tool to help them assess where problems might be occurring or alternatively provide insights into the tools used to assess vulnerability and risk and whether they were focusing on the correct elements of resiliency at individual facilities or groups of facilities. DHS uses PSAs to provide assistance to asset owners and operators on CIKR protection strategies. Although DHS had begun to train PSAs about resiliency and how it applies to the owners and operators they interact with, DHS has not updated PSAs’ guidance that outlines their roles and responsibilities to reflect DHS’s growing emphasis on resiliency. In April 2010, DHS provided a 1-hour training course called “An Introduction to Resilience” to all PSAs at a conference in Washington, D.C. The training was designed to define resilience; present resilience concepts, including information on how resilience is tied to risk analysis and its link to infrastructure dependencies and interdependencies; discuss how resilience applies to PSAs, including a discussion of the aforementioned updates to programs and tools used to do vulnerability assessments; and explain how DHS’s focus on resilience can benefit asset owners and operators. According to the Acting Deputy Director of PSCD, PSCD is expected to deliver the training to PSAs again during regional conferences to foster further discussions about resiliency and to give PSAs an additional opportunity to ask questions about the training they received in April 2010. 
Although DHS’s training discusses how resiliency applies to PSAs and how it can benefit asset owners and operators, DHS has not updated guidance that discusses PSA roles and responsibilities related to resiliency. The guidance DHS has provided to PSAs on certain key job tasks, issued in 2008, includes discussions about how PSAs are to (1) implement their role and responsibilities during a disaster; (2) conduct vulnerability assessments; and (3) establish or enhance existing strong relationships between asset owners and operators and DHS, federal, state, and local law enforcement personnel. However, the guidance does not articulate the role of PSAs with regard to resiliency issues, or how PSAs are to promote resiliency strategies and practices to asset owners and operators. For example, our review of DHS’s engagement guidance for PSAs showed that the guidance does not explicitly discuss resiliency; rather, it focuses primarily on protection. Specifically, the executive summary of the guidance states that one of the key infrastructure protection roles for DHS in fiscal year 2008 was to form partnerships with the owners and operators of the nation’s identified high-priority CIKR, known as level 1 and level 2 assets and systems. The guidance describes particular PSA responsibilities with regard to partnerships, including (1) identifying protective measures currently in place at these facilities and tracking the implementation of any new measures into the future; (2) informing owners and operators of the importance of their facilities in light of the ever-present threat of terrorism; and (3) establishing or enhancing existing relationships between owners and operators, DHS, and federal, state, and local law enforcement personnel to provide increased situational awareness regarding potential threats, knowledge of the current security posture at each facility, and a federal resource to asset owners and operators. There is one reference to a resiliency-related concept in an appendix where DHS indicated that the criteria to identify level 2 assets in the Information Technology sector should be “those assets that provide incident management capabilities, specifically, sites needed for rapid restoration or continuity of operations.” PSA program officials said that they are currently developing guidelines on a number of issues as DHS transitions from a CIKR program heavily focused on protection to one that incorporates and promotes resiliency. They said that PSAs do not currently have roles and responsibilities specific to “resiliency” because resiliency is a concept that has only recently gained significant and specific attention. They added that PSA roles and responsibilities, while not specifically mentioning resiliency, include component topics that comprise or otherwise contribute to resiliency as it is now defined. Nonetheless, the Acting Deputy Director of IP’s PSCD said that he envisions updating PSA guidance to incorporate resiliency concepts and that he intends to outline his plan for doing so in October 2010 as part of IP’s program planning process. However, he was not specific about the changes he plans to make to address resiliency concepts or whether the PSA’s roles and responsibilities related to resiliency would be articulated. According to standards for internal control in the federal government, management is responsible for developing and documenting the detailed policies and procedures to ensure that they are an integral part of operations. 
By updating PSA guidance that discusses the role PSAs play in assisting asset owners and operators, including how PSAs can work with them to mitigate vulnerabilities and strengthen their security, PSA program officials would be better positioned to help asset owners and operators have the tools they need to develop resilience strategies. This would be consistent with DHS efforts to train PSAs about resiliency and how it affects asset owners and operators. Updating PSA guidelines to address resiliency issues would also be consistent with DHS’s efforts to treat resiliency on an equal footing with protection, and would comport with DHS guidance that calls for SSAs to enhance their discussion of resiliency and resiliency strategies in SSPs. DHS’s efforts to emphasize resiliency in the programs and tools it uses to work with asset owners and operators also create an opportunity for DHS to better position itself to disseminate information about resiliency practices to asset owners and operators within and across sectors. Currently, DHS shares information on vulnerabilities and protective measures on a case-by-case basis. However, while it is uniquely positioned and has considered disseminating information about resiliency practices, DHS faces barriers in doing so and has not developed an approach for sharing this information more broadly, across sectors. According to the NIPP, its effective implementation is predicated on active participation by government and private-sector partners in meaningful, multidirectional information sharing. The NIPP states that when asset owners and operators are provided with a comprehensive picture of threats or hazards to CIKR and participate in ongoing multidirectional information flow, their ability to assess risks, make prudent security investments, and develop appropriate resiliency strategies is substantially enhanced. Similarly, according to the NIPP, when the government is provided with an understanding of private-sector information needs, it can adjust its information collection, analysis, synthesis, and dissemination accordingly. Consistent with the NIPP, DHS shares information on vulnerabilities and potential protective measures with asset owners and operators after it has collected and analyzed information during SAVs and ECIP security surveys performed at their individual facilities. This information includes the vulnerabilities DHS has identified and corresponding steps these owners and operators can take to mitigate them, including options for consideration, which are suggestions presented to owners and operators to help them resolve vulnerabilities identified during DHS’s assessments. For example, DHS issues SAV reports to owners and operators that, among other things, identify vulnerabilities; help them identify their security posture; provide options for consideration to increase their ability to detect and prevent terrorist attacks; and enhance their ability to mitigate vulnerabilities. Regarding the ECIP security survey, DHS provides owners and operators an ECIP “dashboard,” which shows the results for each component of the survey for a facility using the Protective Measures Index (PMI), a set of scores DHS prepares for the facility and its individual components that can be compared with the scores of other similar facilities. SAV reports and the ECIP dashboard generally focus on similar protection issues, such as facility or physical security, security personnel, and access control. 
The SAV reports and the ECIP dashboard discuss some continuity of operations issues that could be considered resiliency related. For example, the ECIP dashboard contains PMIs focused on whether the facility has a continuity plan and conducts continuity exercises, while the SAV report discusses whether the facility would be able to operate if resources such as electricity, water, or natural gas were not available. As discussed earlier, DHS is currently updating the SAV to include, among other things, an assessment of resiliency characteristics and gaps, and is taking action to develop a resiliency dashboard similar to that used under the ECIP security survey. Senior IP officials also stated that they share information on steps owners and operators can take to protect their facilities via Common Vulnerabilities, Potential Indicators, and Protective Measures (CV/PI/PM) reports. DHS develops and disseminates these reports to various stakeholders, generally on a need-to-know basis, including specific owners and operators, such as those that have been included in assessments by PSAs; law enforcement officials, emergency responders, and state homeland security officials; and others who request access to the reports. These reports, which focus on vulnerabilities and security measures associated with terrorist attacks, are intended to provide information on potential vulnerabilities and specific protective measures that various stakeholders can implement to increase their security posture. According to DHS, these reports are developed based on DHS’s experiences and observations gathered from a range of security-related vulnerability assessments, including SAVs, performed at infrastructures over time, such as the chemical and commercial facilities sectors and subsectors and asset types within those sectors, such as the chemical hazardous storage industry or the restaurant industry, respectively. For example, like other CV/PI/PM reports, DHS’s report on the restaurant industry gives a brief overview of the industry; potential indicators of terrorist activity; common vulnerabilities; and protective measures. Common vulnerabilities include unrestricted public access and open access to food; potential indicators of terrorist activity include arson, small arms attack, persons wearing unusually bulky clothing to conceal explosives, and unattended packages; and protective measures include developing a comprehensive security plan to prepare for and respond to food tampering and providing appropriate signage to restrict access to nonpublic areas. The CV/PI/PM reports discuss aspects of resiliency such as infrastructure interdependencies and incident response, but they do not discuss other aspects of resiliency. For example, the report on restaurants discusses protective measures including providing security and backup for critical utility services, such as power or water––efforts that may also enhance the resiliency of restaurants. Moving forward, as its efforts to emphasize resiliency evolve, DHS could consider including other aspects of resiliency in the CV/PI/PM reports. Senior IP officials told us that they have considered ways to disseminate information that DHS currently collects or plans to collect with regard to resiliency. However, they have not explored the feasibility of developing an approach for doing so. 
Senior IP officials explained that given the voluntary nature of the CIKR partnership, DHS should not be viewed as identifying or promoting practices, particularly best practices, which could be construed to be standards or requirements. They said that DHS goes to great lengths to provide assurance to owners and operators that the information gathered during assessments will not be provided to regulators. They also stated that they provide owners and operators assurance that they will not share proprietary information with competitors. For example, certain information that they collect is protected under the Protected Critical Infrastructure Information (PCII) program, which institutes a means for the voluntary sharing of certain private sector, state, and local CIKR information with the federal government while providing assurance that the information will be exempt from disclosure under the Freedom of Information Act, among other things, and will be properly safeguarded. DHS has established a PCII program office, which among other things, is responsible for validating information provided by CIKR partners as PCII, and developing protocols to access and safeguard information that is deemed PCII. IP senior officials further explained that DHS relies on its private-sector partners to develop and share information on practices they use to enhance their protection and resilience. They said that the practices shared by sector partners, including best practices, are largely identified and developed by the private sector, at times with the support of its partners in government such as the SSAs. DHS facilitates this process by making various mechanisms available for information sharing, including information they deem to be best practices. For example, according to senior IP officials, DHS’s Homeland Security Information Network-Critical Sectors (HSIN-CS) was designed to provide each sector a portal to post useful or important information, such as activities or concepts that private-sector partners discern to be best practices on protection and resiliency topics. They also said that one factor to consider is that resiliency can mean different things to different sectors, as measures or strategies that are applicable or inherent to one sector may not be applicable to another given the unique characteristics of each sector. For example, the energy sector, which includes oil refineries, is inherently different than the government facilities sector, which includes government office buildings. In our March 2010 report on DHS’s increased emphasis on resilience in the NIPP, we reported that DHS officials told us that the balance between protection and resiliency is unique to each sector and the extent to which any one sector increases the emphasis on resiliency in its sector-specific plans will depend on the nature of the sector and the risks to its CIKR. Further, the Branch Chief of IP’s Office of Information Coordination and Analysis Office explained that differences in corporate cultures across the spectrum of companies could be a barrier to widely disseminating information on resiliency practices because it is often challenging to translate information, such as what constitutes a success or failure, from one company to another. He further stated that differences in the regulatory structures affecting different industries may be a factor that could limit the extent to which certain types of information could be disseminated. 
We recognize that DHS faces barriers to sharing information it gathers on resiliency practices within and among sectors. However, as the primary federal agency responsible for coordinating and enhancing the protection and resiliency of critical infrastructure across the spectrum of CIKR sectors, DHS is uniquely positioned to disseminate this information which would be consistent with the NIPP’s emphasis on information sharing. By working to explore ways to address any challenges or barriers to sharing resiliency information, DHS could build upon the partnering and information-sharing arrangements that CIKR owners and operators use in their own communities. For example, our work at CIKR assets along the Gulf Coast in Texas and in southern California showed that asset owners and operators viewed resiliency as critical to their facilities because it is in their best interests to either keep a facility operating during and after an event, or rebound as quickly as possible following an event. They said that they rely on a variety of sources for information to enhance their ability to be more resilient if a catastrophic event occurs, including information- sharing or partnering arrangements within and among CIKR partners and their local communities. Each of the 15 owners and operators we contacted in Texas and California said that they have partnering relationships with their sector coordinating councils, local/state government, law enforcement, emergency management, or mutual aid organizations. Furthermore, 14 of the 15 said that they work with these organizations to share information, including best practices and lessons learned, from recent disasters. Among the owners and operators we contacted: Representatives of one facility said that following a recent event, their company shared lessons learned with the local mutual aid association and various trade associations. These officials said that they also share best practices within the industry and across their facilities in other locations on an ongoing basis and that the company is currently organizing a committee made up of security staff from each facility within the organization whose primary responsibility is expected to be the sharing of best practices. Officials representing another facility told us that following an event or a drill, they critique the event and their response to garner any lessons learned or best practices. They said that they share information with the local fire department and a regional trade association. These officials stated that they will share information with other trade association members if they believe that it would be beneficial to others, but will not discuss proprietary information. Officials representing a different facility said that, following a hurricane in the same area, the company’s managers from various facilities met to share lessons learned and adopted best practices from other facilities within the same company and with external partners, including a mutual aid organization and local emergency responders. They said that they also have learned from the experiences of others— after an explosion at a similar company’s facility, they became aware that the other company had located its administration building too close to the company’s operations, thereby jeopardizing employee safety. 
By developing an approach for disseminating information it gathers or intends to gather with regard to resiliency, DHS would then be in a position to reach a broader audience across sectors or in different geographic locations. Senior IP officials said that they agree that disseminating information on resiliency practices broadly across the CIKR community would be a worthwhile exercise, but questioned whether they would be the right organization within DHS to develop an approach for sharing resiliency information. They said that IP does not currently have the resources to perform this function and suggested that an organization like the Federal Emergency Management Agency (FEMA) might be more appropriate for sharing information on resiliency because it already has mechanisms in place to share information on practices organizations can adopt to deal with all-hazards events, including terrorism. For example, FEMA manages DHS’s Lessons Learned Information Sharing portal, called LLIS.gov, which is a national online network of lessons learned and best practices designed to help emergency response providers and homeland security officials prevent, prepare for, and respond to all hazards, including terrorism. According to FEMA officials, LLIS.gov contains information on critical infrastructure protection and resiliency and system users, such as state and local government officials, are encouraged to submit content which is then vetted and validated by subject matter experts before being posted to the system. FEMA officials explained that FEMA does not actively collect information from system users, but encourages them to submit documents for review and possible inclusion into LLIS.gov. According to FEMA, access to LLIS.gov is restricted to members that request access to the system, particularly emergency response providers and homeland security officials. In March 2010, FEMA’s Outreach and Partnerships Coordinator for Lessons Learned Information Sharing told us that LLIS.gov had about 55,000 members, of which approximately 89 percent were representatives of state and local government; about 6 percent were representatives of private-sector organizations; and about 5 percent were representatives of the federal government. Regardless of which DHS organization would be responsible for disseminating information on resiliency practices, we recognize that DHS will face challenges in addressing any barriers it believes could hinder its ability to disseminate resiliency information. As part of this effort, DHS would have to determine what resiliency information it is collecting or plans to collect that might be most appropriate to share and what safeguards would be needed to protect against the disclosure of proprietary information within the confines of the voluntary nature of the CIKR partnership. Also, in doing so, DHS could consider some of the following questions: What additional actions, if any, would DHS need to take to convey that the information is being gathered within the voluntary framework of the CIKR partnership? To what extent does DHS need to take additional actions, if any, to provide assurance that the information being disseminated is nonregulatory and nonbinding on the owners and operators that access it? What additional mechanisms, if any, does DHS need to establish to provide assurance that reinforces the PCII process and how can resiliency practices information be presented to avoid disclosures of information that is PCII security sensitive or proprietary in nature? 
What mechanism or information system is most suitable for disseminating resiliency practices information, and which DHS component would be responsible for managing this mechanism or system? What approach should DHS take to review the information before it is disseminated to ensure that resiliency practices identified by DHS at one facility or in one sector are valid and viable, and applicable across facilities and sectors? What additional resources and at what additional cost, if any, would DHS need to devote to gathering and broadly disseminating information about resiliency practices across facilities and sectors? What actions can DHS take to measure the extent to which asset owners and operators are using resiliency information provided by DHS, and how can DHS use this information to make improvements, if needed? By determining the feasibility of overcoming barriers and developing an approach for disseminating resiliency information, DHS could better position itself to help asset owners and operators consider and adopt resiliency strategies, and provide them with information on potential security investments, based on the practices and experiences of their peers both within and across sectors. In the wake of concerns by stakeholders, including members of Congress, academia, and the private sector, that DHS was placing emphasis on protection rather than resilience, DHS has increased its emphasis on critical infrastructure resiliency in the NIPP. Consistent with these changes, DHS has also taken actions to increase its emphasis on resilience in the programs and tools it uses to assess vulnerability and risk that are designed to help asset owners and operators identify resiliency characteristics and gaps. These actions continue to evolve and could be improved if DHS were to strengthen program management by developing measures to assess the extent to which asset owners and operators are taking actions to address resiliency gaps identified during vulnerability assessments; and updating PSA guidelines to articulate PSA roles and responsibilities with regard to resiliency during their interactions with asset owners and operators. By developing performance measures to assess the extent to which asset owners and operators are taking actions to resolve resiliency gaps identified during the various vulnerability assessments, DHS would, consistent with the NIPP, be better positioned to demonstrate effectiveness in promoting resiliency among the asset owners and operators it works with and would have a basis for analyzing performance gaps. DHS managers would also have a valuable tool to help them assess where problems might be occurring, or alternatively provide insights into the tools used to assess vulnerability and risk and whether they were focusing on the correct elements of resiliency at individual facilities or groups of facilities. Furthermore, by updating PSA guidance to discuss the role PSAs play during interactions with asset owners and operators, including how PSAs can work with them to mitigate vulnerabilities and strengthen their security, DHS would have greater assurance that PSAs are equipped to help asset owners and operators have the tools they need to develop resilience strategies. This would also be consistent with DHS efforts to train PSAs about resiliency and how it affects asset owners and operators. 
Related to its efforts to develop or update its programs designed to assess vulnerability at asset owners’ and operators’ individual facilities and groups of facilities, DHS has considered how it can disseminate information on resiliency practices it gathers or plans to gather with asset owners and operators within and across sectors. However, it faces barriers in doing so because it would have to overcome perceptions that it is advancing or promoting standards that have to be adopted and concerns about sharing proprietary information. We recognize that DHS would face challenges disseminating information about resiliency practices within and across sectors, especially since resiliency can mean different things to different sectors. Nonetheless, as the primary federal agency responsible for coordinating and enhancing the protection and resiliency of critical infrastructure across the spectrum of CIKR sectors, DHS is uniquely positioned to disseminate this information. By determining the feasibility of overcoming barriers and developing an approach for disseminating resiliency information, DHS could better position itself to help asset owners and operators consider and adopt resiliency strategies, and provide them with information on potential security investments, based on the practices and experiences of their peers within the CIKR community, both within and across sectors. To better ensure that DHS’s efforts to incorporate resiliency into its overall CIKR protection efforts are effective and completed in a timely and consistent fashion, we recommend that the Assistant Secretary for Infrastructure Protection take the following two actions: develop performance measures to assess the extent to which asset owners and operators are taking actions to resolve resiliency gaps identified during the various vulnerability assessments; and update PSA guidance that discusses the role PSAs play during interactions with asset owners and operators with regard to resiliency, which could include how PSAs work with them to emphasize how resiliency strategies could help them mitigate vulnerabilities and strengthen their security posture and provide suggestions for enhancing resiliency at particular facilities. Furthermore, we recommend that the Secretary of Homeland Security assign responsibility to one or more organizations within DHS to determine the feasibility of overcoming barriers and developing an approach for disseminating information on resiliency practices to CIKR owners and operators within and across sectors. We provided a draft of this report to the Secretary of Homeland Security for review and comment. In written comments DHS agreed with two of our recommendations and said that it needed additional time to internally consider the third. Regarding our first recommendation that IP develop performance measures to assess the extent to which asset owners and operators are taking actions to resolve resiliency gaps identified during vulnerability assessments, DHS said that IP had developed measures on owners’ and operators’ efforts to implement enhancements to security and resilience, and NPPD officials are reviewing these new performance metrics. 
With regard to our second recommendation to update guidance that discusses the role PSAs play during interactions with asset owners and operators about resiliency, DHS said that IP is actively updating PSA program guidance to reflect the evolving concept of resilience and will include information on resilience in the next revision to the PSA program management plan. Finally, regarding our third recommendation that DHS assign responsibility to one or more organizations within DHS to determine the feasibility of developing an approach for disseminating information on resiliency practices, DHS said that its components need time to further consider the recommendation and will respond to GAO and Congress at a later date. DHS also provided technical comments which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Under Secretary for the National Protection Programs Directorate, appropriate congressional committees, and other interested parties. If you have any further questions about this report, please contact me at (202) 512-8777 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, John F. Mortin, Assistant Director, and Katrina R. Moss, Analyst-in-Charge, managed this assignment. Katherine M. Davis, Anthony J. DeFrank, Michele C. Fejfar, Tracey L. King, Landis L. Lindsey, Thomas F. Lombardi, Lara R. Miklozek, Steven R. Putansu, Edith N. Sohna, and Alex M. Winograd made significant contributions to the work. Critical Infrastructure Protection: Updates to the 2009 National Infrastructure Protection Plan and Resiliency in Planning. GAO-10-296. Washington, D.C.: March 5, 2010. The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Critical Infrastructure Protection: Challenges for Selected Agencies and Industry Sectors. GAO-03-233. Washington, D.C.: February 28, 2003. Critical Infrastructure Protection: Commercial Satellite Security Should Be More Fully Addressed. GAO-02-781. Washington, D.C.: August 30, 2002. Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009. Cybersecurity: Continued Federal Efforts Are Needed to Protect Critical Systems and Information. GAO-09-835T. Washington, D.C.: June 25, 2009. 
Information Security: Cyber Threats and Vulnerabilities Place Federal Systems at Risk. GAO-09-661T. Washington, D.C.: May 5, 2009. National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009. Critical Infrastructure Protection: DHS Needs to Better Address Its Cybersecurity Responsibilities. GAO-08-1157T. Washington, D.C.: September 16, 2008. Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008. Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008. Critical Infrastructure Protection: Further Efforts Needed to Integrate Planning for and Response to Disruptions on Converged Voice and Data Networks. GAO-08-607. Washington, D.C.: June 26, 2008. Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008. Critical Infrastructure Protection: Sector-Specific Plans’ Coverage of Key Cyber Security Elements Varies. GAO-08-64T. Washington, D.C.: October 31, 2007. Critical Infrastructure Protection: Sector-Specific Plans’ Coverage of Key Cyber Security Elements Varies. GAO-08-113. October 31, 2007. Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007. Critical Infrastructure Protection: DHS Leadership Needed to Enhance Cybersecurity. GAO-06-1087T. Washington, D.C.: September 13, 2006. Critical Infrastructure Protection: Challenges in Addressing Cybersecurity. GAO-05-827T. Washington, D.C.: July 19, 2005. Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities. GAO-05-434. Washington, D.C.: May 26, 2005. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004. Technology Assessment: Cybersecurity for Critical Infrastructure Protection. GAO-04-321. Washington, D.C.: May 28, 2004. Critical Infrastructure Protection: Establishing Effective Information Sharing with Infrastructure Sectors. GAO-04-699T. Washington, D.C.: April 21, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-628T. Washington, D.C.: March 30, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-354. Washington, D.C.: March 15, 2004. Posthearing Questions from the September 17, 2003, Hearing on “Implications of Power Blackouts for the Nation’s Cybersecurity and Critical Infrastructure Protection: The Electric Grid, Critical Interdependencies, Vulnerabilities, and Readiness”. GAO-04-300R. Washington, D.C.: December 8, 2003. Critical Infrastructure Protection: Challenges in Securing Control Systems. GAO-04-140T. Washington, D.C.: October 1, 2003. Critical Infrastructure Protection: Efforts of the Financial Services Sector to Address Cyber Threats. GAO-03-173. Washington, D.C.: January 30, 2003. High-Risk Series: Protecting Information Systems Supporting the Federal Government and the Nation’s Critical Infrastructures. GAO-03-121. Washington, D.C.: January 1, 2003. Critical Infrastructure Protection: Federal Efforts Require a More Coordinated and Comprehensive Approach for Protecting Information Systems. GAO-02-474. 
Washington, D.C.: July 15, 2002. Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately Controlled Systems from Computer-Based Attacks. GAO-01-1168T. Washington, D.C.: September 26, 2001. Critical Infrastructure Protection: Significant Challenges in Protecting Federal Systems and Developing Analysis and Warning Capabilities. GAO-01-1132T. Washington, D.C.: September 12, 2001. Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-1005T. Washington, D.C.: July 25, 2001. Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-769T. Washington, D.C.: May 22, 2001. Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities. GAO-01-323. Washington, D.C.: April 25, 2001. Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination. GAO/T-AIMD-00-268. Washington, D.C.: July 26, 2000. Critical Infrastructure Protection: Comments on the Proposed Cyber Security Information Act of 2000. GAO/T-AIMD-00-229. Washington, D.C.: June 22, 2000. Critical Infrastructure Protection: “ILOVEYOU” Computer Virus Highlights Need for Improved Alert and Coordination Capabilities. GAO/T-AIMD-00-181. Washington, D.C.: May 18, 2000. Critical Infrastructure Protection: National Plan for Information Systems Protection. GAO/AIMD-00-90R. Washington, D.C.: February 11, 2000. Critical Infrastructure Protection: Comments on the National Plan for Information Systems Protection. GAO/T-AIMD-00-72. Washington, D.C.: February 1, 2000. Critical Infrastructure Protection: Fundamental Improvements Needed to Assure Security of Federal Operations. GAO/T-AIMD-00-7. Washington, D.C.: October 6, 1999. Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences. GAO/AIMD-00-1. Washington, D.C.: October 1, 1999. Defense Critical Infrastructure: Actions Needed to Improve Identification and Management of Electrical Power Risks and Vulnerabilities to DoD Critical Assets. GAO-10-147. October 23, 2009. Defense Critical Infrastructure: Actions Needed to Improve the Consistency, Reliability, and Usefulness of DOD’s Tier 1 Task Critical Asset List. GAO-09-740R. Washington, D.C.: July 17, 2009. Defense Critical Infrastructure: Developing Training Standards and an Awareness of Existing Expertise Would Help DOD Assure the Availability of Critical Infrastructure. GAO-09-42. Washington, D.C.: October 30, 2008. Defense Critical Infrastructure: Adherence to Guidance Would Improve DOD’s Approach to Identifying and Assuring the Availability of Critical Transportation Assets. GAO-08-851. Washington, D.C.: August 15, 2008. Defense Critical Infrastructure: DOD’s Risk Analysis of Its Critical Infrastructure Omits Highly Sensitive Assets. GAO-08-373R. Washington, D.C.: April 2, 2008. Defense Infrastructure: Management Actions Needed to Ensure Effectiveness of DOD’s Risk Management Approach for the Defense Industrial Base. GAO-07-1077. Washington, D.C.: August 31, 2007. Defense Infrastructure: Actions Needed to Guide DOD’s Efforts to Identify, Prioritize, and Assess Its Critical Infrastructure. GAO-07-461. Washington, D.C.: May 24, 2007. Electricity Restructuring: FERC Could Take Additional Steps to Analyze Regional Transmission Organizations’ Benefits and Performance. GAO-08-987. Washington, D.C.: September 22, 2008. 
Department of Energy, Federal Energy Regulatory Commission: Mandatory Reliability Standards for Critical Infrastructure Protection. GAO-08-493R. Washington, D.C.: February 21, 2008. Electricity Restructuring: Key Challenges Remain. GAO-06-237. Washington, D.C.: November 15, 2005. Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005. Electricity Restructuring: Action Needed to Address Emerging Gaps in Federal Information Collection. GAO-03-586. Washington, D.C.: June 30, 2003. Restructured Electricity Markets: Three States’ Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002. Energy Markets: Results of FERC Outage Study and Other Market Power Studies. GAO-01-1019T. Washington, D.C.: August 2, 2001. Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003. Critical Infrastructure Protection: Significant Challenges Need to Be Addressed. GAO-02-961T. Washington, D.C.: July 24, 2002. Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002.
According to the Department of Homeland Security (DHS), protecting and ensuring the resiliency (the ability to resist, absorb, recover from, or successfully adapt to adversity or changing conditions) of critical infrastructure and key resources (CIKR) is essential to the nation's security. By law, DHS is to lead and coordinate efforts to protect several thousand CIKR assets deemed vital to the nation's security, public health, and economy. In 2006, DHS created the National Infrastructure Protection Plan (NIPP) to outline the approach for integrating CIKR protection efforts and increased its emphasis on resiliency in its 2009 update. GAO was asked to assess the extent to which DHS (1) has incorporated resiliency into the programs it uses to work with asset owners and operators and (2) is positioned to disseminate information it gathers on resiliency practices to asset owners and operators. GAO reviewed DHS documents, such as the NIPP, and interviewed DHS officials and 15 owners and operators of assets selected on the basis of geographic diversity. The results of these interviews are not generalizable but provide insights. DHS's efforts to incorporate resiliency into the programs it uses to work with asset owners and operators are evolving, but program management could be strengthened. Specifically, DHS is developing or updating programs to assess vulnerability and risk at CIKR facilities and within groups of related infrastructure, regions, and systems to place greater emphasis on resiliency. However, DHS has not taken commensurate efforts to measure asset owners' and operators' actions to address resiliency gaps. DHS operates its Protective Security Advisor Program, which deploys critical infrastructure protection and security specialists, called Protective Security Advisors (PSAs), to assist asset owners and operators on CIKR protection strategies, and has provided guidelines to PSAs on key job tasks such as how to establish relationships between asset owners and operators and DHS, federal, state, and local officials. DHS has provided training to PSAs on resiliency topics, but has not updated PSA guidelines to articulate the role of PSAs with regard to resiliency issues, or how PSAs are to promote resiliency strategies and practices to asset owners and operators. A senior DHS official described plans to update PSA guidelines and the intent to outline this plan in October 2010, but did not provide information on what changes would be made to articulate PSA roles and responsibilities with regard to resiliency. By developing measures to assess the extent to which asset owners and operators are addressing resiliency gaps and updating PSA guidance, DHS would be better positioned to manage its efforts to help asset owners and operators enhance their resiliency. DHS faces barriers disseminating information about resiliency practices across the spectrum of asset owners and operators. DHS shares information on potential protective measures with asset owners and operators and others, including state and local officials (generally on a case-by-case basis), after it has completed vulnerability assessments at CIKR facilities. DHS officials told GAO that they have considered ways to disseminate information that they collect or plan to collect with regard to resiliency. However, DHS faces barriers sharing information about resiliency strategies. 
For example, given the voluntary nature of the CIKR partnership, DHS officials stated that DHS should not be viewed as identifying and promoting practices that could be construed by CIKR partners as standards. Also, according to DHS officials, the need for and the emphasis on resiliency can vary across different types of facilities depending on the nature of the facility. For example, an oil refinery is inherently different from a government office building. DHS's efforts to emphasize resiliency when developing or updating the programs it uses to work with owners and operators create an opportunity for DHS to position itself to disseminate information about resiliency practices within and across the spectrum of asset owners and operators. By determining the feasibility of overcoming barriers and developing an approach for disseminating information on resiliency practices within and across sectors, DHS could better position itself to help asset owners and operators consider and adopt resiliency strategies. GAO recommends that DHS develop resiliency performance measures, update PSA guidelines, and determine the feasibility of developing an approach to disseminate resiliency information. DHS is taking action to implement two recommendations and is internally considering the third.
You are an expert at summarizing long articles. Proceed to summarize the following text: Offset arrangements are not new to military export sales. The use of offsets, specifically coproduction agreements, began in the late 1950s and early 1960s in Europe and Japan. A country’s desire to coproduce portions of weapon systems was based on needs such as maintaining domestic employment, creating a national defense industrial base, acquiring modern technology, and assisting its balance of payments position. In 1984, we reported that offsets were a common practice and that demands for offsets on defense sales would continue to increase. The United States is the world’s leading defense exporter and held about 52 percent of the global defense export market in 1994 (the latest year for which statistics are available). Offsets are often an essential part of defense export sales. Offset agreements may specify the level of offset required, normally expressed as a percentage of the sales contract. Offset agreements may also specify what types of activity are eligible for offset credit. Offset activities that are directly related to the weapon system sold are considered “direct” offset, while those involving unrelated defense or nondefense goods or services are considered “indirect.” An offset may directly relate to the weapon system being sold or to some other weapon system or even nondefense goods or services. Countries may also include conditions specifying the transfer of high technology and where and with whom offset business must be done. Other provisions include requirements that offset credit be granted only for new business and that credits be granted only if local content exceeds a minimum level. Negotiating offset credit is an important part of implementing offset agreements. Countries can grant additional offset credit to encourage companies to undertake highly desirable offset activities. For example, countries may offer large multipliers for advanced technology or training that can greatly reduce a company’s cost of meeting its offset obligation. However, a country can also establish criteria that make it difficult for a company to earn offset credit. Some countries, such as the United Kingdom and the Netherlands, cite restrictions in the United States and other defense markets and note that their offset policies are needed to ensure that their defense industries are given an opportunity to compete. The United States does not require offsets for its foreign military purchases, but it does have requirements that favor domestic production. The Defense Production Act of 1950 allows the Secretary of Defense to preserve the domestic mobilization base by restricting purchases of critical items from foreign sources. While not precluding foreign suppliers, regulations implementing the Buy American Act of 1933 allow price preferences for domestic manufacturers, and annual Department of Defense (DOD) appropriation acts sometimes contain prohibitions on foreign purchases of specific products. The General Agreement on Tariffs and Trade (GATT) prohibits the practice of offsets in government procurement, except for procurement of military weapons. In 1990, the North Atlantic Treaty Organization (NATO) proposed a code of conduct for defense trade to regulate offsets in military exports, but did not adopt it. In addition, reciprocal memorandums of understanding between the United States and several major allies include provisions to consult on the adverse effects of offsets. 
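The arithmetic behind offset percentages and multipliers can be pictured with a brief, purely illustrative Python sketch. The contract value, offset percentage, project cost, multiplier, and function names below are hypothetical assumptions and are not drawn from any agreement discussed in this report.

# Hypothetical illustration of offset percentages and multipliers; all figures are assumptions.

def offset_obligation(contract_value, offset_percentage):
    # The obligation is normally expressed as a percentage of the sales contract.
    return contract_value * offset_percentage

def offset_credit(project_cost, multiplier=1.0):
    # Countries may grant multipliers greater than 1 for highly desired activities,
    # such as advanced technology transfer or training.
    return project_cost * multiplier

contract_value = 100_000_000                           # hypothetical $100 million sale
obligation = offset_obligation(contract_value, 1.00)   # 100-percent offset requirement

# A hypothetical $10 million technology-transfer project with a multiplier of 5
# earns $50 million in credit toward the obligation.
credit = offset_credit(10_000_000, multiplier=5)
remaining = obligation - credit

print(f"Obligation: ${obligation:,.0f}")
print(f"Credit earned: ${credit:,.0f}")
print(f"Remaining obligation: ${remaining:,.0f}")

Run as written, the sketch reports a $100 million obligation, $50 million of credit earned, and $50 million remaining, which shows how a generous multiplier can sharply reduce the real cost of satisfying an offset commitment.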
Over the last 10 years, the countries in our study have increased their demands for offsets, begun to emphasize longer-term offset projects and commitments, or initiated offset requirements. All the countries in our review have increased their offset demands on U.S. companies to achieve more substantial economic benefits. Canada, the Netherlands, Spain, South Korea, and the United Kingdom have all had offset policies since at least 1985. These countries are using new approaches in their offsets to increase economic benefits. These changes include targeting offset activities and granting offset credit only for new business rather than existing business. For example, Canada and the United Kingdom are less willing to grant offset credit for a company’s existing business in the country, and South Korea has increased its demands for technology transfer and training as part of any offset agreement. Since 1990, Kuwait, Taiwan, and the United Arab Emirates have all established new policies for offsets on foreign military purchases. They are now using offsets to help diversify their economies or promote general economic development. Although these countries are new entrants, company officials said they are knowledgeable about the defense market, and their offset policies can be as demanding as those of countries with established offset policies. For example, the United Arab Emirates requires 60 percent of the value of the contract to be offset by commercially viable business ventures and grants offset credit based only on the profits generated by these investments. Singapore and Saudi Arabia have both recently reinstated their offset policies. Both countries have intermittently required offsets since the 1980s. However, company officials said these countries now regularly pursue offsets on their defense purchases. Saudi Arabia’s new approach is less formal and relies on best-effort commitments from companies rather than formal agreements. Previously, some of the countries in our review allowed companies to meet offset obligations with existing business in the country or with one-time purchases of the country’s goods. A country’s requirements for direct offsets were sometimes met through projects calling for the simple assembly of weapon system components. These types of offset activities often did not result in any long-term economic benefits. More recently, buying countries have changed their offset strategies in an attempt to achieve lasting economic benefits. Countries such as Kuwait and the United Arab Emirates are seeking offset activities that will help create viable businesses, increase local investment, or diversify the economy. Countries such as Canada, the Netherlands, and the United Kingdom are trying to form long-term strategic relationships with the selling companies to generate future work, instead of always linking offset activities to individual sales. The types of offsets required by the countries in our review depend on their offset program goals and their economies—whether developed, newly industrialized, or less industrialized. Companies undertake a broad array of activities to meet these offset obligations. A country’s offset policy outlines the types of offset projects it seeks. All 10 countries in our review now have offset requirements. 
These requirements include the amount of offset required (expressed as a percentage of the purchase price); what projects are eligible for offset credit; how these projects are valued (e.g., offering multipliers for calculating credit for highly desired projects); nonperformance penalties; and performance periods. Countries in our study with developed economies encourage offsets related to the defense or aerospace industries. These offsets typically involve production and coproduction activities related to the weapon system being acquired but could also involve unrelated defense or aerospace projects. These countries have well-established defense industries and are using offsets to channel work to their defense companies, thus supporting their defense industrial base. Canada, the Netherlands, Spain, and the United Kingdom are all in this group. We reviewed 40 offset agreements, with a stated value of $5.6 billion, between U.S. companies and countries with developed economies. The following are highlights from these agreements: The agreements with the United Kingdom reflected its focus on defense, requiring that offsets be satisfied through British companies certified by the government as performing defense-related work. A majority of the agreements required that 100 percent of the sale be offset, although the percentage specified in the agreements ranged from 50 percent to 130 percent. The offset agreements with the Netherlands focused on defense-related or high-technology nondefense projects and specified a minimum local content threshold that had to be met before full offset credit would be granted. Such local content requirements effectively increased the amount of business activity required to generate credit. Most of the agreements required 100 percent of the sale to be offset, although the requirements ranged from 45 percent to over 130 percent. Coproduction of defense systems is a feature found in some of the offset agreements with Spain. These agreements specified the particular products that would be procured from Spain’s defense industry as part of the offset program. The offset percentage required in these agreements ranged from less than 30 percent to over 100 percent. The offset agreements with Canada showed the country’s focus on encouraging U.S. procurement and other arrangements with Canadian suppliers in defense, aerospace, and other high-technology industries. Most of the agreements also required contractors to place work throughout the Canadian provinces and specified that a portion of the offset be done with small businesses. The offset percentage required in these agreements ranged from less than 40 percent to 100 percent. The following are examples of the offset projects that both U.S. and foreign firms have implemented or proposed in these developed economies: The German company Krauss-Maffei agreed to coproduce tanks in Spain to offset Spain’s purchase of 200 Leopard 2 main battle tanks. (Countertrade Outlook, Vol. XIII, No. 16, Aug. 21, 1995, p. 10.) Lockheed will establish a Canadian firm as an authorized service center for C-130 aircraft to satisfy offset obligations for its sale of C-130s to Canada. This will ensure that the Canadian firm has ongoing repair and overhaul work for this aircraft. Lockheed will also procure assemblies and avionics in Canada for its C-5 transport aircraft. (Countertrade Outlook, Vol. XIII, No. 10, May 22, 1995, p. 3.) 
McDonnell Douglas will offset the United Kingdom’s purchase of Apache attack helicopters (valued at nearly $4 billion) by producing much of the aircraft in the United Kingdom, with British equipment. U.S. suppliers are committed to buying $350 million worth of British equipment for U.S.-built Apache helicopters. In addition, Westland Helicopters, a United Kingdom firm, could receive up to $955 million in sales for future support services for Apache helicopters worldwide. (Defense News, Aug. 21-27, 1995, p. 12.) Most U.S. companies we reviewed did not have significant difficulty meeting defense-related offsets in Canada, the Netherlands, and the United Kingdom because those countries have well-established defense industries. In addition, many of the companies have significant existing business in these countries, often making it easier for the companies to implement offset projects. Meeting Spain’s offset demands was more difficult because its defense industry is not as advanced as those of other Western industrialized countries. Some of the U.S. companies in our review expressed concern about the impact of defense-related offsets on the U.S. defense industry, particularly the shift of production work away from U.S. defense subcontractors and suppliers. Appendix I provides detailed information on the terms of the offset agreements and the requirements for each developed country we reviewed. Countries in our study with developing defense and commercial industries, such as South Korea, Singapore, and Taiwan, have pursued both defense-related and nondefense-related offsets. Offsets in these countries typically involve technology transfer in defense or comparable high-technology industries. They see offsets as a means to further develop their defense base and economy. We reviewed 31 offset agreements, with a stated value of $5.1 billion, with countries that have newly industrialized economies. The following are highlights from these agreements: The agreements with South Korea emphasized work in the defense and aerospace industries, particularly the transfer of related high technology. Many agreements included multipliers to encourage work in these sectors. Many also required the purchase of unrelated products for export resale in the United States and other markets. Offset agreements generally required at least a 30-percent offset, with individual agreements ranging from less than 30 percent to more than 60 percent. The offset agreements with Singapore focused on defense-related offset projects, including direct production of parts for purchased weapon systems. The offset percentage required in these agreements ranged from 25 percent to 30 percent. In contrast to other newly industrialized countries, the agreements with Taiwan focused on commercial projects aimed at developing long-term supplier relationships with foreign firms. The agreements offered multipliers for technology transfer, training, and technical assistance, reflecting the priority the government places on these activities. These agreements all called for a 30-percent offset goal. The following are examples of the offset projects that both U.S. and foreign firms have implemented or proposed in these newly industrialized economies: Dassault, as part of an offset arrangement for the $3.5-billion sale of Mirage fighter aircraft to Taiwan, agreed to form partnerships with firms in Taiwan to transfer technology and manufacture equipment for civilian markets. (Jane’s Defence Weekly, Sept. 2, 1995, p. 17.) 
Lockheed-Martin, as part of its offset obligation for the sale of 150 F-16 fighter aircraft to Taiwan, is seeking suppliers in Taiwan for repair contracts for more than 500 aircraft components. Taiwan regards the offset program as an opportunity to (1) become a regional aviation maintenance center and (2) obtain similar work on another aircraft under development by Lockheed-Martin. (Countertrade Outlook, Vol. XIII, No. 13, July 10, 1995, p. 4.) Lockheed-Martin Tactical Aircraft Systems, formerly the General Dynamics Fort Worth Company, is in the process of satisfying South Korea’s offset requirements on the purchase of 120 F-16 fighter aircraft through several aerospace projects. These projects include codevelopment of a new trainer aircraft, training, transfer of castings and forgings technology, and repair and overhaul of aerospace equipment. As part of the sale, General Dynamics agreed to transfer relevant manufacturing and assembly know-how to allow South Korea to manufacture 72 aircraft and assemble an additional 36 aircraft from kits that were manufactured in the United States. The remaining 12 aircraft were to be completely assembled in the United States. U.S. companies generally considered the offset requirements of Singapore and Taiwan to be manageable. However, company officials noted that despite the relatively low percentage of offset required in South Korea, these requirements can be as difficult to satisfy as a 100-percent offset requirement. Appendix II provides detailed information on the offset requirements of each newly industrialized country and the terms of the offset agreements we reviewed. Countries with less industrialized economies, such as Kuwait, Saudi Arabia, and the United Arab Emirates, generally pursue indirect offsets to help create profitable businesses and build their countries’ infrastructure. These countries usually do not pursue direct offsets because they have limited defense and other advanced technology industries and are not interested in attracting work that would require importing foreign labor. The United Arab Emirates’ new offset policy grants credit only for the profits generated rather than for the value of the investment. We reviewed five offset agreements, with a value of at least $1.6 billion, with countries that have less industrialized economies. The following are highlights of the agreements we reviewed: The agreements with Kuwait required that 30 percent of the sales be offset through investment projects, including infrastructure development. Kuwait’s offset policy grants multipliers of up to 3.5 for investments in high-priority areas. The agreements with Saudi Arabia were informal and did not require a specified offset percentage. The agreements primarily called for nondefense-related investment projects. The agreements required joint ventures between Saudi Arabian and foreign companies and assigned values to technology transfers at the cost the country would have incurred to develop them. The agreements with the United Arab Emirates required that 60 percent of the sale be offset through nondefense-related investment projects and granted multipliers for various types of investment projects. The following are representative examples of the offset projects that both U.S. and foreign firms have implemented or proposed in these less industrialized economies: Several French firms have established manufacturing facilities or made other investments in the United Arab Emirates to satisfy offset obligations. 
For example, Thomson-CSF started a garment manufacturing enterprise in Abu Dhabi in connection with a contract for tactical transceivers and audio systems. Giat Industries created an engineering company specializing in air conditioning as part of its offset commitment for the United Arab Emirates’ purchase of battle tanks. (Countertrade Outlook, Vol. XIII, No. 8, Apr. 24, 1995, pp.3-4.) McDonnell-Douglas Helicopter Company entered into several joint ventures with firms in the United Arab Emirates to satisfy offset commitments for the sale of AH-64 Apache helicopters. Projects included forming a company to manufacture a product that cleans up oil spills and creating another firm that will recycle used photocopier and laser computer printer cartridges. The defense contractor is also paying for a U.S. law firm to draft the country’s environmental laws. (Countertrade Outlook, Vol. XIII, No. 2, Jan. 23, 1995, pp. 2-3.) General Dynamics and McDonnell-Douglas contracted with companies in Saudi Arabia to satisfy offset obligations from several weapons sales. In one case, a Saudi firm will manufacture circuit boards for tanks, while in another instance, a Saudi company will manufacture components for F-15 fighter aircraft. (Countertrade Outlook, Vol. XIII, No. 6, Mar. 27, 1995, p. 5.) The United Arab Emirates is working with Chase Manhattan to establish an off-shore investment fund to provide international contractors doing business in the country the opportunity to satisfy part of their offset obligations. (Countertrade Outlook, Vol. XIII, No. 2, Jan. 23, 1995, p. 1.) Some company officials commented that indirect offsets make more sense for the countries than defense-related offsets. Although U.S. companies generally found meeting offset demands in Kuwait and Saudi Arabia manageable, some companies expressed concern over the limited number of commercially viable investment opportunities in these countries. Further, the United Arab Emirates’ offset demands were seen as particularly costly and impractical since offset credits were based on profits actually generated by the newly established enterprise. Appendix III provides detailed information on the offset requirements of each less industrialized country and the terms of the offset agreements we reviewed. Views on the effects of offsets are divided between those who accept offsets as an unavoidable part of doing business overseas and those who believe that offsets negatively affect the defense industrial base and other U.S. interests. It is difficult to accurately measure the impact of offsets on the overall U.S. economy and on specific industry sectors that are critical to defense. Company officials told us that without offsets, most export sales would not be made and the positive effects of these exports on the U.S. economy and defense industrial base would be lost. Offsets help foreign buyers build public support for purchasing U.S. products, especially since weapon procurement often involves the expenditure of large amounts of public monies on imported systems. Other company officials indicated that export sales provide employment for the U.S. defense industry and orders for larger production runs, thus reducing unit costs to the U.S. military. They also noted that many offset deals create new and profitable business opportunities for themselves and other U.S. companies. Critics charge that offsets have effects that limit or negate the economic and defense industrial base benefits claimed to be associated with defense export sales. 
Mandated offshore production may directly displace U.S. defense firms that previously performed this work, and offsets that transfer technology and provide marketing assistance give foreign defense firms the capabilities to subsequently produce and market their products, often in direct competition with U.S. defense companies. According to company officials, indirect offsets involving procurement, technology transfer, marketing assistance, and unrelated commodity purchases may harm nondefense industries by establishing and promoting foreign competitors. Defense exports involving offsets are small relative to the economy as a whole, making it difficult to measure any effects using national aggregated data. Similarly, the impact of offsets on specific sectors of the U.S. economy cannot be accurately measured because reliable data on the number and size of offset agreements and the transactions used to fulfill these offsets are not readily available. In addition, it would be difficult to isolate the effects of offsets from numerous other factors affecting specific industry sectors. According to officials from large defense firms and an association representing U.S. suppliers, reliable information on the impact of offsets is difficult to obtain because company officials are generally not aware that a particular offset arrangement caused them to lose or gain business. Only limited anecdotal information from these companies is available. The lack of reliable information is a long-standing problem. Recognizing the need for more information, Congress required in 1984 that the President annually assess the impact of offsets. The President tasked the Office of Management and Budget (OMB) to coordinate these assessments and submit a report to Congress. However, OMB was not able to accurately measure the impact of offsets on U.S. industry sectors critical to defense with the information it collected. The Defense Production Act Amendments of 1992 directed the Commerce Department to take the lead in assessing the impact of offsets. As part of this effort, the statute requires companies to submit information on their offset agreements that are valued at $5 million or more. Commerce plans to issue its first report in 1996. In response to concerns raised about the impact of offsets, the President issued a policy statement in 1990 that reaffirmed DOD’s standing policy of not encouraging or participating directly in offset arrangements. This policy statement also recognized that certain offsets are economically inefficient and directed that an interagency team, led by DOD in coordination with the Department of State, consult with foreign nations on limiting the adverse effects of offsets in defense procurement. In 1992, Congress adopted this policy as part of the Defense Production Act Amendments. According to the Commerce Department, DOD and the State Department have not consulted with foreign nations on the adverse effects of offsets as detailed in the 1990 presidential policy statement or the 1992 law. However, in 1990, as part of the discussions over the NATO Code of Conduct for defense trade, U.S. officials proposed to limit offsets in defense trade, but no action was taken because countries could not agree to the Code. DOD took action to include, as part of memorandums of understanding between the United States and its allies, a provision to consult on the adverse effects of offsets. DOD has discussed offsets on a case-by-case basis with several countries in the context of specific weapon sales. 
Commerce officials noted that offsets are driven by demands that foreign governments place on private U.S. companies. These demands place second- and third-tier U.S. suppliers at a disadvantage since their interests are not usually represented in these sales. Commerce officials said that DOD should take action, in accordance with the 1990 presidential policy, to consult with other nations to limit the adverse effects of offsets. One DOD official noted that negotiating the offset issue by itself would not give the United States a strong bargaining position because of U.S. reluctance to change Buy American and small business preferences. According to the Commerce Department, industry is not opposed to the initiation of consultations on offsets, but is concerned that the U.S. government might unilaterally limit the use of offsets. Officials from several large defense companies we interviewed also expressed concern about any unilateral action by the U.S. government that would limit offsets. Similarly, several officials expressed doubt that any multilateral agreement limiting offsets would be enforceable, and some noted that any ban would likely force offset activity underground. In addition, some company officials said that unilateral action banning offsets or an unenforceable multilateral agreement would place U.S. exporters at a competitive disadvantage in winning overseas defense contracts. Commerce and DOD officials agreed that unilateral action to limit offsets could harm U.S. defense companies. The Departments of Commerce, Defense, and State were given the opportunity to comment on a draft of this report. The Department of Commerce provided written comments (see app. IV), and the Departments of State and Defense provided oral comments. Commerce said our report provides a balanced view of the subject. State commented that the report accurately describes the growth in offset demands and the requirements countries place on their purchases of foreign military equipment. DOD concurred with our report and commented that it should contribute to a better understanding of the nature of offset demands and the role of offsets in military export sales. We have made minor technical corrections to the report where appropriate based on suggestions provided by Commerce and Defense. To assess how countries’ offset requirements have evolved and how companies are meeting these obligations, we focused our analyses on 10 countries. We selected these countries based on their geographic distribution and their significant purchases of foreign military equipment. We then visited nine major U.S. defense companies. These firms were chosen based on their roles as prime contractors and subcontractors that provide a full range of defense goods and services. We interviewed company officials regarding each country in our study and obtained the offset agreements that they had entered into with these countries since 1985. For the limited number of agreements that we could not obtain, we relied on summarized data provided by the companies. Due to the proprietary nature of the offset agreements, we are limited in our ability to present specific information on a particular contract. However, to illustrate the types of offset projects U.S. and foreign companies undertook in the countries we reviewed, we used examples from various defense journals. We did not corroborate the information reported in these journals. 
To determine what each country’s offset policy required, we interviewed company officials and reviewed each country’s requirements, as provided by the companies in our study. We then reviewed other government studies that examined offset requirements for these countries. We did not discuss these policies with officials from each country to confirm their accuracy. To examine the implications of offsets on the U.S. economy, we examined studies of defense offsets performed by other U.S. government agencies and other groups. We interviewed DOD, Commerce, and State officials on offset trends and any U.S. actions taken regarding offsets. We also interviewed officials from prime contractors as well as trade associations that represent mostly smaller U.S. companies. The companies in our study were cooperative and provided the information we requested in a timely manner. However, our ability to fully review the actual offset projects was affected by access restraints. This information is considered commercially sensitive by defense companies, and information on projects implementing the offset agreements was selectively provided by the companies. The companies reviewed our report to ensure that no sensitive information was disclosed. We conducted our review from May 1995 to February 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Secretaries of Defense, State, and Commerce. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report were Karen Zuckerstein, Davi D’Agostino, David C. Trimble, Tom Hubbs, and John Neumann. Canada seeks offsets through its Industrial and Regional Benefits policy to develop and maintain the capabilities and competitiveness of Canadian companies. It solicits offsets that will benefit its manufacturing and advanced technological capabilities, including technology transfer, investments in plants or productivity improvement, and coproduction with Canadian suppliers. Offset agreements generally range from 75 percent to 100 percent of the weapon systems contract’s value. Most offsets involve purchasing products from Canadian firms in the defense, aerospace, or other high-technology industries. The official guidelines do not state a threshold for requiring offsets, and offsets have been provided on contracts with values as low as $12 million. Canada is distinctive in its emphasis on distributing offset projects across its various regions, particularly in its lesser-industrialized Western and Atlantic provinces. Most offset agreements require regional distribution, including several that specify which suppliers and regions should receive offset work. In addition, some agreements contain penalty provisions for not achieving a certain percentage of offset in each Canadian region. Many offset agreements also specify that small businesses must receive a portion of the offset projects. Several agreements included detailed requirements for determining the amount of offset credit. For example, offset projects will receive credit only if the minimum Canadian content requirement is met, which was 35 percent in several of the agreements. Also, offset credit will only be granted for new business or increases in existing business. 
Companies are now usually not able to get offset credit for existing business in the country, as they were in the past. Generally, the companies in our study did not have significant difficulty meeting offset requirements in Canada. Several companies found the defense-related offsets easy to implement because Canada has a developed defense industry and the companies have a significant amount of existing business in the country. Table I.1 summarizes Canada’s offset guidelines and agreements. Generate long-term industrial benefits. Agreements generated long-term industrial benefits with an emphasis on the defense and aerospace industries. Value of contracts with offsets started at $12 million. Both direct and indirect offsets are accepted, with emphasis on high-technology industries. Many agreements show preference for offsets related to defense or aerospace industries. Recent agreements required offsets ranging from 75 percent to 100 percent of the contract value. Two agreements provided for 20-percent additional credit for an increase in direct offset amount. Banking permitted in several agreements. Penalties varied from 2.5 percent to 12 percent of shortfall. Several agreements did not have penalties. Ranged from less than 5 years to over 10 years. Several agreements had yearly milestones for completing offset commitments. Several agreements required a minimum of 35-percent Canadian content to receive any offset credit. Request offset projects that promote regional and small business development and provide subcontracts to Canadian suppliers. Most agreements included regional distribution and small business requirements. Several recent agreements specified the actual suppliers to be used in carrying out offset agreements. Recent agreements only provided offset credit for new business. Several agreements have high administrative oversight to determine if offset resulted in new business and met Canadian content and other requirements. Banking refers to the practice of allowing companies to earn extra offset credit under one offset agreement and save or “bank” those credits to satisfy a later offset obligation. The Netherlands uses offsets to maintain and promote its technical capabilities in defense and other industries. The country has a well-established defense industry and requires offsets that are related to defense or high-technology civilian industries. The defense-related offsets typically involve coproduction of components, parts and assemblies, and technical services rendered by Dutch firms. Nondefense-related offsets include a wide range of activities designed to contribute to the Netherlands industrial base, including purchasing products from Dutch firms in the aircraft, automotive, electronics, optical, or shipbuilding industries. The Netherlands’ guidelines require offsets on all weapons contracts valued at more than $3 million. The standard offset demand is 100 percent, and the majority of agreements over the last 10 years reflect this requirement. Many of the agreements require that 70 percent to 85 percent of any product purchased be produced in the Netherlands in order to receive full credit toward the offset obligation. In addition, several recent agreements state that credit will only be granted for new business created or an increase in existing business. Company representatives told us that implementing defense-related offsets in the Netherlands is not a problem, given the country’s sophisticated and highly developed industrial base. 
Several companies identified offsets as a critical factor in winning a contract in the Netherlands and believe the country would choose a less-desired weapon system to get a better offset package. Table I.2 summarizes the Netherlands’ offset guidelines and agreements. Maintain and increase the industrial capacity of the defense industry. Most agreements included defense-related offsets. All defense contracts valued at more than $3 million require offsets. All agreements exceeded the official offset threshold. Both direct and indirect offsets are accepted, with emphasis on dual-use (military and civilian) technology. Agreements showed preference for direct offsets or indirect offsets in the defense or other technologically equivalent industry. Government seeks 100-percent offset. Most agreements over last 10 years required 100-percent offset. Multipliers are rarely included. However, according to company officials, the amount of credit granted for an offset project can be negotiated, achieving the same results as a multiplier. Banking permitted in several agreements. Penalties not stated. However, according to a May 1995 press report, the Netherlands legislature requested that penalties be included in one offset agreement. Ranged from 4 years to 15 years. Milestones are generally not included in the agreements. Most agreements required a minimum of 70-percent local content to receive 100-percent offset credit. Some agreements specified the actual suppliers to be used in carrying out the offset agreement or required that a portion of the offset activities be fulfilled by collaboration with small- and medium-sized businesses. Require indirect offsets to include new business or a significant increase in existing orders. Several agreements specified that offset credit would be granted only for new business or an increase in business. Spain uses offsets on defense orders to support and develop its defense industry. Although Spain does not have written offset guidelines, it does have a policy of demanding offsets, including coproduction by designated Spanish firms, technology transfer, and export of Spanish defense products. Spain’s standard offset requirement is 100 percent; however, the agreements over the last 10 years have ranged from 30 percent to 100 percent of the value of the weapon system. Spain does not have a stated threshold amount for requiring offsets, but all of the offset agreements over the last 10 years were for weapons sales over $7 million. In some agreements, Spain has included provisions to only credit offset projects that create new business or represent an increase in existing business, and not grant credit for companies’ current business in the country. In addition, Spain has sometimes included a local content requirement for offset projects, providing credit only for the portion of the projects that are produced in Spain. Companies report that to get approval for offset projects, the work usually has to be spread across various Spanish regions, even though the agreements do not explicitly contain this requirement. In addition, Spain has targeted specific Spanish companies that it wants to get offset work. One U.S. company said offsets were relatively easy to implement in Spain because Spain’s participation has consisted of producing less sophisticated components. Another company observed that offsets are more difficult to implement in Spain than in other European countries because of Spain’s less diverse industrial base. 
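The local content rules described above for Canada, the Netherlands, and Spain can also be pictured with a brief, purely illustrative Python sketch. It contrasts two simplified rules: a minimum-content floor below which no credit is granted, and a proportional rule that credits only the locally produced share of a purchase. The purchase value, shares, and threshold are hypothetical assumptions, not terms of any specific agreement.

# Hypothetical illustration of two local content rules; all figures are assumptions.

def credit_with_content_floor(purchase_value, local_share, minimum_share=0.35):
    # No credit unless the local content share meets the floor
    # (a 35-percent floor is the figure cited for several Canadian agreements).
    return purchase_value if local_share >= minimum_share else 0.0

def credit_for_local_portion(purchase_value, local_share):
    # Credit only for the locally produced portion of the purchase
    # (the approach described for some agreements with Spain).
    return purchase_value * local_share

purchase = 20_000_000  # hypothetical $20 million purchase from a local supplier

print(credit_with_content_floor(purchase, local_share=0.30))  # prints 0.0 (below the floor)
print(credit_with_content_floor(purchase, local_share=0.40))  # prints 20000000 (full credit)
print(credit_for_local_portion(purchase, local_share=0.40))   # prints 8000000.0 (proportional credit)

At a 40-percent local share, the proportional rule yields $8 million of credit on a $20 million purchase, so a company would need roughly two and a half times as much purchasing to earn the same credit as under the floor rule, which is consistent with the earlier observation that local content requirements effectively increase the amount of business activity required to generate credit.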
Table I.3 summarizes Spain’s offset guidelines and agreements. Has official offset policy, but not written guidelines. Provide support for Spain’s defense industry. Some agreements reflected goal of providing opportunities for defense industry. Agreements were for contracts valued at over $7 million. Emphasis on defense-related offsets. Agreements reflected preference for offsets in the defense industry, including coproduction, technology transfer, and export of Spanish defense products. 100 percent is the standard offset demand. Agreements required from 30-percent to 100-percent offset. Some agreements included multipliers for technology and production licenses and joint development programs. Banking excess credits common. Generally requires penalties. Some agreements included penalties ranging from 3 percent to 5 percent of offset commitment shortfall. Ranged from 5 years to 8 years, with grace periods sometimes included. Only one agreement had milestones. Sometimes grants credit only for value of local content. Included in some agreements. Sometimes specifies regional or supplier requirements. Some agreements specified the actual supplier to be used in carrying out offset agreement. In addition, companies are encouraged to spread offset projects out over Spanish regions. Some agreements required regular reporting of offset implementation status. The United Kingdom uses offsets to channel work to its defense companies. The country has a well-established defense industry and requests offsets that are related to defense, including production, technology transfer, capital investment, and joint ventures. Offset agreements focus on procurement of defense-related products and services from British firms. According to the country’s guidelines, offsets are not mandatory, but are used as an assessment factor in contract evaluations. Offsets are commonly sought from North American companies and on a case-by-case basis from European companies. Offsets are encouraged for weapon sales worth more than $16 million. A majority of the agreements required 100 percent of the sale to be offset. Some companies stated that implementing defense-related offsets in the United Kingdom is not a problem, given the country’s sophisticated and diverse industries and the significant amount of existing business these companies have in the country. However, several recent agreements specify that offset credit will be given only for new business or a verifiable increase in existing business, based on a prior 3-year average. A company’s existing business in the country is not eligible for offset credit. Furthermore, recent agreements specify that any purchase orders or subcontracts for offset credit must be placed with one of the companies on the country’s registry of recognized defense companies. However, this is not a problem for U.S. companies partly because many British firms are on the registry. Table I.4 summarizes the United Kingdom’s offset guidelines and agreements. Table I.4: United Kingdom—Offset Guidelines and Agreements Compensate for loss of work to the United Kingdom’s defense industrial sector. Agreements reflected guidelines’ goal to provide work to the defense industrial sector. All defense contracts valued at more than $16 million require offsets. Most were for contracts valued above the threshold amount. All offsets must be defense-related. Agreements reflected requirement for defense-related offsets. Government seeks 100-percent offset. Offset percentage ranged from 50 percent to 130 percent. 
Most agreements required at least 100-percent offset. Offset credit can be negotiated. Offset credit can be negotiated. For example, one agreement provided for “extra credit” if a specific offset project was undertaken. Permitted in certain circumstances. Banking permitted in most agreements. No penalties; agreements call for “best efforts” to fulfill. Not to exceed the delivery period of the contract. Ranged from 3 years to 13 years. Not stated. Offset activities must be placed with a qualified United Kingdom defense manufacturer. Such companies are listed in a central registry and are from various regions of the country. Most agreements specified that offset credit would only be granted for work with recognized United Kingdom defense contractors. Offset activities must be new and consist of products not previously purchased, products purchased from new suppliers, or new contracts for existing business valued at over $50,000. Several recent agreements specified that offset credit would be granted only for new business or an increase in business. Offset proposals commonly submitted at time of contract tender for approval. No other mention of oversight. Several agreements required regular reporting of offset activity progress. Staff to review offset credit. Singapore uses offsets to build its capability to produce, maintain, and upgrade its defense systems. It has required offsets on an ad hoc basis since the mid-1980s, but has recently begun to consistently demand offsets. Singapore’s official policy requires all major purchases to be offset with a 30-percent offset performance goal. All the offset arrangements we reviewed emphasized defense-related projects. These arrangements required producing components for the weapon system being purchased or establishing a Singaporean firm as a service center for a weapon system. Singapore seeks technology transfer and training, and most offset agreements include multipliers or provide credits in excess of contractor costs for highly desired projects. For example, manufacturing technology transferred for one weapon system was valued at several times the cost to the company to provide it. Generally, companies that had offset agreements with Singapore considered the requirements manageable. Table II.1 summarizes Singapore’s offset guidelines and agreements. Assist the Ministry of Defense in building up Singapore’s capabilities to provide necessary maintenance, production, and upgrade capability to support equipment and systems the Ministry has procured. To be accomplished through technology transfer, technical assistance, participation in research and development, and marketing assistance. Consistent with the guidelines. All “major” purchases of equipment, material, and services; however, the guidelines do not provide a specific threshold. All the agreements we reviewed were for sales valued at over $5 million. Direct offset is preferred but indirect offset is acceptable. Most included a mix of direct and indirect offset transactions. At least 30 percent of main contract value, expressed as a goal. Ranged from 25 percent to 30 percent. Some agreements provided multipliers for activities such as technology transfer (valued at up to 10 times the cost), training, or technical assistance. Permitted banking in most agreements. 10 percent of unfulfilled obligation. 3 to 5 percent of unfulfilled obligation. Concurrent with the duration of the main contract up to a maximum of 10 years, plus a 1-year grace period. 
Agreements are generally consistent with the guidelines. Generally not stated. Firms owned by the Ministry of Defense are given first preference on bidding for work with U.S. contractors. Agreements are generally consistent with the guidelines. The Ministry of Defense is very involved in selecting Singaporean firms that U.S. defense contractors must work with. South Korea uses offsets to acquire advanced technologies for its defense and commercial industry. Technology transfer and related training have consistently been a high priority for South Korea and have received increased emphasis in recent years as South Korea has developed its aerospace industry. To obtain technology transfer and training, South Korea grants multipliers and awards offset credit that exceeds the actual cost to the company of providing these items. As a result of U.S. government pressure to reduce offset demands in the late 1980s, South Korea’s policy calls for a 30-percent offset on defense purchases exceeding $5 million. Although some agreements required a 30-percent offset, others required an offset of 40 percent or higher. South Korea has a preference for defense-related offsets, but is also willing to accept a wide variety of indirect offsets to help develop its industry, especially its aerospace industry. In addition, South Korea frequently has required U.S. contractors to buy products unrelated to the weapon system being purchased, such as forklifts and printing press parts, for export resale. Several U.S. companies indicated that it can be difficult to work with South Korea. They noted that the 30-percent offset requirement is tougher to satisfy than the old 50-percent requirement and can be as tough as a 100-percent requirement. Several company officials also noted that they have had difficulty because they were not allowed to use banked credits. However, some contractors commented that South Korea was consistent in its requirements and would negotiate if the U.S. company was trying to meet its offset obligation. Table II.2 summarizes South Korea’s offset guidelines and agreements. Offset requirements first established before 1985. Latest version published in January 1992. Acquire key advanced technologies required for defense and commercial industry research and development and production; enhance depot maintenance capability; enhance opportunities for manufacturing equipment and its components; and provide opportunities to repair and overhaul foreign military equipment and to export defense-related products. Agreements were generally consistent with the guidelines. However, certain offset projects had no relationship to the weapon systems being purchased. Military procurements exceeding $5 million are subject to offset. Several offset agreements prior to 1992 involved contracts that were below the current $5-million threshold. In addition, according to one contractor, South Korea combined two separate purchases into one contract to reach the offset threshold. Direct offset is preferred, but indirect offset is acceptable. Agreements were generally consistent with the guidelines and reflected a willingness to accept indirect offsets that will contribute to economic development, especially those involving technology transfer and training. At least 30 percent of contract value. Since 1985, agreements have generally required at least a 30-percent offset—and frequently more. Limited use of multipliers. 
Facilities, equipment, and tooling provided by the contractor free of charge are given a multiplier of two times their actual cost. Several offset agreements provided multipliers that were larger than the published guidelines, especially for technology transfer and training. For example, providing on-the-job training for South Korean engineers at a U.S. contractor’s plant was valued at 10 times the cost of providing the training. Banking excess credits allowed in several individual agreements, but most were silent on banking. 10 percent of unfulfilled obligation. Agreements were consistent with the guidelines. Generally corresponds to the performance period for the main contract. Agreements were generally consistent with the guidelines. Agreements occasionally required and paralleled overall contract performance periods. Many agreements were prescriptive and specified the South Korean partners to be used by U.S. contractors or the exact training to be provided by the U.S. contractor to South Korean workers. Agreements frequently required U.S. contractors to buy South Korean products for export resale that had no relationship to the contract. Taiwan instituted its offset policy about 1993. Taiwan uses offsets to encourage private investment, upgrade its industries, and enhance international competitiveness. Taiwan’s goal is to form long-term supplier relationships with foreign companies, using training and technology transfer to gain expertise. Taiwan emphasizes these areas by offering large multipliers for such projects. For example, the agreements included multipliers as high as 25 for technology transfer, while other activities such as purchases from local firms received no or very low multipliers. Company officials noted that Taiwan recently passed a requirement calling for 30-percent offsets. Taiwan’s offset guidelines are broad, laying out several categories of industrial cooperation and methods to achieve it—from production of weapon system components to local investment. Offset agreements appear flexible, with projects targeted to areas considered strategic for economic development. In contrast to South Korea and Singapore, Taiwan generally prefers commercial offset projects rather than defense-related projects. Although some agreements include defense-related offset projects, such as coproduction of weapons components, the agreements more commonly involve commercial projects, such as marketing assistance. Generally, the companies we visited believe that Taiwan’s offset requirements have been easily managed. Table II.3 summarizes Taiwan’s offset guidelines and agreements. All are after date of guidelines. To achieve the timely introduction of key technologies and high-tech industries to Taiwan. Targeted industries include aerospace, semiconductors, advanced materials, information products, precision machinery and automation, and advanced sensors. Agreements are consistent with the guidelines. To be determined on a case-by-case basis; both civilian and military government procurements are subject to offset. The smallest contract we reviewed with an offset requirement was for about $60 million. Both direct and indirect offsets are acceptable. Agreements reflected preference for indirect offset; they either required indirect offset only or were heavily weighted toward indirect. To be determined on a case-by-case basis. However, company officials noted that Taiwan’s legislature passed a law in 1994 requiring 30-percent offsets. 
Most of the agreements we reviewed required 10-percent offset with an additional 20 percent expressed as a goal; however, the most recent agreement required 30-percent offset. Range from 2 for local purchases to 10 for technology transfer. Multipliers provided for a broad range of transactions—technology transfer, training, technical assistance, marketing assistance, investments, and joint ventures—valued at between 2 and 25 times the cost of the service provided. Most agreements do not specifically discuss banking excess credits. None. Guidelines based on good faith. However, the policy notes that a contractor’s track record in fulfilling an offset obligation is considered when awarding future contracts. Agreements did not include penalties. Concurrent with master contract. All agreements had a 10-year performance period. Not stated. Goal is to participate in long-term supplier relationships, using training and technology transfer to gain expertise. Guidelines are broad, laying out several categories of industrial cooperation and methods to achieve it—from production of weapon system components to local investment. Consistent with guidelines, the offset projects were targeted to areas considered “strategic” to economic development. In 1992, Kuwait began requiring offsets for all defense purchases over $3 million. Kuwait pursues offsets that will generate wealth and stimulate the local economy through joint ventures and other investments in the country’s infrastructure. The limited number of agreements we reviewed call for U.S. contractors to propose investment projects and then manage and design the projects selected by the Kuwaiti government. The agreements required offsets equal to 30 percent of the contract values, as stated in Kuwait’s offset policy. U.S. companies have had limited experience with Kuwait’s offset program to date, but generally consider it manageable. Table III.1 summarizes Kuwait’s offset guidelines and agreements. Offset policy instituted in July 1992. Revised guidelines issued in March 1995. All are after the institution of the 1992 guidelines. Promote and stimulate the local economy. Agreements are consistent with program goals. Offset threshold is about $3 million. Exceed threshold. Indirect offsets. Agreements involved indirect offsets. 30 percent of the value of the contract. Agreements required 30-percent offset. The relative value of multipliers reflect Kuwait’s preference for capital expenditures, research and development, training, and increased export sales of locally produced goods and services (multipliers of 3.5). Other activities are given smaller multipliers. Not stated. Allowed up to 100 percent of offset obligation. Banking permitted. 6 percent of unfulfilled obligation. Not stated. Not stated. 50 percent of the offset should be completed within 4 years. Not stated. Long-term investment through joint ventures is encouraged. Agreements reflected interest in developing viable businesses. Saudi Arabia has intermittently required offsets since the mid-1980s. Officials at one company observed that Saudi Arabia has recently pursued “best effort” agreements with U.S. defense contractors, rather than formal offset agreements. Saudi Arabia uses its offset policy to broaden its economic base and provide employment and investment opportunities for its citizens. The offset agreements are informal with no set offset percentage, although officials at one company estimated their arrangement was equivalent to a 35-percent offset agreement. 
The agreements include a requirement that companies enter into joint ventures with local companies to implement offset activities. The offset activities consist of defense- and nondefense-related projects. In some instances, the offset projects include local production of parts or components for the weapon system being purchased. However, these represent small portions of the overall offset projects, and the Saudi government agreed to pay price differentials to make Saudi manufacturers price competitive. The agreements do not include explicit multipliers, but some agreements grant credits for technology transfers at the cost Saudi Arabia would have incurred to develop the technology. Companies commented that Saudi Arabia wants to establish strategic partnerships and long-term relationships with its suppliers and that the Saudi government has been fairly flexible in negotiating offset agreements. Table III.2 summarizes Saudi Arabia’s offset guidelines and agreements.

Table III.2: Saudi Arabia—Offset Guidelines and Agreements
Dates of agreements reviewed - 1990-93 (one prior agreement in 1988).
Goals - Guidelines: Broaden the economic base, increase exports, diversify the economy, transfer state-of-the-art technology, and provide investment opportunities for Saudi Arabian investors. Agreements: Agreements were consistent with program goals.
Threshold - Guidelines: Not stated. Offset applies to both military and civil federal procurement. Agreements: Agreements were associated with high-dollar value contracts.
Type of offset - Guidelines: Indirect offsets are preferred. Agreements: Mostly indirect offsets that were unrelated to defense.
Offset percentage - Guidelines: 35 percent of contract value. Agreements: Agreements were consistent with the requirement or called for “best efforts” commitment.
Multipliers - Guidelines: Offset credit for training Saudi Arabian nationals will be given at two times the contractors’ cost (i.e., a multiplier of two). No other multipliers cited. Agreements: Not stated. However, technology transfers were valued at the cost Saudi Arabia would have incurred to develop the technology, plus the value of future benefits.
Banking of excess credits - Not stated.
Penalties - Guidelines: Not stated. Agreements: Agreements generally called for “best efforts” as part of Saudi Arabia’s desire to establish long-term relationships.
Performance period - Guidelines: 10 years. Agreements: Not stated.
Eligible projects - Guidelines: Oil- and gas-related projects are not eligible for credit. Agreements: Offset activity involved mostly nondefense-related projects unrelated to the oil and gas industry.
Joint ventures and other features - Guidelines: Should be 50 percent of total offset obligation. Joint ventures sought between foreign and Saudi firms; foreign firm’s ownership share may decrease to 20 percent by end of 10 years. Agreements: Agreements required joint ventures, but appeared to be less formal than published guidelines. Agreements cited specific Saudi Arabian firms for joint venture partners.

The United Arab Emirates first instituted its offset policy in 1990. In 1993, it issued new requirements granting offset credit only for the profits generated by offset projects. The policy requires a 60-percent offset on all contracts valued at $10 million or more. The United Arab Emirates uses offsets to generate wealth and diversify its economy by establishing profitable business ventures between foreign contractors and local entrepreneurs. The United Arab Emirates is interested in a wide range of nondefense-related offset projects. Company officials generally questioned the feasibility of the United Arab Emirates’ current offset requirements. They said only a small number of viable investment opportunities exist and such projects take several years to generate profits. Table III.3 summarizes the United Arab Emirates’ offset guidelines and agreements.
Table III.3: United Arab Emirates—Offset Guidelines and Agreements
Date of guidelines - New guidelines issued about 1993. Prior guidelines dated 1990. Dates of agreements reviewed: All after the institution of the 1990 requirements.
Goals - Guidelines: Generate wealth by creating commercially viable businesses through partnerships with local entrepreneurs. Agreements: Agreements were consistent with guidelines in effect at the time.
Threshold - Guidelines: For all “substantial” defense procurement. Requirements specifically cite a $10-million threshold for any government procurement. Agreements: All agreements exceeded the threshold.
Type of offset - Guidelines: Policy implies nondefense, wealth-generating investments are preferred. The policy explicitly discourages, however, labor-intensive projects. Agreements: Agreements involved indirect offsets unrelated to defense.
Offset percentage - Guidelines: At least 60 percent of the value of the imported content. Agreements: All agreements required a 60-percent offset.
Multipliers - Guidelines: Not mentioned under current policy. Credit is based on profit generated rather than a valuation (using multipliers) of the investment in the project. The 1990 policy permitted multipliers. Agreements: Some agreements that pre-date the new offset policy included multipliers that reflected the United Arab Emirates’ preference for investment.
Banking of excess credits - Guidelines: Banking of offset credits is permitted. Agreements: Agreements permitted banking of offset credits and buying of excess credits from other companies.
Penalties - Guidelines: 8.5 percent of the unfulfilled obligation. Agreements: Consistent with guidelines.
Performance period - Agreements: Some agreements exceeded the 7-year performance period requirement.
Milestones - Guidelines: To be negotiated for each offset proposal. Agreements: Agreements included milestones throughout the obligation.
Eligible projects - Guidelines: Companies must demonstrate that offset ventures are new work or extensions of existing activities. Agreements: Agreements required projects to be preapproved for eligibility and offset credit.
Offset development fund - Guidelines: May require financial investment in an offset development fund in lieu of conventional offsets. Agreements: Chase Manhattan is working to set up a United Arab Emirates investment fund. According to company officials, the fund will require a minimum $5-million investment for at least 10 years, with a guarantee of at least a 2.5-percent return. The country will provide 20-percent offset credit against investments in the fund.
Basis for credit - Guidelines: Offset credit for technology transfer, training, parts production, and all offset projects is granted based on the profits generated by these activities rather than the contractor’s implementation cost. Agreements: Company officials noted that this requirement was impractical.
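The credit mechanics that recur throughout these tables reduce to simple arithmetic: credit earned equals the contractor’s actual cost for a qualifying activity multiplied by the negotiated multiplier, credit beyond the obligation can be banked where guidelines allow it, and a shortfall may draw a penalty expressed as a percentage of the unfulfilled obligation. The sketch below only illustrates that arithmetic; the contract size, activities, multipliers, and penalty rate are hypothetical values chosen in the spirit of the guidelines summarized above, not terms of any agreement we reviewed.

```python
# Illustrative offset-credit arithmetic; all figures are hypothetical examples.
def offset_credit(actual_cost, multiplier):
    """Credit granted for one offset activity: cost times the negotiated multiplier."""
    return actual_cost * multiplier

contract_value = 100_000_000
obligation = 0.30 * contract_value             # a 30-percent offset requirement
activities = [                                 # (activity, contractor cost, multiplier)
    ("technology transfer", 2_000_000, 10),
    ("training local engineers", 1_000_000, 5),
    ("purchases from local firms", 8_000_000, 1),
]

total_credit = sum(offset_credit(cost, m) for _, cost, m in activities)
banked = max(0, total_credit - obligation)     # excess credit, where banking is permitted
shortfall = max(0, obligation - total_credit)
penalty = 0.10 * shortfall                     # e.g., a 10-percent penalty on the unfulfilled obligation

print(f"credit ${total_credit:,.0f}, banked ${banked:,.0f}, penalty ${penalty:,.0f}")
```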
Pursuant to a congressional request, GAO reviewed offset requirements associated with military exports, focusing on: (1) how the offset goals and strategies of major buying countries have changed; (2) the offset requirements of these countries and how they are being satisfied; and (3) the impact of offsets and any action taken by the U.S. government. GAO found that: (1) demands for offsets in foreign military procurement have increased in selected countries; (2) countries that previously pursued offsets are now demanding more; (3) countries are requiring more technology transfer, higher offset percentages, and higher local content requirements to offset their foreign military purchases; (4) further, countries that previously did not require offsets now require them as a matter of policy; (5) the offset strategies of many countries in GAO's study now focus on longer term offset deals and commitments; (6) this shift highlights these countries' use of offsets as a tool in pursuing their industrial policy goals; (7) the types of offset projects sought or required by buyer countries in GAO's review depend on their offset program goals, which in turn are driven by their industrial and economic development needs; (8) companies are undertaking a broad array of activities to satisfy offset requirements; (9) countries with established defense industries are using offsets to help channel work to their defense companies; (10) countries with developing defense and commercial industries pursue both defense- and nondefense-related offsets that emphasize the transfer of high technology; (11) countries with less industrialized economies often pursue indirect offsets as a way to encourage investment and create viable commercial businesses; (12) views on the impact of offsets on the U.S. economy and specific industries are divided; (13) measuring the impact of offsets on the economy as well as specific defense industries is difficult without reliable data; (14) the Department of Commerce is currently gathering additional information on the impact of offsets and is expected to issue a report in 1996; (15) to date, the executive branch agencies have consulted with other countries about certain offsets associated with individual defense procurements, but have not had an interagency team hold the broad-ranging discussions on the ways to limit the adverse impacts of offsets as called for in a 1990 presidential policy statement; (16) according to the Commerce Department, industry is not opposed to the initiation of consultations, but is concerned about unilateral U.S. government actions to limit the use of offsets; and (17) moreover, representatives from several defense companies expressed doubt about the government being able to enforce restrictions on offsets.
You are an expert at summarizing long articles. Proceed to summarize the following text: Generally, HCFA considers transportation costs to be part of physicians’ practice expense for a service under Medicare’s physician fee schedule. For example, physicians do not receive separate transportation payments when they visit Medicare beneficiaries in nursing homes. However, this policy is not followed when it comes to the transportation of equipment used to do diagnostic tests. HCFA established specific guidance for carriers to follow regarding portable x-ray and EKG services. Because HCFA did not issue specific instructions for other diagnostic tests, such as ultrasound, each Medicare carrier developed its own policies. Section 1861(s)(3) of the Social Security Act provides the basis for the coverage of diagnostic x-rays furnished in a Medicare beneficiary’s residence. HCFA believes that because of the increased costs associated with transporting x-ray equipment to the beneficiary, the Congress intended for HCFA to pay an additional amount for the transportation service furnished by an approved portable x-ray supplier. Thus, HCFA has established specific procedure codes to pay for the transportation of x-ray equipment. HCFA added EKG services allowed in homes to the established list of approved services that suppliers may provide and established a code to pay for the transportation of EKG equipment. Many Medicare carriers limited payment of transportation costs for EKG services to portable x-ray suppliers. However, others had allowed it for other types of providers such as independent physiological laboratories (IPL). HCFA never established a national policy for transportation costs related to ultrasound services. Each carrier developed its own policy. Medical directors for each of the carriers decided whether to reimburse for transportation costs separately. In 15 states, carriers had a policy to reimburse separately for transportation costs associated with ultrasound services. Beginning January 1, 1996, carriers could allow transportation payments for only the following services: (1) x-ray and standard EKG services furnished by an approved portable x-ray supplier and (2) standard EKG services furnished by an IPL under special conditions. For all other types of diagnostic tests payable under the physician fee schedule, travel expenses were considered “bundled” into the procedure payment. For example, carriers could no longer make separate transportation payments associated with ultrasound services. After further review, HCFA again revised its policy. HCFA concluded that the statute authorized carriers to make separate transportation payments only for portable x-ray services. Therefore, HCFA published a final regulation providing that effective January 1, 1997, carriers would no longer make separate transportation payments associated with EKG services. The enactment of the Balanced Budget Act in August 1997 caused additional changes in Medicare’s transportation payment policy. First, BBA temporarily restored separate payments for transporting EKG equipment but not ultrasound equipment during 1998. The law requires the Secretary of Health and Human Services to make a recommendation by July 1, 1998, to the Committees on Commerce and Ways and Means of the House of Representatives and the Committee on Finance of the Senate on whether there should be a separate Medicare transportation fee for portable EKGs starting in 1999. 
Second, BBA phases in a prospective payment system for skilled nursing care that will pay an all-inclusive per diem rate for covered services. Beneficiaries needing skilled care after being discharged from the hospital are covered under Part A for 100 days of care during a benefit period. Part A coverage includes room and board, skilled nursing and rehabilitative services, and other services and supplies. Thus, the per diem rate paid to nursing facilities would include all services during the period the beneficiary is receiving posthospital extended care. For example, services such as EKGs and ultrasound will no longer be paid for separately but will be included in the per diem rate. The prospective payment provision begins July 1, 1998. Third, BBA establishes an ambulance service fee schedule beginning in 2000. This provision is designed to help contain Medicare spending on ambulance service. Medicare paid for more than 14 million EKG and 5 million ultrasound services in 1995 at a cost to the Medicare program of about $597 and $976 million, respectively. Most EKG and ultrasound services were performed in physicians’ offices or hospitals. In 1995, about 2 percent of the EKG and less than 1 percent of the ultrasound services were provided in beneficiaries’ homes or nursing homes, costing the Medicare program about $12 million for the EKGs and $8 million for the ultrasound services. Of these services, about 88 percent of the EKG and 82 percent of the ultrasound services were done in a nursing home. These services were usually provided by portable x-ray suppliers and IPLs. Table 1 compares these services in these settings. Because HCFA regulations allowed EKG service transportation payments to be paid only to portable x-ray providers and certain IPLs for EKG services done in a beneficiary’s residence, it is not surprising that these providers accounted for 83 percent of all Medicare EKG services performed in nursing homes. Likewise, these two types of providers accounted for a high portion of the Medicare ultrasound services provided in nursing homes. General practitioners, cardiologists, and internists also provided EKG and ultrasound services. In 1995, 1,317 providers were doing EKGs and 337 were doing ultrasound services in nursing homes. Of the total EKG providers, 676 were portable x-ray suppliers and 75 were IPLs. Of the total ultrasound providers, 51 were portable x-ray suppliers and 83 were IPLs, and combined they accounted for more than half of the ultrasound services done in nursing homes. About one-fifth of the states accounted for a disproportionately high concentration of EKG and ultrasound services in 1995, compared with these states’ nursing home populations. In addition, it appears that these services were generally provided by a few large providers. Thus, this change in transportation policy will have a larger effect on Medicare spending in some geographic areas. Eleven states accounted for nearly three-fourths of the 255,000 EKGs done in nursing homes. This appears to be disproportionately high when compared with the nursing home population in the 11 states. Figure 1 shows the use rates in each state per 100 Medicare nursing home residents. Furthermore, a handful of providers in each of these states accounted for most of the services. For example, in New York 7 percent of the providers accounted for 77 percent of the services. (See table 2.) Similarly, the data show that 10 states accounted for more than 84 percent of the ultrasound services done in nursing homes in 1995. 
The use rate in these 10 states appears to be somewhat higher than in the 40 other states. Figure 2 shows the ultrasound use rates in each state. Less than half of the portable x-ray suppliers and IPLs did most of the ultrasound services for which separate transportation payments were made, and only a handful of them did more than half of these services. Data show that 54 portable x-ray suppliers and IPLs did 89 percent of these services. Further, 11 of these 54 providers accounted for 52 percent of the transportation claims. Similar to what we found in the EKG data, there were a few high-volume providers in the 10 states, as shown in table 3. About 19 percent of the EKGs and 21 percent of the ultrasound tests done in nursing homes in 1995 would be unaffected by any change in the transportation payment policy because BBA eliminates separate payments for services provided to beneficiaries in skilled facilities while their stay is covered under posthospital extended care. An additional 37 percent of the portable EKGs and 68 percent of the ultrasound tests were done without the providers’ receiving additional payments for transporting the equipment. Consequently, 56 percent of the EKG services and 89 percent of the ultrasound tests provided to beneficiaries in their place of residence would be unaffected by the elimination of separate transportation payments. There is some uncertainty, however, as to whether (and to what extent) providers will cut back on services for which they previously received a transportation payment. Nonetheless, it is reasonable to assume that at least some of these services would also continue under a revised payment policy. If providers reduced services in nursing homes, some residents would be inconvenienced by having to travel to obtain these tests. In some instances, the nursing home may need to provide transportation or staff to accompany a resident to a test site. Consequently, nursing homes could be affected as well. In the future, all services provided to Medicare beneficiaries in skilled facilities who are under posthospital extended care will be included under a per diem prospective payment rate. Nursing facilities will receive a per diem rate for routine services such as room and board and all other services such as EKGs and ultrasound. Based on the 1995 data, 19 percent (48,000) of the EKG services and 21 percent (6,520) of the ultrasound services will be incorporated under the prospective rates. In 1995, only portable x-ray suppliers and certain IPLs received separate transportation payments. Therefore, any EKG services done in nursing homes by other medical providers such as general practitioners, internists, and cardiologists did not include separate transportation payments. Data for 1995 show that 55,580 of the EKG services done in nursing homes did not include a separate transportation payment. (See table 4.) When an EKG or ultrasound service is done in conjunction with an x-ray, the provider receives a transportation fee for the x-ray service but not the EKG or ultrasound. The 1995 data covering EKG services with separate transportation payments show that 38,820 of the beneficiaries who received an EKG service also had an x-ray service done during the same visit. Thus, any provider doing an EKG and an x-ray service would continue to receive a separate transportation payment for the x-ray service. 
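The 56- and 89-percent figures are simply the sum of the two groups of services that would be unaffected: those folded into the new per diem rate and those already done without a separate transportation payment. A quick check of that arithmetic, using the percentages reported above:

```python
# Services unaffected by eliminating separate transportation payments (1995 data):
# those covered by the BBA per diem rate plus those already done without a transport fee.
ekg_per_diem, ekg_no_fee = 19, 37              # percent of nursing home EKGs
ultra_per_diem, ultra_no_fee = 21, 68          # percent of nursing home ultrasounds

print("EKG unaffected:", ekg_per_diem + ekg_no_fee, "percent")              # 56 percent
print("Ultrasound unaffected:", ultra_per_diem + ultra_no_fee, "percent")   # 89 percent
```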
Before HCFA issued regulations in December 1995, Medicare providers in less than a third of the states were paid for transporting ultrasound equipment to beneficiaries’ residences. Each carrier had its own policy regarding reimbursement for ultrasound equipment transportation costs. Carrier representatives responsible for Medicare Part B program payments in only 14 states and part of another told us that they had a policy to make transportation payments when billed for ultrasound services. See figure 3. Because carriers responsible for fewer than one-third of the states allowed separate transportation payments, most ultrasound services performed in nursing homes were done without such payment. Only 3,220 (15 percent) of the 23,600 ultrasound services done in nursing homes in 1995 had claims for separate transportation payments. The remainder, approximately 20,380, were done without a separate transportation payment. (See table 4.) Even in states where carriers had a policy to pay separate transportation payments, there were many instances in which providers performed ultrasound services in nursing homes but did not receive a separate transportation payment. For example, in Maryland and Pennsylvania, where carriers had policies to make separate transportation payments, 79 and 55 percent, respectively, of the ultrasound services done in nursing homes by providers did not involve separate transportation payments. The average frequency of ultrasound tests per nursing home resident varied among states but did not vary systematically with carriers’ transportation payment policies. That is, there is no indication from the 1995 data that nursing home residents systematically received fewer services in states that did not make separate transportation payments compared with residents in states that did pay. For example, Michigan and New York—states where separate transportation payments were generally not made—had high ultrasound use rates, while Massachusetts—where separate transportation payments were made—had a low rate. Advocacy groups gave contradictory opinions as to the possible effects HCFA’s changed policy would have on Medicare beneficiaries. Generally, officials representing medical groups believed that EKG and ultrasound services would continue to be available and thus did not see an adverse effect on the availability of care for patients. In contrast, representatives from nursing homes and EKG provider associations expressed concern about potential decreases in quality of care, especially for frail elderly beneficiaries who would be most affected by being transported away from their homes. In addition, officials at several nursing homes we visited said that sending beneficiaries out also imposes additional costs and burdens on the nursing home because often these beneficiaries have to be accompanied by a nursing home representative. We cannot predict whether the revised payment policy will decrease or increase Medicare spending because we do not know the extent to which providers will continue to supply portable EKG and ultrasound services without separate transportation payments. Because of these uncertainties, we developed a range estimate of potential savings and costs associated with the revised payment policy. In 1995, if the prospective payment system for skilled nursing care and the policy of not making transportation payments had been in effect, Medicare outlays would have been lower by as much as $11 million on EKGs and $400,600 on ultrasound services. 
However, these savings would have materialized only to the extent that homebound beneficiaries and nursing home residents did not travel outside in Medicare-paid ambulances to receive these tests. We cannot predict the likelihood that savings will be realized because they depend upon the future actions of portable equipment providers and nursing home operators. Providers of portable equipment may continue to provide EKG and ultrasound services even if they no longer receive the separate transportation payments. Many mobile providers have established private business relationships with the nursing homes they serve and may be eager to maintain those relationships. In addition, many also provide other services to nursing homes, such as x-ray services. Therefore, they would be likely to continue EKG services to some degree. Prospective payment may change the way nursing facilities provide services. Some nursing homes may purchase the equipment to provide diagnostic tests in house. Representatives from two of the seven nursing homes we visited told us that they were considering purchasing EKG equipment and having nursing home staff perform the tests. The representatives noted that this would be feasible because EKG equipment is relatively inexpensive and staff need only limited training to perform the tests (no certification is needed). They also noted that residents needing EKGs would receive quicker service if the equipment were always on the premises. Because nursing homes may have additional transportation or staff costs for each test, the revised payment policy may produce Medicare savings by reducing the use of EKG and ultrasound services. During our review of case files at selected nursing homes, we observed a number of instances in which beneficiaries entering the nursing home were receiving EKG tests, although there were no indications that these beneficiaries were experiencing any problems to warrant such tests. In many of these situations, nursing home officials said that the tests provided baseline information. To the extent that eliminating the transportation payment would reduce inappropriate screening tests billed to Medicare, it would produce savings. Eliminating separate transportation payments could increase Medicare spending if beneficiaries travel to hospitals or physicians’ offices to be tested. Some very sick or frail beneficiaries would need to travel by ambulance. We found that the costs for the service itself are about the same whether the service is delivered in a hospital, a physician’s office, or a nursing home. However, the cost of transporting a beneficiary by ambulance is substantially greater than the amount paid to mobile providers for transporting equipment to a beneficiary’s residence. We estimate that the potential annual net costs to Medicare from eliminating transportation payments could be as much as $9.7 million for EKGs and $125,000 for ultrasound tests. These estimates, based on 1995 data, represent an upper limit that would be reached only if equipment providers stopped providing all services for which they previously received a transportation payment and the beneficiaries were transported by ambulance to receive the services. Our net cost estimates are based on (1) the number of beneficiaries who would be likely to need transporting by ambulance to receive EKG and ultrasound services, (2) the cost of ambulance transportation, and (3) the costs of EKGs and ultrasound tests in other settings. 
We estimate that about half of the beneficiaries who received an EKG and more than one-third of the beneficiaries who received an ultrasound service in 1995 would likely have been transported by ambulance had the equipment not been brought to them. Our estimates are based on our review of beneficiary case files from several nursing homes in two states. (See appendix I for more detail.) The transportation payments by Medicare for ambulance services are significantly greater than the transportation payments made to providers of portable EKG and ultrasound equipment. In 1995, the average ambulance transportation payment for beneficiaries in skilled nursing facilities who were transported for an EKG test ranged from $164 (for an average trip in North Carolina) to $471 (for an average trip in Connecticut). For the same period, the average payment made for transporting EKG equipment to a nursing home ranged from about $26 (in Illinois) to $145 (in Hawaii, Maine, Massachusetts, New Hampshire, and Rhode Island). The cost for EKG or ultrasound services is about the same in every setting. Anywhere other than a hospital outpatient setting, the Medicare payment for the service is determined by the physician fee schedule. In a hospital outpatient setting, Medicare payments for services such as EKGs and ultrasound tests are limited to the lesser of reasonable costs, customary charges, or a “blended amount” that relates a percentage of the hospital’s costs to a percentage of the prevailing charges that would apply if the services had been performed in a physician’s office. Our analysis of 1995 hospital cost reports does not suggest that Medicare would pay more for the services if they were performed at a hospital. While millions of EKG and ultrasound tests are provided yearly to Medicare beneficiaries, only a small percentage of these tests are performed in a beneficiary’s home or nursing home. Many of the EKGs and most of the ultrasound tests performed in those settings would be unaffected by the elimination of separate transportation payments. We cannot predict how providers of portable EKG and ultrasound equipment will react over the long term to the elimination of transportation payments or what actions nursing homes might take to provide services if they were not delivered. Also, we cannot predict what actions skilled facilities may take as a result of the prospective payment system that will be implemented. Consequently, our estimate of the effect of a revised payment policy ranges from a savings of $11 million to a cost of $9.7 million for EKG tests and a savings of $400,000 to a cost of $125,000 for ultrasound tests. Because providers’ reactions are uncertain, HCFA would have to eliminate transportation payments to reliably gauge the revised policy’s effect on Medicare spending. By carefully monitoring the revised policy over a sufficient period of time, HCFA could determine whether the revised payment policy caused a net decrease in Medicare spending or a net increase. In the absence of such hard data, however, we cannot recommend a specific course of action regarding the retention or elimination of separate Medicare transportation payments for portable EKG and ultrasound tests. HCFA officials stated that our methodology was appropriate and that they generally agreed with the results of our review. Furthermore, they agreed that precisely estimating the potential cost of the revised payment policy is difficult. 
However, HCFA officials believe that the upper limit of our potential Medicare spending estimate is based on very conservative assumptions and that this amount of additional Medicare spending is unlikely to occur if separate transportation payments are eliminated. We agree that our approach was conservative so as not to understate the potential for additional Medicare spending. However, as we state in the report, if providers continue to supply these services for business reasons, then Medicare might save money or incur additional costs below our estimated upper limit because fewer beneficiaries would need transporting by ambulance for the services. This would also be true, especially in the case of EKGs, if nursing homes purchase the necessary equipment and keep it on site. HCFA officials were also concerned over what appears to be a disproportionate amount of EKG and ultrasound services by a few providers in selected states. HCFA officials thought this pattern may indicate potential abuse. We did not attempt to determine appropriate use rates for these services and thus cannot conclude whether the rates are too high or too low in some areas. Our purpose in showing the concentration of EKG and ultrasound services was to provide some perspective on the beneficiaries likely to be most affected by HCFA’s changed payment policy. We incorporated other HCFA comments in the final report where appropriate. As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to the Secretary of the Department of Health and Human Services, the Administrator of HCFA, interested congressional committees, and others who are interested. We will also make copies available to others on request. Please call James Cosgrove, Assistant Director, at (202) 512-7029 if you or your staffs have any questions about this report. Other major contributors include Cam Zola and Bob DeRoy. To obtain information on electrocardiogram (EKG) and ultrasound tests done in 1995, we extracted pertinent use data from a national database consisting of all Medicare Part B claims from a 5-percent sample of beneficiaries. We used valid 1995 EKG and ultrasound procedure codes for the diagnostic procedure itself. We eliminated all codes that represented only a physician’s interpretation or report and codes for procedures that were delivered in settings other than nursing homes. We used 1995 data because it was the last year in which both EKG and ultrasound transportation costs could have been reimbursed under Medicare. In addition, we obtained data on outpatient costs for radiological and other diagnostic tests for all hospitals reporting such data to the Health Care Financing Administration (HCFA) in 1995. Because paying transportation costs relating to ultrasound services was a “local” decision, we contacted all the Medicare Part B carriers to determine the reimbursement practices in effect in every state in 1995. We visited 12 judgmentally chosen nursing homes in Florida and Pennsylvania and randomly selected 176 cases of beneficiaries who had an EKG or ultrasound test done in the home during 1995. We discussed the reasons for the test and the general condition of the beneficiary at the time of the test with an appropriate nursing home official, usually a nurse. 
We asked the nurses to provide us with their opinion as to how each beneficiary would have been transported if he or she had to travel away from the home for the test. These beneficiaries may better reflect the need for ambulance services by most nursing home beneficiaries. From our sample, we determined that about 50 percent of the beneficiaries who received an EKG test and 40 percent of the beneficiaries who received an ultrasound test would most likely have been transported by ambulance if the tests had been done outside the nursing home. Most of the beneficiaries who the nurses believed would have needed an ambulance were totally bedridden. The concern generating the order for the test had been either that an episode developed late at night or that a condition was serious enough to border on a call to 911. Beneficiaries whom the nurses believed could be transported by means other than an ambulance were usually ambulatory and their medical situations generally involved a scheduled service done 1 or 2 days after the order or a baseline test requested upon entering the home. We discussed HCFA’s policy with HCFA officials, representatives of organizations representing portable x-ray suppliers, independent physiological laboratory providers, and several individual providers of EKG and ultrasound services. Also, we sought the opinions of several medical associations, including the American College of Cardiology, the American College of Physicians, and the American College of Radiology. In addition, we solicited comments from 11 health care associations. In estimating the potential net cost to Medicare from eliminating transportation payments, we did the following: (1) identified, from the sample 5-percent national claims data file, the Medicare beneficiary population that received an EKG or ultrasound service from a provider that was paid a transportation fee for delivering the service; (2) reduced this count by the beneficiaries who also had an x-ray service (since the provider would continue to get transportation fees for the x-ray), the beneficiaries who had the service delivered by a provider who could not be paid transportation expenses, and beneficiaries receiving the services while covered under posthospital extended care; (3) estimated the percentage of beneficiaries who would have been transported by ambulance (using our observations from case files in two states); (4) developed an average ambulance fee paid in each state (using data on the skilled nursing home beneficiaries who went by ambulance in 1995 to an outpatient facility for a diagnostic test); and (5) determined the transportation fee paid to mobile providers in each state.
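Read as pseudocode, the five steps above amount to a single calculation: count the services that would lose the transportation payment, assume some share of those beneficiaries would instead travel by ambulance, and price each such trip at the ambulance fee less the equipment-transportation fee Medicare would no longer pay (the test itself costs about the same in any setting). The sketch below is a minimal illustration of that calculation; the function name and every input figure are hypothetical placeholders, not values drawn from the 1995 claims data.

```python
# A minimal sketch of the five-step upper-bound estimate described above.
# All inputs are hypothetical placeholders, not figures from the 1995 claims data.
def upper_bound_net_cost(services_with_transport_fee,   # step 1
                         same_visit_xray,                # step 2: provider still gets an x-ray transport fee
                         ineligible_provider,            # step 2: no transport fee was payable
                         posthospital_extended_care,     # step 2: folded into the per diem rate
                         ambulance_share,                # step 3
                         avg_ambulance_fee,              # step 4
                         avg_equipment_fee):             # step 5
    affected = (services_with_transport_fee - same_visit_xray
                - ineligible_provider - posthospital_extended_care)
    ambulance_trips = affected * ambulance_share
    # Net cost per trip: ambulance payment minus the equipment-transportation fee no longer paid.
    return ambulance_trips * (avg_ambulance_fee - avg_equipment_fee)

# Hypothetical single-state example.
print(upper_bound_net_cost(10_000, 2_500, 1_000, 1_500,
                           ambulance_share=0.5,
                           avg_ambulance_fee=300.0,
                           avg_equipment_fee=100.0))     # -> 500000.0
```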
Pursuant to a congressional request, GAO reviewed how the Health Care Financing Administration's (HCFA) revised payment policies would affect Medicare beneficiaries and program costs, focusing on the: (1) Medicare recipients, places of service, and providers who might be affected most; (2) number of services that would be affected by the changed policy; and (3) effect on Medicare's program costs. GAO noted that: (1) only a fraction of the electrocardiogram (EKG) and ultrasound tests paid for by Medicare are performed outside of physicians' offices or hospital settings and, thus, are potentially affected by the payment policy changes; (2) in 1995, Medicare paid approximately $597 million for 14 million EKGs and about $976 million for 5 million ultrasound tests in various settings; (3) only 290,000 of the EKGs and only 37,000 of the ultrasound tests were done in locations such as nursing homes or beneficiaries' residences where the provider needed to transport the diagnostic equipment; (4) nearly 90 percent of the services that required transporting equipment were provided to residents of nursing homes; (5) they were usually provided by portable x-ray and ultrasound providers; (6) some states appear to have a higher concentration of these services, with a small number of providers accounting for a large portion of each state's total portable EKG and ultrasound services; (7) many EKGs and ultrasound services provided in nursing homes would be unaffected if transportation payments were eliminated; (8) given the experience of 1995, about 56 percent of the EKGs and 89 percent of the ultrasound services provided in nursing homes would be unaffected by transportation payment changes and presumably would continue to be provided in those settings; (9) in July 1998, nursing homes will receive an inclusive per diem payment for all services provided to beneficiaries receiving Medicare-covered skilled nursing care; (10) a decision to eliminate or retain separate transportation payments for other beneficiaries will not affect the per diem payment; (11) another reason is that many nursing home EKGs and most ultrasound services in 1995 were performed by providers who did not receive a transportation payment; (12) the effect of eliminating transportation payments on the remaining 44 percent of the EKG and 11 percent of the ultrasound services is unknown because it depends on how providers respond; (13) because relatively few services would be affected, eliminating transportation payments would likely have a nominal effect on Medicare spending; (14) Medicare could save $11 million if mobile providers continue to supply services; (15) however, if mobile providers stopped bringing portable EKG equipment to beneficiaries, then some people would travel in Medicare-paid ambulances to obtain these tests; (16) eliminating transportation payments for ultrasound services would have a smaller effect; and (17) GAO estimates the effect on Medicare spending might range from $400,000 in savings to $125,000 in increased costs.
You are an expert at summarizing long articles. Proceed to summarize the following text: Although training for employed workers is largely the responsibility of employers and individuals, publicly funded training seeks to fill potential gaps in workers’ skills. In recent years, the federal government’s role in training employed workers has changed. In 1998, WIA replaced the Job Training Partnership Act after 16 years and, in doing so, made significant changes to the nation’s workforce development approach. Before implementation of WIA, federal employment and training funds were primarily focused on helping the unemployed find jobs; the WIA legislation allowed state and local entities to use federal funds for training employed workers. TANF block grants to states also allowed more flexibility to states in serving low-wage workers and, like WIA funds, federal funding authorized under TANF can now be used for training employed workers, including low-wage workers. WIA funds provide services to adults, youth, and dislocated workers and are allocated to states according to a formula. States must allocate at least 85 percent of adult and youth funds to local workforce areas and at least 60 percent of dislocated worker funds to local workforce areas. For training employed workers, the WIA funds used are from those appropriated to provide services to all adults as well as dislocated workers, funded at about $2.5 billion for program year 2001. WIA also permits states to set aside up to 15 percent of WIA funds allocated for adults, youth, and dislocated workers to their states to support a variety of statewide workforce investment activities that can include implementing innovative employed worker programs. These funds can also be spent for providing assistance in the establishment and operation of one-stop centers, developing or operating state or local management information systems, and disseminating lists of organizations that can provide training. In a previous GAO report, we reported that several states used these state set-aside funds specifically for implementing employed worker training. WIA also required that all states and localities offer most employment and training services to the public through the one-stop system—about 17 programs funded through four federal agencies provide services through this system. For this system, WIA created three sequential levels of service—core, intensive, and training. The initial core services, such as job search assistance and preliminary employment counseling and assessment, are available to all adults and WIA imposes no income eligibility requirements for anyone receiving these core services. Intensive services, such as case management and assistance in developing an individual employment plan, and training require enrollment in WIA and generally are provided to persons judged to need more assistance. In order to move from the core level to the intensive level, an individual must be unable to obtain or retain a job that pays enough to allow the person to be self-sufficient, a level that is determined by either state or local workforce boards. In addition, to move from the intensive level to the training level, the individual must be unable to obtain other grant assistance, such as Department of Education grants, for such training services. 
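The sequential structure described above reduces to two eligibility gates layered on top of universally available core services. The sketch below is only an illustration of that sequence; the wage threshold and function names are invented for the example, since the actual self-sufficiency level is set by state or local workforce boards.

```python
# Illustrative sketch of WIA's three sequential service levels (core, intensive, training).
# The self-sufficiency threshold is set by state or local boards; the value here is invented.
SELF_SUFFICIENCY_HOURLY_WAGE = 15.00

def wia_service_level(employed, hourly_wage, other_grant_aid_available):
    level = "core"  # job search assistance and preliminary assessment, open to all adults
    # Intensive services: for those unable to obtain or retain a self-sufficient job.
    if not employed or hourly_wage < SELF_SUFFICIENCY_HOURLY_WAGE:
        level = "intensive"
        # Training: additionally requires that other grant assistance is unavailable.
        if not other_grant_aid_available:
            level = "training"
    return level

# An employed but low-wage worker with no other grant aid could reach WIA-funded training.
print(wia_service_level(employed=True, hourly_wage=9.50, other_grant_aid_available=False))
```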
Under WIA, states are encouraged to involve other agencies besides workforce development—including the agencies responsible for economic development and the Department of Health and Human Services’ TANF program—in the planning and delivery of services in the one-stop center system. WIA performance measures are designed to indicate how well program participants are being served by holding states and local areas accountable for such outcomes as job placement, employment retention, and earnings change. WIA requires the Department of Labor and states to negotiate expected performance levels for each measure. States, in turn, must negotiate performance levels with each local area. The law requires that these negotiations take into account such factors as differences in economic conditions, participant characteristics, and services provided. WIA holds states accountable for achieving their performance levels by tying those levels to financial sanctions and incentive funding. States meeting or exceeding their measures may be eligible to receive incentive grants that generally range from $750,000 to $3 million. States failing to meet their expected performance measures may suffer financial sanctions. If a state fails to meet its performance levels for 1 year, Labor provides technical assistance, if requested. If a state fails to meet its performance levels for 2 consecutive years, it may be subject to up to a 5 percent reduction in its annual WIA grant. In fiscal year 2000—the latest for which data are available—states reported spending $121.6 million in federal TANF funds specifically for education and training. Prior to WIA, welfare reform legislation created the TANF block grant, which provided flexibility to states to focus on helping needy adults with children find and retain employment. The TANF block grant is a fixed amount block grant of approximately $16.7 billion annually. Although the TANF program was not required to be part of WIA’s one-stop system, states and localities have the option to include TANF programs. As we have previously reported, many are working to bring together their TANF and WIA services. The TANF block grants allow states the flexibility to decide how to use their funds—for example, states may decide eligibility requirements for recipients, how to allocate funds to a variety of services, and what types of assistance to provide. Work-related activities that can be funded under TANF encompass a broad range of activities including subsidized work, community service programs, work readiness and job search efforts, as well as education and training activities such as on-the-job training, vocational education, and job skills training related to employment. TANF funds available to states can be used for both pre- and postemployment services. Because of the increased emphasis on work resulting from welfare reform and time limits for receiving cash assistance, state offices responsible for TANF funds may focus largely on helping their clients address and solve problems that interfere with employment, such as finding reliable transportation and affordable child care, especially for those in low-paying jobs. In recent years, several federal demonstration or competitive grants were available for training employed workers. 
For example, the Department of Labor’s Welfare-to-Work state and competitive grants were authorized by the Congress in 1997 to focus on moving the hardest-to-employ welfare recipients and noncustodial parents of children on welfare to work and economic self-sufficiency. Overall, welfare-to-work program services were intended to help individuals get and keep unsubsidized employment. Allowable activities included on-the-job training, postemployment services financed through vouchers or contracts, and job retention and support services. In addition, shortly after WIA was enacted, Labor gave all states an opportunity to apply for $50,000 planning grants for employed worker training. States were instructed to develop policies and program infrastructures for training employed workers and to indicate their available resources, anticipated needs, and plans for measuring success. The Secretary of Labor also awarded larger, 2-year competitive demonstration grants, operating from July 1, 1999, to June 20, 2001, for training employed workers. In addition, HHS is supporting the Employment Retention and Advancement (ERA) study of programs that promote stable employment and career progression for welfare recipients and low-income workers. In 1998, for the planning phase of this project, HHS awarded 13 planning grants to states to develop innovative strategies. HHS has contracted with the Manpower Demonstration Research Corporation to evaluate 15 ERA projects in eight states, comparing the outcomes of those who received services with a control group that did not. About the same time as the enactment of WIA, the Congress passed the American Competitiveness and Workforce Improvement Act of 1998, which authorized some funding for technical skills training grants as part of an effort to increase the skills of American workers. This legislation raised limits on the number of high-skilled workers entering the United States with temporary work visas, imposing a $500 fee on employers— later raised to $1,000—for each foreign worker for whom they applied. Most of the money collected is to be spent on training that improves the skills of U.S. workers. Labor awards the skill grants to local workforce investment boards, thereby linking the skill grant program with the workforce system. The workforce boards may use the funds to provide training to both employed and unemployed individuals. In a previous GAO report on these grants, we reported that, for grantees that collected participant employment data (39 of 43 grantees), approximately three- fourths of the skills training grant participants are employed workers upgrading their skills. In addition to being able to use WIA state set-aside funds for different activities including training employed workers, states can authorize funds from other available sources, such as state general revenue funds or funds related to unemployment insurance trust funds. States can also fund such training in conjunction with other federal funding grants, such as the Department of Housing and Urban Development’s Community Development Block Grant. This grant can be used for economic development activities that expand job and business opportunities for lower-income persons and neighborhoods. These state training programs serve primarily to help businesses address a variety of issues including skill development, competitiveness, economic development, and technological changes. States can fund training for employed workers through various offices. 
Workforce development offices have historically focused on training for unemployed and economically disadvantaged individuals, while economic development offices have typically focused on helping employers foster economic growth for states. Economic development offices may also provide employment and training opportunities to local communities, generally by working with employers to meet skill shortages and long-term needs for qualified workers. States have more often subsidized training tailored for businesses through their economic development offices, according to reports published by the National Governors’ Association. Most of the local workforce boards reported that they provided assistance to train employed workers, including funding training, as did all 16 states that we contacted. Two-thirds of the workforce boards responding to our survey provided assistance to train employed workers in a variety of ways, and nearly 40 percent of the workforce boards specifically targeted funds on training for these workers. Furthermore, a greater percentage of workforce boards reported funding employed worker training in program year 2001 than in program year 2000. The 16 states we contacted all funded training for employed workers and most of these states funded and coordinated this training from two or more offices. Few states and local workforce boards were able to provide information on the number of low- wage workers who participated in training because many did not categorize training participants by wage or employment status. Generally, local areas and states funded training for employed workers with various federal, state, local, or other resources, although WIA and other federal funds were the most common sources of funding for this training. Two-thirds of the local workforce boards reported performing tasks that facilitated the provision of employed worker training, such as partnering with employers to develop training proposals and providing individual services to employed workers. For example, one workforce board helped a local manufacturer obtain a state grant to retrain its employees through a project to upgrade skills. Another workforce board helped a local company by arranging English as a Second Language (ESL) classes for its employees through a community college. Other workforce boards helped employed workers establish individual training accounts with eligible training providers. However, some workforce boards responded that they did not specifically target training for employed workers because their overall funds were so limited that such training was not a priority. Several respondents explained that their clients were served based on need and that individuals with jobs were not a priority for services because of the sizeable unemployed population served by the workforce boards. Nearly 40 percent of the local workforce boards responding to our survey specifically targeted funds for employed worker training. The number of boards that reported budgeting or spending funds on such training in program years 2000 or 2001 varied by state. (See fig. 1.) Most states had at least one workforce board that targeted funds for such training. Furthermore, a greater percentage of workforce boards reported funding such training in program year 2001 than in the previous year. Of all the workforce boards responding to our survey, 22 percent reported spending funds specifically for training employed workers in 2000 and 31 percent reported spending funds on training these workers in 2001. 
When they funded training for employed workers, local workforce boards reported doing so in a variety of ways. For example, in cooperation with the economic development office, one workforce board in West Virginia worked with local businesses to identify and fund training programs to meet their business needs. At a workforce board we visited in Texas, officials received a competitive state grant to fund employed worker training to meet critical statewide industry needs in health care, advanced technology, and teaching. Some local workforce boards that had not specifically targeted training for employed workers were planning to become involved in such training or had begun discussions about developing policies for this type of training. For example, a workforce official in California cited plans to use $95,000 from a federal grant to train employed workers in information technology. Another workforce board, in Minnesota, planned to open a training center for employed workers that would focus on business needs within the local community, such as health care, and provide training through a local community college. All of the 16 states we contacted funded training for employed workers. In most of the 16 states, training for employed workers was not limited to the efforts of a single state office, but was funded by two or more state offices with training responsibilities. In fact, in 8 states, all three offices we contacted funded training for employed workers. In addition to offices responsible for workforce development, economic development, and TANF funds used for education and training, state officials also identified education departments—including those of higher education—within their states as important funding sources for training employed workers. In New York, for example, training funds were spread across about 20 state agencies, according to one state official. When more than one office within a state funded training for employed workers, most state offices reported coordinating their training efforts both formally and informally. Formal coordination methods that state officials cited included workgroups and advisory boards (15 states), memoranda of understanding or mutual referral agreements between offices (12 states), or coordinated planning (12 states). For example, Indiana’s economic development office noted that it had formal linkages with the workforce office and that they collaborated on a lifelong learning project. Offices in 9 of the 16 states also cited other means of coordination, such as having common performance measures. For example, Oregon’s workforce development office reported that state agencies were held to a set of statewide performance measures. In addition to these formal methods of coordination, all states cited informal information sharing as a key means of coordination among offices within their state. For example, an economic development official in one state said he used his telephone speed dial to contact his workforce development colleague, and a workforce development official in another state told us she had frequent working lunches with the state official responsible for TANF funds used for education and training. In addition, in a few states, offices jointly administered training programs within their states. In New York, for example, workforce development and economic development offices comanaged a high-skill training grant program for new and employed workers using $34 million in state general revenue funds over 3 years. 
For this training program, begun in July 2001, both offices reviewed training proposals, and the workforce department created contracts and reimbursed companies for part of the training costs. Similarly, in Pennsylvania, five departments—Labor and Industry, Public Welfare, Community and Economic Development, Education, and Aging—jointly administered an industry-specific training grant initiative that primarily funded training for low-wage health care workers. This joint effort represented a new approach for Pennsylvania, because previously the economic development office was responsible for training that was tailored, or customized, to employers. Under this joint program, a state committee with representatives from each of the five departments reviewed grant proposals and each agency funded a portion of approved grants. Finally, several states had reorganized their workforce responsibilities and funding, either by consolidating workforce development and economic development responsibilities or combining responsibilities for WIA and TANF funds. For example, Montana and West Virginia transferred WIA responsibilities and funding from the workforce office to the economic development office. According to state officials, this approach was intended to better align and integrate workforce and economic development goals for the state. In Texas, the workforce commission— which was created in 1995 to consolidate 10 agencies and 28 programs— was responsible for WIA and TANF block grants, among others. In Florida, a public-private partnership, governed by the state’s workforce board, became responsible in October 2000 for all workforce programs and funds in the state, including WIA, TANF, and Welfare-to-Work grant funds; this shift was intended to create a better link between workforce systems and businesses in the state. Few state officials or local workforce boards were able to report the number of low-wage workers who participated in training, for various reasons. For example, some officials told us they did not categorize training participants by wage. Other officials reported that, although they targeted low-wage workers for training, they did not categorize training participants by employment status. Although states we contacted could not always provide us with the number of low-wage workers participating in training, 13 of 16 states we contacted reported that they funded training targeted to low-wage workers. Additionally, when WIA funds are limited, states and local areas must give priority for adult intensive and training services to recipients of public assistance and other low-income individuals. Local workforce boards reported that WIA and other federal funds were the most common source of funds used to support employed worker training. Federal funding for these training efforts included WIA funding— both local and the state set-aside portion—TANF funds, and local Welfare- to-Work funds. (See fig. 2.) In addition, local boards described various other important funding sources such as Labor’s demonstration grants for training employed workers and the federal skills training grants intended to train workers in high-demand occupations. For those local workforce boards spending funds specifically for training employed workers, their allocation of local WIA funds most often paid for these training efforts, and more reported using local WIA funds in program year 2001 than in the previous year. 
However, while nearly all workforce boards responding to our survey were aware that WIA allowed funds to be used for training employed workers, some reported that there were too many priorities competing for the WIA funds. Two local officials also noted that the federal funds allocated to states under WIA—the state set-aside funds—in their states were awarded competitively, which made it difficult to consistently serve employed workers because they were uncertain that they would receive these grants in the future. Local workforce boards also combined funding from several sources—including federal, state, local, and foundation support—to train employed workers. For example, one workforce board in Pennsylvania combined $50,000 in funds from the state WIA set-aside with about $1.8 million from the state's community and economic development department to fund such training. Although financial support from local entities or foundations was available to a lesser extent, some workforce boards were able to mix these with funds from other sources. For example, in California, one workforce board funded training for employed workers with a combination of foundation grants and fees for services from training for employers in addition to TANF funds, Welfare-to-Work and other competitive grants from Labor, and state funds. States reported that WIA and other federal funds were the most common sources of funding used for training employed workers. (See fig. 3.) Twelve of the 16 states we contacted used three or more sources of funds for this purpose. Of the 16 states we contacted, 13 used their WIA state set-aside funds for training employed workers. For example, in Texas, nearly $11 million was awarded competitively to 10 local workforce boards, and the state projected that over 9,000 employed workers would receive training. Eleven states also used TANF funds to train employed workers. States also reported using state general revenue funds, funds related to Unemployment Insurance (UI) trust funds, such as penalty and interest funds or add-ons to UI taxes, and funds from other sources such as community development block grants or state lottery funds. (See table in app. III.) In their training initiatives for employed workers, states and local workforce boards focused on training that addressed specific business needs and emphasized certain workplace skills. States and local workforce boards gave priority to economic sectors and occupations in demand, considered economic factors when awarding grants, and funded training that was tailored or customized to specific employers. States and local workforce boards focused most often on training provided by community or technical colleges that emphasized occupational skills and basic skills. Most of the 16 states we contacted focused on certain economic sectors or occupations in which there was a demand for skilled workers. Twelve states had at least one office, usually the economic development office, which targeted the manufacturing sector for training initiatives. States also targeted the health care and social assistance sector (which includes hospitals, residential care facilities, and services such as community food services) and the information sector (which includes data processing, publishing, broadcasting, and telecommunications). 
New York took a sector-based approach to training by funding grants to enable employees to obtain national industry-recognized certifications or credentials, such as those offered through the computer software or plastics industries. Other training programs focused on occupations in demand. For example, in Louisiana, two state offices funded training that gave preference to occupations with a shortage of skilled workers, such as computer scientists, systems analysts, locomotive engineers, financial analysts, home health aides, and medical assistants. Of the 148 local workforce boards that specifically funded training for employed workers in 2001, the majority of workforce boards targeted particular economic sectors for training these workers. As with the states, most often these sectors were health care or manufacturing. (See fig. 4.) For example, workforce boards we visited in Florida, Minnesota, Oregon, and Texas became involved in funding or obtaining funding for local initiatives to train health care workers, such as radiographers and certified nursing assistants, that hospitals needed. Some states considered local economic conditions, such as unemployment rates, in their grant award criteria in addition to, or instead of, giving priority to certain economic sectors and occupations. For example, California’s Employment Training Panel must set aside at least $15 million each year for areas of high unemployment. Similarly, in Illinois and Indiana, the state economic development offices considered county unemployment or community needs in awarding training funds. Florida’s workforce training grants gave priority to distressed rural areas and urban enterprise zones in addition to targeting economic sectors. In addition, most state economic development offices (13 of 16) and more than half of the state workforce development offices (9 of 16) we contacted funded training that was tailored or customized to specific employers’ workforce needs. For economic development offices, such customized training was not new: these offices have typically funded training for specific companies as a means of encouraging economic growth within their states, and in some cases have done so for a long time. For example, California has funded training tailored to specific employers’ needs since 1983 through its Employment Training Panel. This program spent $86.4 million in program year 2000 to train about 70,000 workers; nearly all of them were employed workers according to state officials. However, for many state workforce development offices, funding customized training was a shift in their approach to workforce training, one that could strengthen the links between employees and jobs. With customized training, local employers or industry associations typically proposed the type of training needed when they applied for funding and often selected the training providers. Examples of customized training initiatives sponsored by workforce development offices include the following: In Indiana, the state workforce office has sponsored a high-skills, high- wage training initiative since 1998 to meet employers’ specific needs for skilled workers in information technology, manufacturing, and health. This effort is part of a statewide initiative for lifelong learning for the existing workforce. In Hawaii, the workforce office established a grant program for employer consortiums to develop new training that did not previously exist in the state. 
In Louisiana, the workforce office has funded a training program customized for employers who had been in business for at least 3 years. It required that the company provide evidence of its long-term commitment to employee training. In the states we contacted, many customized training programs required that grant applicants—usually employers—create partnerships with other industry or educational organizations. For example, Oregon’s workforce development office required local businesses to work with educational partners in developing grant proposals. One local workforce board we visited in Oregon collaborated with a large teaching hospital and its union to obtain funding for training hospital employees, and local one-stop staff partnered with nursery consortia and community colleges to obtain funds to upgrade the skills of agricultural workers. Similarly, in its high-skill training grant program, New York’s workforce development office required employers to form partnerships with labor organizations, a consortium of employers, or local workforce investment boards. In at least 11 of the 16 states we contacted, the programs also required employers to provide matching funds for training employed workers, which can help offset costs to the state for training as well as indicate the strength of the employers’ commitment to training. States that had requirements for matching funds—often a one-for-one match—included Indiana, Minnesota, Montana, New Hampshire, New York, Oregon, Pennsylvania, Tennessee, Texas, Utah, and West Virginia. Utah’s economic development office required a lower match from rural employers, and Indiana’s match varied case-by-case. Sometimes states required other kinds of corporate investments as a condition for obtaining funds for training employees. For example, in Tennessee, companies participating in a job skills training program for high technology jobs were required to make a substantial investment in new technology. In addition, several states included certain requirements in their eligibility criteria to address potential concerns about whether public funds were being used to fund training that businesses might otherwise have funded themselves. For example, in Louisiana and West Virginia, the workforce office requires employers to provide evidence satisfactory to the office that funds shall be used to supplement and not supplant existing training efforts. Although states reported funding many types of training for employed workers, occupational skills training and basic skills training were the most prevalent. Fifteen of the 16 states we contacted funded occupational skills training—such as learning new computer applications—for employed workers. In Tennessee, for example, the economic development office spent more than $27 million of state funds in program years 2000 and 2001 on a job skills training initiative for workers in high-skill, high-technology jobs, according to a state official. Nearly all states also reported funding basic skills training, including in basic math skills and ESL, for employed workers with low levels of education. For example, Texas funded ESL training in workplace literacy primarily for Vietnamese and Spanish speaking workers participating in health care training. Local workforce boards also reported funding many types of training; however, occupational skills training was most frequently provided to employed workers. (See fig. 5.) 
For example, of the local workforce boards that spent funds to train employed workers, in program year 2001, 90 percent funded occupational training to improve and upgrade workers’ skills. Forty-seven percent of the local workforce boards also funded, in program year 2001, basic skills training for employed workers. The next most prevalent type of training funded for employed workers was in soft skills, such as being on time for work, and 34 percent of local workforce boards funded this type of training in program year 2001. Community or technical colleges were often used to train employed workers, according to both state and local officials we contacted. For example, 78 percent of local workforce boards that spent funds to train employed workers reported that community or technical colleges were training providers in program year 2001. (See fig. 6.) State and local workforce officials also cited using private training instructors and employer-provided trainers, such as in-house trainers. In targeting training to low-wage workers, state and local officials addressed several challenges that hindered individuals’ and employers’ participation in training. Workforce officials developed ways to address the personal challenges low-wage workers faced that made participating in training difficult. In addition, workforce officials we visited identified ways to address employer reluctance to support training efforts. Despite attempts to address these issues, however, challenges to implementing successful training still exist. For example, state and local officials reported that the WIA performance measure that tracks adult earnings gain and certain funding requirements that accompany some federally funded training programs, may limit training opportunities for some low- wage workers. State and local officials developed a number of approaches to overcome some of the challenges faced by low-wage workers. They noted that many low-wage workers have a range of personal challenges—such as limited English and literacy skills, childcare and transportation needs, scheduling conflicts and financial constraints, and limited work maturity skills—that made participating in training difficult. However, many officials also reported several approaches to training low-wage workers. Offering workplace ESL and literacy programs were some approaches used by officials to address limited English and literacy skills among low- wage workers. For example, one workforce board in Minnesota used a computer software program to develop literacy among immigrant populations. Another state workforce official in Oregon reported customizing ESL to teach language skills needed on the job. In addition, some of the employers we visited provided training to their employees in their native language or taught them vocational ESL. Officials we visited in Texas offered a 5-week vocational ESL course before the start of the certified nursing assistant training program primarily to help prepare Vietnamese and Spanish speaking students who were not fluent in English. Many low-wage workers faced challenges securing reliable transportation and childcare, particularly in rural areas and during evening hours. Several state and local officials noted that assisting low-wage workers with transportation and childcare enabled them to participate in training. One program in Florida provided childcare and transportation to TANF-eligible clients. In Minnesota, local officials told us that they provided transportation for program participants. 
Participants used the agency’s shuttle bus free-of-charge until they received their second paycheck from their employer. After the second paycheck, the individual paid a fee for the shuttle and was encouraged and supported in finding transportation on their own. Providing on-site, paid, or flexible training were methods used to address scheduling conflicts and financial constraints experienced by low-wage workers. Many workforce boards that identified approaches on our survey cited various methods of providing training to low-wage workers that helped officials address some of the challenges faced by low-wage workers. These methods included offering training at one-stops or through distance learning and teleconferencing courses. For example, an employer in California paid employees for 40 hours of work, but allowed 20 hours of on-site training during that time. In addition, some hospitals permitted flexible schedules for employees who sought additional training for career advancement. Offering additional assistance and incentives were approaches identified by officials for improving low-wage workers’ limited work maturity skills such as punctuality and appropriate dress. Officials we visited in Texas reported that they helped low-wage workers develop better skills for workplace behavior. For example, they helped clients understand the need to call their employer if something unexpected happens, like a flat tire, that prevents them from coming to work. In addition, another workforce board in West Virginia reported that they provided a $50.00 incentive to the employee for perfect attendance during the first 6 weeks of work. State and local officials developed a number of ways to address the concerns of employers who were reluctant to participate in low-wage worker training. According to state and local officials, employers’ reservations about participation stemmed from different concerns, including the fears that better trained employees would find jobs elsewhere. Officials reported that other employers were hesitant to participate in low-wage worker training because of paperwork requirements or the time and expertise they believed were involved in applying for state training grants. Despite these concerns, state and local officials identified approaches to encourage employer participation. According to officials we contacted, some employers said that if their employees participated in training, they would seek jobs elsewhere. Officials addressed this perception by forming partnerships with employers and educators and offering training that corresponded to specific career paths within a company. For example, a workforce board we visited in Oregon partnered with a local nursery, a landscaping business, and a community college to train entry-level workers in agriculture and landscaping to move up into higher-skilled and better paying positions at the same company. These career paths also addressed the concern, expressed by some employers, that too few employees were qualified to fill positions beyond the entry level. Officials found other ways to alleviate employers’ fears. Officials in Oregon encouraged trainees at a hospital to stay with their current employer by requiring them to sign a statement of intent regarding training. The hospital trained employees after they signed an agreement that asked for a commitment that they remain with the employer for a specific amount of time in return for training. 
State and local officials noted that some employers were also reluctant to have their employees participate in government-funded training programs because they believed that certain data collection and reporting requirements were cumbersome. For example, state workforce officials we contacted reported that some employers found it difficult to get employees to fill out a one-page form regarding income as required to determine eligibility for certain funds, such as TANF. In an effort to ease the funding paperwork burden, state officials we contacted in West Virginia were working towards reducing the application paperwork required for employers to obtain worker-training dollars. Workforce officials also reported that some employers were hesitant to apply for federally funded training grants because they believed that they did not have the time or the expertise to apply for such grants. To address this, workforce officials we visited in Oregon worked with union representatives and training providers to co-write training grant proposals. The workforce officials we visited told us that the involvement of the union was a key factor in the training initiative’s success. Prior to this cooperative effort, the employer had not been responsive to workers’ needs and the involvement of the union helped to bridge the gap between worker and employer needs. State and federal funding requirements—such as WIA performance measures, time limits, and participant eligibility—may limit training opportunities for some low-wage workers. Under WIA, performance measures hold states accountable for the effectiveness of the training program. If states fail to meet their expected performance levels, they may suffer financial sanctions. State funding regulations for some training initiatives, such as TANF-funded projects, required the funds to be used within a specific time period. Because local areas must wait for states to allocate and disburse the funding, local officials sometimes had less than 1 year to use the funding. Finally, individuals are sometimes eligible for services based on their income, especially for TANF or WIA local funds. Depending on the level at which local areas set eligibility requirements, some low-wage workers may earn salaries that are still too high to be eligible for services provided by these training funds. WIA established performance measures to provide greater accountability and to demonstrate program effectiveness. These performance measures gauge program results in areas such as job placement, employment retention, and earnings change. (See table 1.) Labor holds states accountable for meeting specific performance outcomes. If states fail to meet their expected performance levels, they may suffer financial sanctions; if states meet or exceed their levels, they may be eligible to receive additional funds. A prior GAO report noted that the WIA performance levels are of particular concern to state and local officials. If a state fails to meet its performance levels for one year, Labor provides technical assistance, if requested. If a state fails to meet its performance levels for two consecutive years, it may be subject to up to a five percent reduction in its annual WIA formula grant. Conversely, if a state exceeds performance levels it may be eligible for incentive funds. State and local officials reported that the WIA performance measure that tracks the change in adult earnings after six months could limit training opportunities for employed workers, including low-wage workers. 
Some workforce officials were reluctant to register employed workers for training because the wage gain from unemployment to employment tended to be greater than the wage gain for employed workers receiving a wage increase or promotion as a result of skills upgrade training. For example, a state official from Indiana noted that upgrading from a certified nursing assistant to the next tier of the nursing field might only increase a worker’s earnings by 25 cents per hour. Yet, for the purposes of performance measures, workforce boards may need to indicate a change in earnings larger than this in order to avoid penalties. For example, one workforce official from Michigan reported that the performance measure requires the region to show an increase that equates to a $3.00 per hour raise. In a previous GAO study, states reported that the need to meet these performance measures may lead local staff to focus WIA-funded services on unemployed job seekers who are most likely to succeed in their job search or who are most able to make wage gains instead of employed workers. Time limits for some funding sources were a challenge for some officials trying to implement training programs, according to some state and local workforce officials. In Florida, for example, officials we visited reported that they had a state-imposed one-year time limit for using TANF funds for education and training, which made it difficult for officials to plan a training initiative, recruit eligible participants, and successfully implement the training program. Similarly, state and local officials we contacted in Oregon expressed frustration with the amount of effort required to ensure the continuation of funding for the length of their training initiative. They noted that funding for a one-year training grant for certified medical assistants and radiographers expired seven months before the training program ended. The local workforce board identified an approach to fund the training for the remainder of the program by using other funding sources. Although this workforce board was able to leverage other funds, this solution is not always feasible. Finally, several officials reported that eligibility requirements for the WIA local funds are a challenge because they might exclude some low-wage workers from training opportunities. States or local areas set the income limit for certain employment and training activities by determining the wage level required for individuals to be able to support themselves. When funds are limited, states and local areas must give priority for adult intensive and training services to recipients of public assistance and other low-income individuals. Officials on several workforce boards said that these eligibility guidelines for their local areas, particularly the income limit, made it challenging to serve some low-wage workers. For example, local workforce board officials from California indicated that they would like more flexibility than currently allowed under state WIA eligibility requirements to serve clients who may earn salaries above the income limit. The officials noted that some workers in need of skills upgrade could not be served under WIA because they did not qualify based on their income. To address this challenge, officials we visited at a local workforce board in East Texas told us that they set the income limit high enough so that they can serve most low-wage workers in their area. 
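The disincentive described above is essentially arithmetic: because the WIA measure averages each participant's change in earnings, a small raise for an employed worker pulls the average down relative to an unemployed entrant whose earnings start near zero. A minimal sketch of that arithmetic follows; the wages, hours, caseload mixes, and function are hypothetical illustrations, not the official WIA measure specification or actual wage-record data.

```python
# Hypothetical illustration of an average earnings change calculation; wages,
# hours, and caseload mixes are made up, and the real WIA measure is computed
# from state wage records, not a simple list like this.

def avg_earnings_change(participants):
    """Average change in six-month earnings, where each participant is a
    (pre_program_earnings, post_program_earnings) pair in dollars."""
    return sum(post - pre for pre, post in participants) / len(participants)

HOURS_PER_SIX_MONTHS = 26 * 40  # full-time hours in roughly six months

# An unemployed job seeker placed into a $10-per-hour job goes from $0 to
# about $10,400 in six-month earnings.
unemployed_entrant = (0, 10.00 * HOURS_PER_SIX_MONTHS)

# An employed worker whose skills-upgrade training yields a 25-cent raise
# gains only about $260 over the same period.
employed_low_wage = (9.00 * HOURS_PER_SIX_MONTHS, 9.25 * HOURS_PER_SIX_MONTHS)

print(avg_earnings_change([unemployed_entrant] * 10))                           # 10400.0
print(avg_earnings_change([unemployed_entrant] * 5 + [employed_low_wage] * 5))  # 5330.0
```

Under this accounting, a board that enrolls employed low-wage workers reports a lower average gain even though those workers advanced, which is the dynamic officials described.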
As of program year 2001, many states and local workforce boards were beginning to make use of the flexibility allowed under WIA and welfare reform to fund training for employed workers, including low-wage workers. They used WIA state set-aside funds and local funds, as well as TANF and state funds, as the basis for publicly funded training for employed workers. In addition, they considered business needs in determining how these funds were used to train employed workers. Consequently, training for employed workers could better reflect the skills that employers need from their workforce in a rapidly changing economy. In addition, such skills may help employees better perform in their jobs and advance in their careers. Training for employed workers is particularly critical for workers with limited education and work skills, especially those earning low wages. For such workers, obtaining training while employed may be critical to their ability to retain their jobs or become economically self-sufficient. While training low-wage workers involves particular challenges, workforce and other officials have developed ways to implement training initiatives for low-wage workers that may help mitigate some of these challenges. This is especially necessary in the economic downturn following the boom in the 1990s when TANF and WIA were created. However, WIA’s performance measure for the change in average earnings may create a disincentive for states and local workforce boards to fund training for employed workers because employed workers, particularly low-wage workers, may be less likely than unemployed workers to significantly increase their earnings after training. To the extent that state and local workforce investment areas focus on unemployed workers to ensure that they meet WIA’s performance measure for earnings change— and thereby avoid penalties—employed workers, and especially low-wage workers, may have a more difficult time obtaining training that could help them remain or advance in their jobs. As currently formulated, this performance measure supports earlier federal programs’ focus on training unemployed workers and does not fully reflect WIA’s new provision to allow federally funded training for employed workers. To improve the use of WIA funds for employed worker training, we recommend that the Secretary of Labor review the current WIA performance measure for change in adult average earnings to ensure that this measure does not provide disincentives for serving employed workers. For example, Labor might consider having separate average earnings gains measures for employed workers and unemployed workers. We provided the Departments of Labor and Health and Human Services with the opportunity to comment on a draft of this report. Formal comments from these agencies appear in appendixes IV and V. Labor agreed with our findings and recommendation to review the current WIA performance measure for change in the adult average earnings to ensure that the measure does not provide disincentives for serving employed workers. Labor stated that, in May 2002, the department contracted for an evaluation of the WIA performance measurement system and noted that one of the objectives of the evaluation is to determine the intended and unintended consequences of the system. Labor believes that GAO’s suggestion to have separate measures on earnings gains for employed workers would be an option to consider for improving WIA performance. 
HHS also agreed with the findings presented in our report and noted that the information in GAO’s report would help states develop and enhance appropriate worker training programs, and provide services and supports that address the barriers to such training. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time we will send copies of this report to relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix IV. To provide the Congress with a better understanding of how states and local areas were training employed workers, including low-wage workers, we were asked to determine (1) the extent to which local areas and states provide assistance to train employed workers, including funding training; (2) the focus of such training efforts and the kind of training provided; and (3) when targeting training to low-wage workers, the approaches state and local officials identified to address the challenges in training this population. To obtain this information, we conducted a nationwide mail survey of all local workforce investment boards, conducted semistructured telephone interviews with state officials, and visited four states. We conducted a literature search and obtained reports and other documents on employed worker training from researchers and federal, state, and local officials. To obtain information about the federal role in employed worker training, we met with officials from the departments of Labor, Health and Human Services (HHS), and Education. In addition, we interviewed researchers and other workforce development training experts from associations such as the National Governors’ Association, National Association of Workforce Investment Boards, U. S. Chamber of Commerce, and American Society for Training and Development. To document local efforts to train employed workers, we conducted a nationwide mail survey, sending questionnaires to all 595 local workforce boards. We received responses from 470 boards, giving us a 79 percent response rate. Forty-five states had response rates of 60 percent or more, and 17 states, including all states with a single workforce board, had response rates of 100 percent. The mailing list of local workforce boards was compiled using information from a previous GAO study of local youth councils, and directories from the National Association of Workforce Investment Boards and the National Association of Counties. The survey questionnaire was pretested with 6 local workforce boards and revised based on their comments. Surveys were mailed on April 24, 2002, follow- ups were conducted by mail and phone, and the survey closing date was August 16, 2002. We reviewed survey questionnaire responses for consistency and in several cases contacted the workforce boards to resolve inconsistencies but we did not otherwise verify the information provided in the responses. In the survey, we collected data for the WIA program years 2000 (from July 1, 2000—June 30, 2001) and 2001 (from July 1, 2001-June 30, 2002) so that we could compare and perceive trends. 
We analyzed these data by calculating simple statistics and by performing a content analysis in which we coded responses to open-ended questions for further analysis. Because our national mail survey did not use probability sampling, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the characteristics of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such non-sampling errors. For example, survey specialists in combination with subject matter specialists designed our questionnaire; we pretested the questionnaire to ensure that questions were clear and were understood by respondents; and to increase our response rate for the mail survey, we made a follow-up mailing and called local workforce investment boards that did not respond by a specified date. To determine state efforts to train employed workers, including low-wage workers, we conducted semistructured telephone interviews in 16 judgmentally selected states with state officials responsible for workforce development, economic development, and TANF funds used for education and training. We selected these states in part because they were geographically dispersed and represented about one-half of the U.S. population. In addition, we selected these states because between 1998 and 2001, most of them used federal funds available for training employed workers, including demonstration and planning grants, which potentially indicated the state’s interest in training these workers. Thirteen of the selected states received States’ Incumbent Worker System Building Demonstration Grants in 1998 from the Department of Labor; 10 of the selected states were identified in previous GAO work as having used WIA state set-aside funds for current worker training, and 8 of the selected states were among those receiving Employment Retention and Advancement (ERA) demonstration grants from the Department of Health and Human Services. (See table 2.) In each state, we interviewed state officials responsible for workforce development and economic development. We also interviewed state officials responsible for TANF funds used for education and training to obtain information about training for low-wage workers. To identify these state officials, we initially called the state contact for the WIA program. These officials then provided us with the names of officials or their designees who represented workforce development and economic development perspectives in their state. We similarly identified state officials responsible for TANF funds used for education and training. Since states structure their programs and funding differently, sometimes state officials we interviewed were located in different agencies while others were located in different offices within the same agency. For this reason we used the term “office” throughout the report to represent their different perspectives. We used survey specialists in designing our interview questions and pretested them in several states to ensure that they were clear and could be understood by those we interviewed. 
In our interviews, we asked state officials for information about training efforts for the program year 2000, which ended on June 30, 2001, and asked if there were any significant changes in program year 2001, which ended June 30, 2002. Our interviews with state officials were conducted between March and October 2002. In analyzing our interview responses from state officials, we calculated frequencies in various ways for all close-ended questions and arrayed and analyzed narrative responses thematically for further interpretation. We did not independently verify data, although we reviewed the interview responses for inconsistencies. To obtain in-depth information about the challenges that local officials have experienced in developing and implementing training programs specifically for low-wage workers, and promising approaches they identified to address these challenges, we made site visits to four states– Florida, Minnesota, Oregon, and Texas. We selected these four states for site visits to provide geographic dispersion and because federal and state officials and other experts had identified these states as having specific efforts for training employed workers, especially initiatives to help low- wage workers retain employment and advance in their jobs. Furthermore, each of the four states received federal HHS Employment Retention and Advancement grants. In our view, these demonstration grants served as indications of the state’s interest in supporting job retention and advancement, including training, for low-wage workers. We visited a minimum of two localities in each state, representing a mix of urban and rural areas in most cases. We chose local sites in each state on the basis of recommendations from state officials about training initiatives with a low- wage focus. Teams of at least three people spent from 2 to 4 days in each state. Typically, we interviewed local officials, including employers, one- stop staff, local workforce board staff, and training providers such as community colleges and private training organizations. We toured training facilities and observed workers and students receiving training. We also obtained and reviewed relevant documents from those we interviewed. (See table 3.) We reviewed surveys and telephone interview responses for consistency but we did not otherwise verify the information provided in the responses. Our work was conducted between October 2001 and December 2002 in accordance with generally accepted government auditing standards. Appendix III: Information on State Funding Sources While these states were awarded Employment Retention and Advancement grants from HHS, state officials we contacted did not identify these grants as sources of funding for employed worker training. Natalie S. Britton, Ramona L. Burton, Betty S. Clark, Anne Kidd, and Deborah A. Signer made significant contributions to this report, in all aspects of the work throughout the assignment. In addition, Elizabeth Kaufman and Janet McKelvey assisted during the information-gathering segment of the assignment. Jessica Botsford, Carolyn Boyce, Stuart M. Kaufman, Corinna A. Nicolaou, and Susan B. Wallace also provided key technical assistance. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. High-Skill Training: Grants from H-1B Visa Fees Meet Specific Workforce Needs, but at Varying Skill Levels. GAO-02-881. 
Washington, D.C.: September 20, 2002. Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002. Workforce Investment Act: Coordination between TANF Programs and One-Stop Centers Is Increasing, but Challenges Remain. GAO-02-500T. Washington, D.C.: March 12, 2002. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA's Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000. Welfare Reform: Status of Awards and Selected States' Use of Welfare-to-Work Grants. GAO/HEHS-99-40. Washington, D.C.: February 5, 1999.
Although training for employed workers is largely the responsibility of employers and individuals, the Workforce Investment Act (WIA) allowed state and local entities to use federal funds for training employed workers. Similarly, welfare reform legislation created Temporary Assistance for Needy Families (TANF) block grants and gave states greater flexibility to design training services for TANF clients to help them obtain and retain jobs. To better understand how the training needs of employed workers, including low-wage workers, is publicly supported, GAO was asked to determine (1) the extent to which local areas and states provide assistance to train employed workers, including funding training; (2) the focus of such training efforts and the kind of training provided; and (3) when targeting training to low-wage workers, the approaches state and local officials identified to address challenges in training this population. Nationwide, two-thirds of the 470 local workforce boards responding to our survey provided assistance to train employed workers, such as partnering with employers to develop training proposals or funding training. Nearly 40 percent specifically budgeted or spent funds on training these workers. The number of boards that reported funding training for employed workers varied by state, but most states had at least one workforce board that targeted funds on such training. At the state level, all 16 states that GAO contacted also funded training for employed workers. These states and local workforce boards reported funding training that addressed specific business and economic needs. Although many types of training for employed workers were funded, most often occupational training to upgrade skills, such as learning new computer applications, and basic skills training, such as in English and math, were emphasized and community or technical colleges were most frequently used to provide these services. In targeting training specifically for low-wage workers, state and local officials identified approaches to challenges that hindered individuals' and employers' participation in training. Officials developed approaches to address some of the personal issues that low-wage workers face that made participating in training difficult. They also developed ways to gain support from employers who were reluctant to participate in low-wage worker training, such as by partnering with employers to develop career paths that help retain employees within companies. However, officials reported that challenges to implementing successful training still exist. For example, they explained that the WIA performance measure that tracks the change in adult earnings after 6 months could limit training opportunities for employed workers, including low-wage workers. The wage gain for employed workers would not likely be as great as that for unemployed job seekers, and this might provide a disincentive to enrolling employed workers into training because their wage gain may negatively affect program performance.
You are an expert at summarizing long articles. Proceed to summarize the following text: An NSA is a customized contract between USPS and a specific entity— often a mailer or foreign postal operator—typically lasting a year or more. NSAs provide customer-specific rates—generally lower prices on specific mail products—in exchange for meeting volume targets and mail preparation requirements. The goal of these agreements is generally to encourage additional mail volume and revenue. For example, an NSA may provide a postage rate discount, paid to the mailer as a rebate at the end of a fiscal year, for all mail volume above a specific threshold. The Postal Accountability and Enhancement Act (PAEA) authorized USPS to create NSAs for two discrete categories of mail products, market dominant and competitive, as outlined in table 1. The market dominant category includes products for which USPS has a monopoly or would be able to exercise substantial market power, such as First-Class Mail and Standard Mail. Competitive products are all other types of mail, and include primarily shipping services such as Priority Mail, Express Mail, and Parcel Select. The legal requirements for NSAs differ based on whether the postal products are market dominant or competitive. PAEA requires market dominant NSAs to improve the net financial position or enhance the performance of operational functions of USPS so long as the agreement does not cause unreasonable harm to the marketplace. Also, market dominant product NSAs must be made available to “similarly situated mailers.” PAEA requires competitive NSAs, as well as competitive products in general, to cover their attributable costs, meaning they must generate more revenue than the costs attributable to delivery of the products, such as the labor involved in handling that mail. Further, competitive products overall, including NSAs, must contribute at least 5.5 percent of USPS’s institutional costs—that is, overhead costs not directly related to the delivery of products. As directed by PAEA, PRC issued final regulations in 2007 that established procedures for its reviews of competitive and market dominant NSAs, as summarized in table 1. As with all postal rate changes, USPS must obtain approval from PRC prior to implementing NSAs. PRC has approved all NSAs proposed by USPS through fiscal year 2012. PRC also reviews NSAs after implementation for compliance with regulatory criteria, in its Annual Compliance Determination Report. To increase or sustain mail volume and revenue, USPS has also provided short-term discounts, called sales or promotions, on specific mail products for groups of mailers, in contrast to NSAs, which are agreements with individual mailers for longer periods. Sales—often called price incentive programs—have sought to increase, or curb the decline of, mail volume by temporarily offering a discount (paid through a rebate) to mailers whose mail volume exceeds a predetermined volume threshold during a specific period. Some sales were offered in the summer, when USPS stated it had excess capacity in its system. These sales were designed to generate revenue during the sale period and not necessarily to have long-term benefits. After 2010, USPS began offering promotions instead of sales, which also provide temporary discounts, but seek to increase the long-term value of mail by, for example, integrating mobile technology into mailers’ advertising campaigns. 
As with all postal rate changes, USPS must obtain approval from PRC prior to implementing sales and promotions. PRC reviews whether the proposed sales and promotions meet postal rate regulations that include several qualifying factors such as whether sales and promotions help assure adequate revenues for USPS. See appendix II for a full list of these objectives and factors of postal rate regulation. The number of NSAs, sales, and promotions has increased in most years since the enactment of PAEA. There were no new NSAs approved following the enactment of PAEA until PRC regulations governing NSAs were issued in October 2007. As seen in table 2, the majority of NSAs have been with competitive products. Starting in fiscal year 2011, USPS began using “umbrella” products that allow multiple mailers to agree to similar NSAs. As a result, the total number of NSA-product requests for approval in the table below appears to decline in 2011 and 2012, when in fact the total number of individual contracts with mailers has continued to grow. The first sales were offered in fiscal year 2009, in part as a response to the decline in mail volume resulting from the recession, and USPS has since offered a variety of promotions (see table 3 below). USPS data show that revenue generated from NSAs, sales, and promotions has generally increased each year since the enactment of PAEA, with most of the revenue generated by competitive product NSAs. We cannot report the specific revenue generated by competitive NSAs because of the proprietary nature of data related to competitive products. However, the total revenue generated as part of all NSAs increased over 240 percent from fiscal year 2009 to 2012, though it remains a small portion of USPS’s total revenue (see fig. 1). Market dominant NSAs generated a relatively small portion of this revenue, partly because there have been few such agreements. Beyond NSAs, sales and promotions have also generated limited revenue since the first sale in 2009. As discussed below, it is not clear how much net revenue USPS has generated from market dominant NSAs or sales and promotions. Since the enactment of PAEA, the number of competitive NSAs has grown substantially. PRC has approved 327 domestic and international competitive NSA product requests through fiscal year 2012. A number of these NSA requests are actually “umbrella” products that include numerous individual contracts, all with similar terms. Counting these individual contracts separately illustrates the substantial number of NSAs, with 446 domestic and international competitive NSAs active in fiscal year 2012 alone. According to USPS officials and a mailer we spoke with, the increased number of competitive NSAs was due mainly to increased experience with NSA contracts and product enhancements. For example, according to these officials, USPS and PRC processes associated with developing competitive NSAs have become more efficient as a result of improved costing techniques and additional experience developing contracts with mailers. USPS and PRC also worked together to develop umbrella products that allow multiple mailers to agree to similar discounts for related mail products. Product enhancements may have also increased USPS’s ability to attract more business with NSAs. For example, officials from USPS and a mailer we spoke with noted that USPS’s ability to track packages’ transit times and its delivery performance improvements for parcels have made USPS products more attractive to customers. 
The financial results of competitive product NSAs are not reported publicly, but according to PRC, most such agreements have covered their costs, and according to USPS, these agreements have generally been successful in enhancing revenue. According to PRC, all domestic competitive NSAs have complied with the legal requirements, including that they generate revenue that covers their attributable costs. Four international competitive NSAs in fiscal year 2012, however, did not cover their costs. According to PRC, the international competitive NSAs that did not cover costs were projected to cover costs when USPS filed its request. Although competitive NSAs are collectively profitable, these agreements generate a small portion of USPS's total revenue and help cover less of USPS's institutional costs than market dominant products. Competitive products overall, including NSAs, generate a relatively small part of USPS's total revenue because they generally involve much lower mail volumes than market dominant products. Additionally, total revenue from competitive products covers less of USPS's institutional costs than the revenue from the two major market dominant products, First-Class Mail and Standard Mail. USPS has implemented few market dominant NSAs. USPS has been granted approval by PRC to implement two domestic, market dominant NSAs since the enactment of PAEA, though only one of these was active as of May 2013. In fiscal year 2012, one market dominant domestic NSA and eight market dominant international NSAs were active. USPS has implemented few such NSAs in part because of the decline in demand for market dominant mail products, as discussed further below. Domestic, market dominant NSAs have likely generated limited, if any, net revenue (see fig. 2). Most of these agreements were implemented prior to the enactment of PAEA. According to USPS, all domestic market dominant NSAs have generated net revenue of $68.5 million to date. However, PRC, using a different methodology that is discussed below, estimates a net loss of $11.8 million for all domestic market dominant NSAs. For example, the USPS and PRC estimates of net revenue for the Discover NSA approved in fiscal year 2011 differ substantially. USPS's estimate of net revenue assumes that all volume greater than the projected volume is because of the rebate. USPS developed its estimates of projected volume based on Discover's mail volume history as well as other qualitative factors. PRC used a quantitative methodology based on product elasticities—that is, the estimated sensitivity of total product mail volume to price changes—associated with the mail product involved. As a result, USPS estimated net revenue of about $24 million in the first year of the NSA with Discover, while PRC estimated USPS lost over $4 million. As discussed further below, PRC has encouraged USPS to identify a more reliable method for evaluating the impact of market dominant NSAs. International market dominant NSAs implemented since the enactment of PAEA consist mainly of agreements with foreign posts and are estimated to have lost approximately $25 million in net revenue in fiscal year 2012, according to USPS. However, PRC has noted that the volume sent under the NSAs generated smaller losses than what would have occurred if the volumes were sent under Universal Postal Union (UPU) international postal rates. According to USPS officials, agreements with foreign posts are governed by UPU rates, which are developed based on domestic postal rates. 
The U.S. has low domestic postal rates compared to other countries, and as a result, its UPU-established inbound mail rates do not allow some international NSAs to cover their costs. As PRC explained in its 2012 Annual Compliance Determination Report, the "current UPU formula adversely affects the financial performance of inbound mail." Similar to competitive domestic NSAs, the first international market dominant NSAs after PAEA were active in fiscal year 2009. USPS has implemented six sales and three promotions, all of which offered temporary discounts to mailers to sustain and grow mail volume. USPS estimates that these sales and promotions have earned maximum net revenue of about $184 million (see table 4). According to USPS data, some sales and promotions are estimated to have generated little to no net revenue during the program periods. However, according to USPS officials, these incentives have generally been successful in that they will eventually help sustain mail volume. Officials said that mailers who have taken advantage of sales and promotions have increased their overall mail volume, while those who have not participated in these programs have kept their volume steady or reduced it. Further, USPS's long-term goal for promotions is that they will enhance the value of the mail for mailers and therefore help to keep mailers in the mail beyond the program period. It is unclear to what extent sales and promotions are accomplishing this goal, as discussed below. PRC approves sales and promotions under the requirements for setting rates and has conducted after-the-fact reviews of two sales. PRC officials said they were unable to evaluate the results of the other sales or promotions because USPS did not provide sufficient data to PRC. USPS offered sales on First-Class Mail and Standard Mail products to encourage additional mail volume and revenue during a historically low-volume period. For example, USPS's first sale was held during 4 months in the summer of 2009, offering a 30 percent discount for Standard Mail on incremental volume above a threshold volume tailored to each participating mailer. USPS stated that it had the ability to offer a steep discount on any mail volume sent above what the customer mailed during the same four-month period in the summer of 2009 because it had significant excess capacity and, as a result, there was little incremental cost for USPS to mail the additional volume. USPS conducted a similar sale in the summer of 2010 because of the estimated profits from the first sale, as well as continued excess capacity. USPS estimated maximum net revenue of $126 million for the sales it has conducted, though PRC has estimated different results than USPS in every case where PRC has examined the net revenue. Specifically, PRC estimated that the 2009 sales programs—Standard Mail Volume Incentive Pricing Program and First-Class Mail Incentive Program—lost money for USPS during the time in which it offered the discount. PRC estimated a $7 million net loss for the First-Class Mail Incentive Program, and an approximately $37 million net loss for the Standard Mail Volume Incentive Pricing Program. As with domestic, market dominant NSAs, PRC used a different methodology than USPS to estimate the net revenue generated by these sales. The different methodologies used by PRC and USPS to evaluate discount programs for market dominant products are discussed further below. 
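The divergence between the USPS and PRC figures cited above for market dominant NSAs and sales turns on how much of the discounted volume each approach credits to the incentive. The sketch below is a simplified, hypothetical comparison: the prices, volumes, rebate, and elasticity are invented for illustration, they do not reflect actual Discover NSA or sales program data, and neither agency's model is this simple.

```python
# Simplified, hypothetical comparison of the two attribution approaches
# described above for a rebate paid on mail volume above a projected threshold.
# All numbers are invented for illustration.

def net_revenue(volume_credited_to_rebate, rebated_volume, price, unit_cost, rebate):
    """Contribution earned on volume judged to be new, minus rebates paid on
    all volume that qualified for the discount (dollars)."""
    return volume_credited_to_rebate * (price - unit_cost) - rebated_volume * rebate

price, unit_cost, rebate = 0.40, 0.25, 0.05        # dollars per piece
projected_volume = 100_000_000                     # volume expected without the agreement
actual_volume = 110_000_000                        # volume actually mailed
rebated_volume = actual_volume - projected_volume  # pieces above the threshold

# USPS-style attribution: all volume above the projection is due to the rebate.
usps_estimate = net_revenue(actual_volume - projected_volume, rebated_volume,
                            price, unit_cost, rebate)

# PRC-style attribution: only the increase implied by the product's assumed
# own-price elasticity counts as new; the rest would have been mailed anyway.
elasticity = -0.1
effective_price_change = -rebate / price           # rebate treated as a price cut
elasticity_volume = projected_volume * elasticity * effective_price_change
prc_estimate = net_revenue(elasticity_volume, rebated_volume,
                           price, unit_cost, rebate)

print(f"USPS-style estimate: {usps_estimate:,.0f}")  # 1,000,000
print(f"PRC-style estimate:  {prc_estimate:,.0f}")   # -312,500
```

With identical mail volumes, the first approach shows a profit and the second a loss, which mirrors the pattern in the USPS and PRC estimates discussed above.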
According to PRC, it has not evaluated the results of all sales because of corrupt or missing USPS data. USPS estimated a maximum net value of about $58 million for promotions conducted to date (see table above). USPS has offered promotions as temporary discounts on First-Class Mail and Standard Mail products to help connect physical mail to technology, which USPS assumed would increase the value of mail for mailers and help sustain mail volume and revenue in the future. For example, USPS promotions encourage retailers to print Quick Response (QR) codes on physical mail pieces, which allow consumers to scan the QR code with their mobile devices, directing them to the retailer's website, as illustrated in figure 3. USPS has used promotions to increase the value of the mail for mailers so that they sustain their use of mail. Specifically, USPS uses promotions as tools to develop innovative products, such as the use of discounts as incentives for mailers to invest in technology that may increase the value of mail over the long term. For instance, USPS implemented the 2012 holiday mobile shopping promotion for 2 weeks and gave a 2 percent discount to First-Class Mail and Standard Mail cards, letters, and flats that included a QR code. With this promotion, USPS sought to provide incentives for more mailers to use mobile barcodes to direct consumers to their websites for more information on sales. Officials told us that USPS believed direct mail that included such barcodes is more valuable because it makes the mail a multi-media experience. USPS noted that by increasing the value of its mail products it can retain as much advertising revenue as possible. An additional component of this promotion was the potential for customers to earn an additional 1 percent discount if their volume exceeded specified Priority Mail thresholds. Although USPS has estimated the financial result of promotions for the program period, it has not provided any estimates of the long-term financial results to PRC, as discussed in more detail later in this report. Opportunities exist to generate additional revenue through competitive product NSAs primarily because of merchandise shipments associated with the continued growth in e-commerce. USPS projects that total shipping and package volume will grow by about 33 percent by the end of fiscal year 2017, after increasing about 7.5 percent in fiscal year 2012. Expansion of e-commerce has been a key factor in the growth of these products, most of which are competitive. Moreover, e-commerce continues to grow and has not reached its full potential because of accessibility, returns, payment, and security concerns. Companies that can address these shortcomings may garner additional business, and USPS may be able to develop NSAs with these companies. Other factors may allow USPS to continue taking advantage of the growth in e-commerce and generate additional revenue through competitive product NSAs. First, even if USPS moves to a 5-day delivery schedule, it has proposed that it would continue to deliver packages on Saturday to maintain its advantage of delivering to every household 6 days a week without a surcharge. Second, although USPS faces private-sector competitors with entrenched market share of the package delivery business, USPS has certain competitive advantages. Although FedEx and UPS lead the high-volume business-to-business package delivery market, it can be very expensive for them to deliver single items to residential addresses, particularly in rural areas.
Along with such "last mile" delivery advantages, USPS also has special access to some large residential buildings. While the growth and opportunities associated with competitive products are substantial, additional growth is not likely to offset declines in other products. Competitive products taken as a whole are a modest piece of USPS's total revenue and generate relatively low profits compared to the most profitable market dominant products, First-Class Mail and Standard Mail (see fig. 4). Even with robust growth in competitive products, including NSAs involving those products, it is extremely unlikely that this additional revenue will offset the projected declines in First-Class Mail and other products. USPS's ability to generate additional revenue from competitive product NSAs may also face challenges because of the length of the process to develop NSAs. The USPS Office of Inspector General (OIG) reported in 2011 that, despite improvements, the preparation and review process for new product approvals puts USPS at a competitive disadvantage in terms of speed to market. Three mailers we spoke with that had NSAs with USPS said that the time it took to develop and obtain approval for NSAs was long when compared to negotiating contracts with USPS's competitors, and three other mailers also described the process as lengthy (see fig. 5). Four of the mailers we spoke with that had not developed NSAs with USPS also told us that they perceive the process of developing NSAs as burdensome, which deters them from pursuing such agreements. USPS officials said that they employ a "risk-based" process for evaluating proposed NSAs, which involves differing levels of scrutiny depending on the size of the proposed agreement. Specifically, USPS has a multi-step internal process for developing and approving competitive NSAs. First, the agreement is generally negotiated by sales representatives using costing templates, which allow them to develop agreements that are estimated to cover the costs of the particular product involved. USPS's finance office then examines each agreement to ensure that it is projected to cover its costs, and USPS conducts a "business evaluation" to ensure that the agreement is likely to generate profit for USPS. In addition, USPS's Law Department reviews an agreement throughout its development. USPS officials noted that carrying out business evaluations can be difficult because of the availability of the data and the ability to turn the analysis around quickly. Competitive NSAs are also authorized by the USPS Board of Governors, subject to internal USPS review, as well as review by PRC. The officials said that the competitive, dynamic nature of the marketplace requires them to "go to market" quickly, which has to be balanced with the review process to ensure agreements generate profit for USPS. USPS officials noted that the PRC review process for competitive product NSAs offers competitors the opportunity to undercut USPS's price. In 2011, PRC noted that "mailers have expressed concerns about the time and expense associated with NSAs" but concluded that "[e]xperience suggests that the time and effort required to put an NSA into effect is due, in greater part, to negotiating with the Postal Service and internal Postal Service review and approval rather than to the Commission's limited regulatory review." Two mailers we spoke with also noted that they spent the majority of the time developing NSAs with USPS, not waiting for a PRC review.
USPS officials noted that the time and effort that they spend on internal review and approval for international NSAs in particular are largely a result of the PRC's Rules of Practice. If the regulatory review were further streamlined, according to USPS, the time and effort needed to develop NSAs would be substantially reduced. PRC officials noted that as USPS and PRC have gained experience with competitive NSAs and streamlined the process, the average time has steadily decreased. USPS has taken actions to streamline the process for developing competitive NSAs. First, to expedite and simplify the review of some competitive NSAs, USPS and PRC developed "umbrella" products. These products allow USPS to enter into NSAs that fall within a range of prices. A mailer may enter into such an NSA without pre-implementation review by PRC. According to USPS officials and PRC, this structure has facilitated the development of many NSAs, while maintaining an appropriate level of oversight. Second, USPS officials employ risk-based internal reviews, submitting NSAs with larger potential revenues to greater internal scrutiny than those with more limited potential before they are provided to PRC for review. Finally, USPS has developed costing "templates" for its sales force. These templates include mail products' attributable costs, facilitating the sales force's ability to offer discounts to mailers that allow USPS to at least cover its costs with the NSA. To further streamline the process for developing NSAs, USPS has advocated for "after the fact" reviews of all competitive product NSAs. These reviews could improve NSAs' speed-to-market, as is currently done with the "umbrella" products discussed above. PRC officials said they have determined that it is best to allow after-the-fact reviews of certain types of NSAs only after USPS and PRC have gained experience with those types of NSAs and determined how to best improve data quality and collection. Opportunities to generate net revenue through market dominant product NSAs are limited. USPS's current estimates, as well as those of Christensen Associates on behalf of the USPS OIG, suggest that First-Class Mail has low price elasticity. These estimates mean that First-Class Mail volume is relatively insensitive to price changes and that recent volume declines are not related to the price of the postal products but to other factors, such as the lower cost of electronic communication. As a result, many mailers are not likely to respond to price decreases, such as discounts in NSAs, with additional mail volume, or to price increases with less volume. USPS's estimates also suggest that Standard Mail has relatively low price elasticity. As a result of these price elasticities, price increases for some market dominant products may actually generate more revenue than discounts in market dominant NSAs. Indeed, it is likely that First-Class Mail as a whole could weather higher prices, according to the USPS OIG. In theory, an attractive target for price increases would be products with low price elasticities, as modest price changes would likely have relatively minor effects on volume. As USPS commented in 2011, it might be rational, in some cases, to increase prices of profitable products with low elasticities. First-Class Mail, and to a lesser extent Standard Mail, are highly profitable for USPS and have low price elasticities.
There may therefore be additional revenue potential in the remaining First-Class Mail and Standard Mail volume, and capturing this intrinsic value by increasing prices is a common business practice. However, there may also be a point at which rate increases are self-defeating, potentially triggering large, permanent declines in mail volume. Also, market dominant products are subject to a price cap for each class of mail, limiting the extent to which USPS can increase the prices on, for example, First-Class Mail. USPS faces the difficulty of determining whether market dominant NSAs will increase volume and revenue. To show that market dominant NSAs improve the net financial position of USPS (i.e., create net revenue), PRC requires USPS to provide details about the expected improvements in USPS revenue resulting from any proposed NSA. Estimating the net revenue generated by an NSA depends on accurately estimating how much mailers would mail in the absence of an agreement. Accordingly, PRC has directed that USPS provide it with details of projected mailer-specific costs, volumes, and revenues absent the NSA and as a result of the NSA. To satisfy this requirement, USPS has generally used mail volume data, as well as expectations of future economic conditions, to develop projections of mailers' future volumes. However, USPS has not described to PRC its precise methods for using past mail volume data and other qualitative factors to develop these projections. While PRC has noted that "it is incumbent upon the Postal Service to develop a quantitative approach that incorporates the factors it is using to estimate volumes," this approach can be challenging because of data limitations. One possible quantitative approach, suggested by PRC, to estimate mailer-specific volumes both in the absence of and as a result of an NSA is an elasticity model, using mailer-specific elasticities (that is, a measure of the mailer's sensitivity to price for a specific product). Using mailer-specific elasticities would allow for a precise estimation of volumes in the absence of, and as a result of, an NSA. However, developing mailer-specific elasticities can be very difficult. According to PRC officials, estimation of mailer-specific price elasticities depends on having many observations of a mailer's volumes at different prices, in order to use statistical models to isolate the effect of price from all other factors that influence a mailer's volume. PRC has reported that when it is not possible to develop a mailer-specific elasticity, "the system-wide average for products will generally provide useable proxies." However, as USPS has noted, the use of average product elasticities to estimate the results of NSAs, rather than mailer-specific elasticities, can be problematic, particularly when the responses of individual mailers to NSAs are very different from the average response. Further, according to a mailer we spoke with, USPS has little choice but to evaluate the projected net value of an NSA based on historical mail volumes and qualitative factors. The mailer explained that the projection of future mail volumes is inherently uncertain; even the mailer did not know how much it was going to mail in the next year.
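A minimal sketch of the elasticity-based approach described above is shown below. The price and volume history, the constant-elasticity demand form, and the log-log regression are illustrative assumptions rather than PRC's or USPS's actual models; the sketch only shows how an elasticity estimated from a mailer's history could be used to project volume with and without a discount, and why a low elasticity implies that a discount adds little volume (and, conversely, that a modest price increase would cost little volume).

```python
# A minimal sketch, assuming a constant-elasticity demand curve: volume = A * price**elasticity.
# The data and the model form are hypothetical, not an actual USPS or PRC methodology.
import numpy as np

# Hypothetical history of one mailer's average prices ($/piece) and volumes (millions of pieces).
# In practice, many observations at different prices are needed to isolate the price effect.
prices = np.array([0.25, 0.26, 0.27, 0.28, 0.29, 0.30])
volumes = np.array([10.20, 10.10, 10.00, 9.95, 9.90, 9.85])

# Estimate the price elasticity with a log-log regression; the slope is the elasticity.
elasticity, log_a = np.polyfit(np.log(prices), np.log(volumes), 1)
print(f"estimated elasticity: {elasticity:.2f}")  # a small negative number, i.e., inelastic demand

def projected_volume(price):
    """Volume implied by the fitted constant-elasticity demand curve."""
    return np.exp(log_a) * price ** elasticity

# Counterfactual comparison: revenue with and without a 10 percent NSA discount.
full_price, discounted_price = 0.30, 0.27
v_full, v_disc = projected_volume(full_price), projected_volume(discounted_price)
print(f"revenue without discount: ${v_full * full_price:.2f}M")
print(f"revenue with discount:    ${v_disc * discounted_price:.2f}M")
# With inelastic demand, the discount barely raises volume, so projected revenue falls.
```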
In a 2010 proceeding, PRC "sought suggestions from interested persons for new methods to estimate volume changes resulting from pricing-incentive programs of the Postal Service." After comments from stakeholders, PRC concluded that "it is not persuaded that the alternatives offer a demonstrable improvement over the current method." PRC encouraged USPS to identify a more reliable method for evaluating the impact of NSAs, sales, and promotions and to continue collecting data that could be used for that purpose. PRC said that the accuracy of analysis could be improved if USPS were willing to collect mailer-specific, or even industry-specific, information. The lack of such data has frustrated PRC's efforts to evaluate these programs' financial impact. Ultimately, though, the case for pursuing NSAs must be a matter of business judgment by USPS management, according to USPS. Although these data limitations may increase the risk that market dominant NSAs will lose money for USPS, they are partially mitigated by provisions in recent NSAs. USPS has implemented early-out clauses in NSAs to mitigate the risk posed by unclear projections of net revenue. For instance, the recent Discover NSA may be canceled at the end of any contract year, by either party, should the experience prove to be at odds with the parties' expectations. Further, when approving the Discover NSA, PRC estimated that the NSA was unlikely to improve USPS's net financial position but stated that "allowing this negotiated service agreement to proceed will allow management to enhance its knowledge of potential tools to slow the overall declining trend for First-Class Mail volume." However, in the 2012 Annual Compliance Determination Report, PRC recommended that USPS re-evaluate the benefits and costs of continuing the NSA if it is not realizing a net benefit. As of May 2013, USPS had not canceled the contract. Another challenge to generating additional revenue from market dominant product NSAs is that the process for developing these agreements can hinder the development of new agreements. Market dominant product NSAs are reviewed by PRC as part of a public proceeding. According to four mailers we spoke with, the fact that such NSAs go through a public process is a disincentive to developing such agreements. These mailers are sensitive about allowing any company-specific information to become public through the PRC review process. According to a 2011 USPS OIG report, transparency requirements, although prudent, make operating in a competitive marketplace difficult. PRC officials told us, though, that the transparency of these proceedings helps balance the increased pricing flexibility granted to USPS under PAEA with the need for USPS accountability. Further, PRC regulations allow USPS to file confidential or proprietary information under seal, so that it remains nonpublic. Beyond the transparency concerns, many mailers are concerned about the time and resources needed to obtain a market dominant NSA. Many mailers we spoke with expressed concern about the length of time it took to develop NSAs. Officials from Valassis told us that they spent about 2 years negotiating the company's recent market dominant NSA, and officials from Discover said their negotiation lasted about a year. According to another mailer we spoke with, such long negotiations can hinder the ability to agree on an NSA because during these time periods the marketplace can shift, changing the incentives for mailers.
PRC review times for these NSAs can also be substantial, though this time has decreased. For domestic, market dominant NSAs approved since the enactment of PAEA, the average review time was 88 days, whereas the average review time for those NSAs prior to PAEA was 214 days. A major reason for the substantial time and resources needed to develop market dominant NSAs is that many such agreements have faced substantial opposition from mailers and stakeholders. In particular, some mailers and mailer industry associations have claimed that some proposed market dominant NSAs would harm the marketplace. By statute, market dominant NSAs "may not cause unreasonable harm to the marketplace." In the proceeding for the most recent such NSA, with Valassis, a majority of commenters opposed the agreement because they claimed it would create an unfair competitive advantage for Valassis and harm the marketplace. Commenters said that the agreement would prevent other direct mail companies from competing on a level playing field, since Valassis would have a discount on its mail as part of the agreement. Some commenters also expressed concern that the NSA could negatively affect local newspapers by replacing the Sunday or weekend newspaper's preprinted advertising package—a crucial source of income, according to newspapers—with stand-alone direct mail from Valassis. Despite such opposition, though, PRC has approved both market dominant NSAs proposed since enactment of PAEA. Opportunities to generate substantial additional revenue through sales and promotions are limited because of changes in the use of mail. Sales and promotions have been used for the market dominant products First-Class Mail and Standard Mail. As noted above, though, estimates indicate that the price elasticity of demand for these market dominant products is low, and the volume for these products continues to decline. Further, the small scale of sales and promotions—often with short time frames and relatively small discounts—limits their impact for large mailers, which may be unlikely to change their mailing patterns in response to a relatively small incentive. Two mailers we spoke with maintained that USPS's sales and promotions are most effective for small and medium-sized mailers. USPS has noted that by encouraging additional mail volume and revenue, sales and promotions provide pricing flexibility to mailers and help assure adequate revenues for USPS. By statute, sales and promotions must "help achieve" several objectives, such as assuring adequate revenues, to maintain financial stability. Additionally, the system for regulating rates must take into account several factors, such as the requirement that each class of mail bear the direct and indirect costs attributable to that class of mail. (See app. II for a list of all objectives and factors.) In support of sales and promotions when filing for approval with PRC, USPS has provided estimates of the financial results during the program time period. A few of these estimates for recent promotions have projected that USPS would lose money during the program period. For example, USPS estimated that its 2011 mobile barcode promotion would reduce revenue by as much as $4.63 million. USPS has maintained, though, that promotions in particular can have value after the program period ends, so evaluating the financial effect based solely on mailer performance during the program period does not accurately reflect the true value of these programs.
Indeed, USPS has stated that the long-term goal of promotions is to enhance the value of the mail for mailers, thereby helping sustain mail volume. USPS continues to implement a variety of such promotions, with six new domestic promotions planned for calendar year 2013. According to USPS, it continues to refine the methodologies used to measure the long-term financial effects of sales and promotions, including tracking mailer behavior and surveying customers, but further data collection and analysis can be difficult. For example, without knowledge of mailers' planned mail volumes, USPS cannot precisely measure volume that would have been sent by mailers absent the sale or promotion. Further, attempts to gather data to estimate mailers' planned mail volumes can be difficult, as with market dominant NSAs. Nevertheless, USPS monitors the performance of promotion participants after the promotion period, and benchmarks their performance (i.e., mail volume) against their past performance, expected performance, and non-program participant performance. USPS is not required to provide, and has not provided, details to PRC when filing for approval on the long-term goals of promotions, the information it plans to collect in support of those goals, and the analysis it plans to perform to assess whether the long-term financial results of promotions meet the intended goals. As a result, PRC has not assessed USPS methodologies for evaluating the long-term financial results of promotions. As USPS has noted, its financial challenge leaves little margin for error. Providing detailed data collection and analysis plans to PRC before implementation of promotions would allow USPS to better justify how these incentives help assure adequate revenues. PRC's assessment of these plans, as part of its approval decisions for promotions, would also help ensure that USPS promotions have positive financial results. To achieve financial sustainability, USPS has been working to generate additional revenue to cover its costs. NSAs, sales, and promotions may help achieve this goal. Since enactment of PAEA in late 2006, USPS has made significant progress in using its increased pricing flexibility and generated billions of dollars in revenue through domestic and international NSAs. It is very unlikely, though, that additional net revenue created by NSAs will offset the revenue declines in other product areas. Additionally, the benefits, including long-term financial results, of promotions are not well understood by PRC and other postal stakeholders because USPS does not provide detailed information on its data collection and analysis plans to PRC before implementation. As a result, PRC has not had an opportunity to evaluate USPS's long-term goals and analysis plans for promotions. Though it can be difficult to collect and analyze data on the impact of promotions, given USPS's dire financial situation, demonstrating how promotions may achieve positive long-term financial results can help USPS maximize the revenue generated by those postage rate discounts. Because USPS faces a deteriorating financial situation, we recommend that the following two actions be taken to help ensure that future promotions generate net revenue for USPS: The Postmaster General should direct staff to provide specific data-collection methods and analytical processes for estimating the net financial results of promotions to PRC as part of USPS's request for PRC approval of all promotions.
The Chairman of the PRC should direct staff to evaluate USPS's data-collection and analysis plans for USPS's proposed mail promotions and discuss these evaluations in the PRC decisions for those mail promotions. We provided a draft of this report to USPS and PRC for review and comment. USPS and PRC both provided written comments in response, which are summarized below and included in their entirety in appendixes III and IV, respectively. In USPS's written response, USPS disagreed with the first recommendation and noted concerns regarding the characterizations of promotions, sales, and NSAs in the report. In separate correspondence, USPS also provided technical comments, which we incorporated as appropriate. In PRC's written response, PRC agreed with both recommendations and provided comments on NSAs. USPS stated that it disagreed with the first recommendation that USPS should provide specific data-collection methods and analytical processes for estimating the net financial results of promotions to PRC as part of USPS's request for PRC approval of all promotions. USPS stated that it does not believe the recommendation will significantly affect the PRC's review process, improve the quality of USPS's business decisions, or assure that promotions yield positive financial results. USPS noted that PRC has concluded that past promotions proposed by USPS comply with the relevant requirements, which emphasize the importance of pricing flexibility. PRC stated that it agreed with both recommendations and that it welcomes the opportunity to evaluate USPS's data-collection and analysis efforts for promotions. We continue to believe that providing additional information to PRC on the potential long-term results would allow USPS to better justify promotions, and provide PRC with valuable additional information for its evaluation. Promotions for market dominant products must comply with several statutory objectives and factors, including that they help assure adequate revenues. When filing for approval, USPS has provided information to PRC estimating that some promotions may lose money during the program period. However, as USPS has noted, promotions are designed to increase the long-term value of mail, thereby helping to sustain mail volume and revenue. Given its dire financial situation, USPS should be commended for using its pricing flexibility to try to enhance its revenue. However, USPS has not provided information to PRC demonstrating how promotions could achieve these long-term goals. Providing information about the potential long-term financial results of promotions could help PRC better evaluate whether the proposed promotions help assure adequate revenues and comply with the other objectives and factors. USPS cannot afford to implement promotions without demonstrating how they can achieve positive long-term results. USPS also stated in its letter that the report does not articulate how USPS could improve upon the methodologies it is using to conduct evaluations of promotions. We agree. We did not intend for the report to prescribe the methodologies that USPS should use to evaluate these long-term effects. Rather, the report concludes that USPS's methodologies should be made available for PRC's evaluation prior to the implementation of promotions. This review would allow PRC to better evaluate the extent to which the promotions satisfy requirements.
USPS also provided additional comments related to promotions: In USPS’s letter, USPS stated that the draft report concluded that promotions should not be offered unless USPS had “assurance” that promotions will achieve positive financial results. USPS correctly notes that no business decision is ever accompanied by a guarantee of success, and we have revised the relevant statement in our conclusions. However, as USPS agreed, sound analysis should accompany every business decision, including the implementation of promotions, particularly given USPS’s financial situation. In its letter and technical comments, USPS stated that the report should more clearly delineate the differences between promotions and sales. In particular, USPS noted that promotions are designed to help sustain mail volume and revenue over the long-term. Sales, though, are designed to generate additional mail volumes, but only during the sale period itself. We have revised the relevant text throughout this report to better distinguish between the different goals of sales and promotions. In USPS’s letter, USPS also notes that some of the costs for recent promotions have been recovered through the creation of additional price cap authority, mitigating the risk of financial losses from the most recent promotions. USPS is to be commended for seeking ways to mitigate the financial risks of any postage rate discount, as it has done with early-out clauses in market dominant NSAs. However, in a recent decision approving a promotion, PRC did not accept the price cap treatment proposed by USPS. In its letter, USPS also requested that we characterize the data provided by USPS to PRC on some sales not as “corrupt” but “incomplete” or “insufficient.” The term “corrupt” is not meant to imply that USPS intended to make data provided to PRC unusable. However, according to PRC, data provided to it by USPS on an early sale was “corrupt” and hindered PRC’s ability to evaluate the financial results of that effort. We did not modify the report related to this issue. USPS also disagreed with the statement in the report that many mailers are not likely to respond to price decreases, such as discounts in market dominant NSAs, with additional mail volume. USPS stated that market dominant NSAs can provide promising opportunities to increase mail volumes and revenues in the future. Our conclusion about market dominant NSAs is based primarily on USPS’s estimates indicating that market dominant mail products have low elasticities. These estimates are a measure of the degree to which mailers respond to price changes and alter their demand for products and services. However, these estimates are product-wide averages. To the extent individual mailers have elasticities different from the average, it may be possible to incentivize additional mail volume from those mailers through price decreases. Few mailers, though, are likely to have an elasticity different enough from the average to warrant such an agreement. PRC also clarified two points in the report related to market dominant NSAs. First, PRC noted that USPS’s methodologies for assessing the financial impact of market dominant NSAs are not considered authoritative under statutory and regulatory requirements unless and until such methodologies are accepted by PRC as “accepted analytical principles” under sections 3050.10 and 3050.1(a) of the Code of Federal Regulations, Title 39. 
Second, PRC noted that after-the-fact review of NSAs works well for Non-Published Rate contracts, but is not applicable for agreements not subject to established, specific limitations that assure consistency with applicable statutory requirements. No change to the report was necessary based on these comments. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, Postmaster General, Chairman of PRC, USPS OIG, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To describe the NSAs, sales, and promotions the U.S. Postal Service (USPS) has developed, as well as their reported financial results, we reviewed public and non-public documents as well as additional USPS data. To summarize the financial results of NSAs, we examined non-public versions of USPS's Cost and Revenue Analysis reports for fiscal years 2009 through 2012. We also reviewed non-public documents that included the volume, cost, and revenue of active competitive, domestic NSAs for fiscal years 2009 through 2012. There were no active domestic competitive NSAs in fiscal years 2007 and 2008. We also reviewed non-public documents that included the volume, cost, and revenue of active competitive, international NSAs for fiscal years 2007 through 2012. We also reviewed the data collection reports for market dominant NSAs. To confirm our summaries of the number and results of NSAs, we also obtained additional data from USPS on the number of active NSAs, by fiscal year, as well as the estimated financial results of all market dominant NSAs. We also reviewed the Postal Regulatory Commission's (PRC) Annual Compliance Determination Reports for fiscal years 2007 through 2012 to identify, where applicable, which competitive NSAs PRC determined covered their attributable costs. We also reviewed PRC's conclusions in these reports about the financial results of market dominant NSAs. To summarize the number of, and results from, USPS sales and promotions to date, we reviewed USPS documents filed with PRC requesting approval for sales and promotions as well as PRC's Annual Compliance Determination Reports. We also obtained additional data from USPS on the estimated results of sales and promotions. To put the financial results of incentives into the context of USPS's overall financial situation, we also examined other USPS documents. These included the Revenue, Pieces, and Weights reports for fiscal years 2007 through 2012 and the fiscal year 2012 Form 10-K filing. Further, we also obtained USPS projections of future mail revenue and volume. We assessed the reliability of these data sources by interviewing USPS officials. Based on this information, we determined that the data provided to us were sufficiently reliable for our reporting purposes.
We also conducted interviews with USPS and PRC officials, as well as 15 mailers that have participated in NSAs, sales, and promotions, on the financial condition of USPS and the results of those incentives, in order to enhance our understanding of the circumstances in which these incentives are developed and implemented. In these interviews we also discussed potential limitations to USPS and PRC analyses of incentives' results. See below for information on how we selected mailers to interview. To identify and assess any opportunities to generate additional revenue from NSAs, sales, and promotions, as well as challenges, if any, that could hinder their development and implementation, we conducted interviews with a variety of stakeholders, including officials from USPS, PRC, and 15 mailers that have and have not participated in USPS incentives (see list below). In order to obtain a range of perspectives on the opportunities and challenges related to NSAs, sales, and promotions, we identified mailers that have participated in NSAs involving a variety of competitive and market dominant products, and both international and domestic mail. We also identified mailers that had not participated in NSAs. Among these mailers, we interviewed both large and small mailers, defined by whether the mailer had more or less than $250 million in annual revenue in the most recent fiscal year for which data or estimates were available, as well as some that were recommended to us by a mailer association. Finally, we interviewed industry associations that represent major mailers in order to gather additional perspectives on the opportunities and challenges associated with NSAs, sales, and promotions, including the Association for Postal Commerce, Direct Marketing Association, National Newspaper Association, and Newspaper Association of America. The views of mailers and industry associations cannot be generalized to all mailers and industry associations because they were selected as part of a nonprobability sample. The mailers we interviewed were:
4imprint
Amazon
AT&T
Barnes & Noble
Canada Post
Discover Financial Services
FedEx SmartPost
Gardens Alive!
Harriet Carter Gifts
Highlights for Children, Inc.
Pitney Bowes
Quad/Graphics
UPS
Valassis
Valpak
To further identify and assess opportunities and challenges related to NSAs, sales, and promotions, we also reviewed a variety of documents. First, we reviewed the 2007 PRC regulations governing NSAs. Second, we reviewed a variety of PRC proceedings, including its 2010 proceeding to investigate methodologies for estimating volume changes due to pricing incentive programs, and its recommended decisions for all domestic, market dominant NSAs approved to date. We also examined PRC approval decisions for other NSAs in order to document the length of the PRC review process. We also reviewed the internal business evaluations conducted by USPS for a variety of domestic, competitive NSAs. Finally, we reviewed findings from relevant USPS Office of Inspector General reports. We determined that the methodologies of these reports were sufficiently reliable for our purposes. As part of the review for a proposed sale or promotion of market dominant products, PRC evaluates whether the sale or promotion satisfies the requirements of postal rate regulation. As listed below, these requirements include several objectives that the sale or promotion must be designed to achieve and several factors that PRC must take into account.
Objectives:
(1) To maximize incentives to reduce costs and increase efficiency.
(2) To create predictability and stability in rates.
(3) To maintain high quality service standards established under section 3691.
(4) To allow the Postal Service pricing flexibility.
(5) To assure adequate revenues, including retained earnings, to maintain financial stability.
(6) To reduce the administrative burden and increase the transparency of the ratemaking process.
(7) To enhance mail security and deter terrorism.
(8) To establish and maintain a just and reasonable schedule for rates and classifications, however the objective under this paragraph shall not be construed to prohibit the Postal Service from making changes of unequal magnitude within, between, or among classes of mail.
(9) To allocate the total institutional costs of the Postal Service appropriately between market-dominant and competitive products.
Factors:
(1) the value of the mail service actually provided each class or type of mail service to both the sender and the recipient, including but not limited to the collection, mode of transportation, and priority of delivery;
(2) the requirement that each class of mail or type of mail service bear the direct and indirect postal costs attributable to each class or type of mail service through reliably identified causal relationships plus that portion of all other costs of the Postal Service reasonably assignable to such class or type;
(3) the effect of rate increases upon the general public, business mail users, and enterprises in the private sector of the economy engaged in the delivery of mail matter other than letters;
(4) the available alternative means of sending and receiving letters and other mail matter at reasonable costs;
(5) the degree of preparation of mail for delivery into the postal system performed by the mailer and its effect upon reducing costs to the Postal Service;
(6) simplicity of structure for the entire schedule and simple, identifiable relationships between the rates or fees charged the various classes of mail for postal services;
(7) the importance of pricing flexibility to encourage increased mail volume and operational efficiency;
(8) the relative value to the people of the kinds of mail matter entered into the postal system and the desirability and justification for special classifications and services of mail;
(9) the importance of providing classifications with extremely high degrees of reliability and speed of delivery and of providing those that do not require high degrees of reliability and speed of delivery;
(10) the desirability of special classifications for both postal users and the Postal Service in accordance with the policies of this title, including agreements between the Postal Service and postal users, when available on public and reasonable terms to similarly situated mailers, that—(A) either—(i) improve the net financial position of the Postal Service through reducing Postal Service costs or increasing the overall contribution to the institutional costs of the Postal Service; or (ii) enhance the performance of mail preparation, processing, transportation, or other functions; and (B) do not cause unreasonable harm to the marketplace.
(11) the educational, cultural, scientific, and informational value to the recipient of mail matter;
(12) the need for the Postal Service to increase its efficiency and reduce its costs, including infrastructure costs, to help maintain high quality, affordable postal services;
(13) the value to the Postal Service and postal users of promoting intelligent mail and of secure, sender-identified mail; and
(14) the policies of this title as well as such other factors as the Commission determines appropriate.
Lorelei St. James, (202) 512-2834 or stjamesl@gao.gov. In addition to the contact named above, Teresa Anderson (Assistant Director), Ken Bombara, Kyle Browning, Colin Fallon, Imoni Hampton, Josh Ormond, Sara Ann Moessbauer, and Crystal Wesco made key contributions to this report.
For several years USPS has not generated sufficient revenues to cover its expenses. Although much focus has been on USPS's costs as a way to close the gap between its revenues and expenses, generating additional revenue is also needed. To increase mail volume and revenue, USPS has implemented NSAs, sales, and promotions with a variety of products. As requested, GAO reviewed (1) the trends and reported results of USPS's sales, promotions, and NSAs, as well as (2) any opportunities and challenges related to generating additional revenue from them. GAO reviewed USPS documents, PRC decisions, and annual reports, and interviewed officials from USPS and PRC. GAO also interviewed mailers, which were selected in part based on participation in NSAs, sales, and promotions. Their views cannot be generalized to all mailers. The U.S. Postal Service (USPS) has developed numerous negotiated service agreements (NSA), sales, and promotions since the enactment of the Postal Accountability and Enhancement Act (PAEA) in 2006, and they generate a small but growing portion of USPS total revenue. PAEA established two categories of products: "market dominant," where USPS has a monopoly, and "competitive," which includes all other products, such as shipping services. NSAs, sales, and promotions are generally designed to encourage additional mail volume and revenue through temporary discounts on specific mail products. For example, USPS has offered promotions to incentivize mailers to invest in technology that may increase the value of mail for those mailers over the long-term. No NSAs, sales, or promotions followed the enactment of PAEA until regulations were issued in late 2007. The number of NSAs, sales, and promotions has increased most years since. The revenue generated from NSAs, sales, and promotions has also increased overall. The most revenue was generated by competitive NSAs. Financial results of competitive NSAs are not reported publicly. According to the Postal Regulatory Commission (PRC), which exercises regulatory oversight over USPS, nearly all competitive NSAs have covered their costs. Market dominant NSAs generated little revenue, in part because few were done. Sales and promotions have also generated little revenue. Opportunities for increasing revenue from NSAs, sales, and promotions are primarily with competitive NSAs, though challenges may limit revenue, and it will likely not offset declines from other products. Continued growth in e-commerce is creating opportunities to generate additional revenue through competitive NSAs. Opportunities to generate additional revenue through market dominant NSAs are limited by low demand for those products. Also, it is difficult for USPS to determine whether any volume and revenue increases directly result from market dominant NSAs because it is difficult to accurately estimate mailers' future mail volume. In addition, USPS and some mailers we spoke with noted that the process for developing both market-dominant and competitive NSAs can be burdensome, hindering the development of new agreements. USPS has taken actions, though, to streamline the process for developing competitive NSAs. Opportunities for generating revenue from sales and promotions are also limited by low demand as well as limited review of the long-term financial results before implementation. USPS has noted that promotions satisfy rate requirements by, for example, helping to generate revenues for USPS. In particular, promotions are used to encourage mail volume over the long term. 
However, USPS does not provide data and analysis about the potential long-term financial results when submitting promotions to PRC for its approval. As a result, PRC does not assess the methodologies for evaluating the long-term financial results of promotions before implementation. Given USPS's financial situation, USPS should demonstrate how promotions may achieve positive long-term financial results, in order to help maximize the revenue generated by those postage rate discounts. GAO recommends that when filing for approval, USPS provide information to PRC about USPS's data collection and analysis plans for estimating the long-term financial results of promotions. GAO also recommends that PRC evaluate USPS's data collection and analysis plans for promotions as part of its review. In commenting on the report, USPS disagreed with the first recommendation, and PRC agreed with both recommendations. USPS stated it does not believe the recommendation will significantly affect the PRC's review process or improve the quality of USPS's business decisions. GAO continues to believe this recommendation has merit, as discussed in this report.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Livestock Mandatory Reporting Act of 1999 amended the Agricultural Marketing Act of 1946. The act established a livestock marketing information program to (1) provide producers, packers and other industry participants with market information that can be readily understood; (2) improve USDA price and supply reporting services; and (3) encourage more competition in these markets. Under the act, packers were required to report livestock market information that had previously been voluntarily reported and new information not previously reported to the public—such as information about contract livestock purchases. Under the voluntary program, USDA employees, referred to as reporters, gathered information daily by talking directly with producers, packers, feedlot operators, retailers, and other industry participants; by attending public livestock auctions, visiting feedlots and packing plants; and taking other actions. Under the Livestock Mandatory Reporting Act, packers were instead required to report on their cattle and hog purchases, and their sales of beef. The act also authorized USDA to require that packers report on lambs. USDA implemented the Livestock Mandatory Reporting Act by establishing a livestock mandatory reporting program to collect packers’ marketing information and disseminate it to the public through daily, weekly, monthly, and annual reports. Packers were required to electronically report hog purchases three times each day, cattle purchases twice each day, lamb purchases once daily, domestic and export sales of beef cuts twice daily, and sales of lamb carcasses and lamb cuts once daily. As of June 2005, 116 packers and importers were required to provide information under the Livestock Mandatory Reporting Act. Two branches of USDA’s AMS administered the livestock mandatory reporting program—Market News and the Audit, Review, and Compliance Branch (ARC). Market News was responsible for collecting and generating market news reports from information supplied by packers. Market News reporters gathered and reviewed this data, contacted packers to resolve any questions they had, and prepared reports. Reporters were required to ensure that they did not breach the confidentiality of packers by providing information that would allow the public to identify an individual packer. In addition to preparing reports, Market News personnel interacted with any packers that AMS believed needed to make changes in reporting to comply with the Livestock Mandatory Reporting Act. To identify compliance problems, ARC personnel audited the transaction data of packing plants three times a year. When ARC found packers that were reporting incorrectly, ARC notified the Market News reporters, who were responsible for notifying and following up with packers until the packers reported correctly. The Secretary of Agriculture was authorized to assess a civil penalty of up to $10,000 a day per violation on a packer that violated the act. AMS designed its livestock mandatory market news reporting program with elements intended to ensure the quality of its news reports. USDA officials, for example, developed a Web-based reporting system with automated and manual screening of packer transaction data and established an audit surveillance program to ensure packers reported accurately. 
However, we found that while AMS had made progress, its livestock market news program fell short of ensuring reliability because AMS reporting was not fully transparent, and AMS audits of packers revealed some problems with the quality of packers' transaction data. AMS developed a mandatory livestock market news reporting program incorporating a number of features to ensure quality. More specifically, AMS took the following steps to ensure the quality of its livestock mandatory market news reports:
AMS hired two contractors to assist in developing a rapid and reliable reporting system: Computer & Hi-Tech Management, Inc. was hired to assess the capability of the packing companies to provide electronic data, and PEC Solutions developed the computer software processes upon which the mandatory livestock reporting system is now based.
AMS and PEC Solutions developed a software system that allows packers to provide their transaction data on web-based forms or to upload completed files into the reporting system database.
PEC Solutions prepared an industry guide to give packers instructions for correctly submitting transaction data.
PEC Solutions used programmers who did not participate in developing the system to test its functioning. AMS further tested the system using simulated production data, because packers had not started reporting actual data. As a further validation step, AMS staff manually calculated data for several reports and compared that data with data generated by the system.
AMS established computer-based data security controls and computerized screening of packer transaction data to ensure it is being correctly reported.
AMS established an audit function to periodically test the accuracy of transaction data that packers submit to AMS by visiting packer facilities, checking documentation in support of reported transactions, and testing the completeness of packers' reports.
In addition, in May 2001, the Secretary of Agriculture appointed a top-level USDA team—the Livestock Mandatory Price Reporting Review Team—to review problems in its calculations of certain boxed-beef prices. In addition to reviewing that problem and making related recommendations, most of which AMS adopted, the team assessed the overall integrity and accuracy of the program. This team found that, for the most part, AMS had succeeded in gathering and reporting accurate data in a timely fashion. The team's major criticism was that AMS had not adequately tested its system to ensure it was accurately calculating data that packers had reported. Subsequently, AMS initiated further testing to ensure the accuracy of its reports. The team also found that AMS's plan for audit surveillance of packers was behind schedule due to difficulties in hiring qualified auditors. At that time AMS had conducted audits at only 19 of the 119 packer facilities it planned to reach. Since then, AMS has overcome these problems and conducted over 1,100 audits at packers' facilities.
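As a simple illustration of the kind of computerized screening of packer transaction data mentioned above, the sketch below applies a few completeness and range checks to a hypothetical submission. The field names, the price band, and the checks themselves are assumptions for illustration and are not AMS's actual edit rules; the point is only that basic automated screens can flag a record for a reporter's manual follow-up with the packer.

```python
# A minimal sketch of automated screening of reported transactions.
# Field names and thresholds are hypothetical, not AMS's actual edit checks.

def screen_transaction(tx, price_range=(50.0, 250.0)):
    """Return a list of warnings for one reported cattle transaction (price in $/cwt)."""
    warnings = []
    for field in ("head_count", "price", "purchase_type"):
        if tx.get(field) in (None, ""):
            warnings.append(f"missing {field}")
    head = tx.get("head_count")
    if isinstance(head, int) and head <= 0:
        warnings.append("non-positive head count")
    price = tx.get("price")
    if isinstance(price, (int, float)) and not price_range[0] <= price <= price_range[1]:
        warnings.append(f"price {price} outside expected range {price_range}")
    return warnings

# Example submission: one clean record and one with a suspect price and a missing field.
submission = [
    {"head_count": 120, "price": 92.50, "purchase_type": "negotiated"},
    {"head_count": 80, "price": 9.25, "purchase_type": ""},  # likely a decimal error
]
for tx in submission:
    flags = screen_transaction(tx)
    if flags:
        print(f"flag for manual review: {tx} -> {flags}")
```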
AMS was authorized to make reasonable adjustments in information reported by packers to reflect price aberrations or other unusual or unique occurrences that the Secretary determined would distort the published information to the detriment of producers, packers, or other market participants. In addition, AMS should have adhered to the Office of Management and Budget and USDA guidelines for disseminating influential statistical and financial information with a high degree of transparency about the data sources and methods, while maintaining the confidentiality of the underlying information. Moreover, AMS has recognized the usefulness of providing the public with information about the preparation of its market reports. We found that AMS reporters adjusted the transaction data that packers report in an effort to report market conditions, but this practice has not been made transparent. We observed that AMS reporters sometimes eliminated small numbers of apparently erroneous transactions, as would be expected. Significantly, however, we found that AMS reporters eliminated numerous low- and some high-priced transactions that they believed did not reflect market conditions, particularly when reporting on cattle. Our analysis shows that from April through June 2005, when livestock prices were declining somewhat, AMS reporters excluded about 9 percent of the cattle transactions that packers had reported to AMS, about 3 percent of the reported beef transactions, and 0.2 percent of the reported hog transactions. Excluding small percentages of livestock or meat transactions may have had a small effect on the range of prices that AMS reported and a negligible effect on weighted average prices. However, as the percentage of transactions excluded increased, so too did the possibility that AMS weighted average prices would be changed from what AMS would otherwise report. Table 1 provides more details about the transactions excluded during this period. In addition, our analysis shows that from May through October 2003, when cattle prices were rising and changing to greater extents, AMS reporters excluded about 23 percent of cattle transactions packers reported to AMS. Concerning hogs, during a period of rising prices between October 2003 and March 2004, we found that 0.1 percent of hog transactions were excluded from AMS reports. Because AMS reports excluded significantly more cattle transactions, we performed further analyses on them. Tables 2 and 3 show (1) information about the cattle transactions that AMS excluded from certain livestock mandatory market news reports from May through October 2003, and (2) examples of 12 days from this period showing the effects of the transactions that AMS excluded on the reported price ranges and weighted average prices. During the period, AMS reporters' decisions to exclude transactions had some effect on the cattle data we analyzed in AMS reports on about one-third of the days and almost no effect on the others. Further details of our analyses are discussed in appendix I and shown in appendix II. AMS guidance for its reporters on eliminating transactions is limited, lacking clarity and precision. These instructions advise AMS reporters to review transactions that packers have reported each day, and to eliminate certain low- and high-priced transactions. AMS's varying instructions for reporters are described in table 4.
Senior AMS supervisors review reporters' decisions to eliminate transactions, and AMS headquarters officials monitor the number of transactions that reporters exclude and the reasons why. AMS officials explained that, in general, their reviews and adjustments are intended to exclude transactions that are outside the prevailing market price ranges, and to avoid reporting ranges of prices that appear overly broad. Furthermore, Market News officials explained that this process is conducted because they believe that livestock market reports are intended to convey overall market conditions rather than precise statistics. Also, an AMS official noted that AMS Market News reporters mostly exclude low-priced transactions involving small quantities, because those transactions often involve lower quality animals or products. Concerning hogs, AMS reporters said that headquarters officials verbally instructed them, soon after the start of the program, to exclude few hog transactions. AMS headquarters officials said that these verbal instructions were provided after one or more large packers complained that it appeared AMS was excluding transactions because of price alone. Even though AMS reporters' decisions to exclude transactions modified the prices AMS reported, AMS has not adequately explained this practice to readers of AMS livestock market news. AMS's Web site does not address the subject, and AMS livestock mandatory market news reports contain no qualifying explanation of these adjustments. Some agricultural economists who study the livestock market and other industry experts we interviewed said that they were not aware of the extent of adjustments that AMS made. An AMS official explained that AMS has not previously provided public information on this process because it would be difficult to capture the nuances of AMS's report preparation in a public document. Nevertheless, AMS previously acknowledged that it may be useful to provide information to the public about the types of adjustments that it makes to its livestock mandatory market news reports. AMS officials also recognized that it would be desirable for AMS to improve its instructions for reporters and disclose more about its reporting practices to livestock market news report readers. Our review of AMS's database indicates that further analyses could provide AMS with more information about the reasons why reporters eliminate transactions, the consistency of reporting, and the extent of changes in AMS's presentation of prices. AMS's Livestock and Seed Program Deputy Administrator said that, as a result of the information we brought to his attention, he had started to improve the reporters' instructions. Since AMS reports help provide the industry with signals about when, where, and at what price to buy and sell livestock and meats, some industry participants might have made somewhat different decisions on certain days if they had had a greater understanding of AMS report content. In addition, the lack of transparency over the content and preparation of the livestock mandatory market reports may have also limited the confidence that some readers place in AMS reports.
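To illustrate how the exclusions discussed above can affect published figures, the sketch below computes a weighted average price and price range before and after dropping transactions that fall outside a chosen price band. The transactions and the band are hypothetical and do not represent AMS data or AMS's actual procedure, which as noted relies on reporter judgment rather than a fixed rule.

```python
# A minimal sketch: effect of excluding transactions on a reported price range
# and weighted average. Transactions are hypothetical (head count, price in $/cwt).

transactions = [(500, 84.0), (1200, 85.5), (300, 86.0), (150, 78.0), (40, 95.0)]

def weighted_average(txs):
    total_head = sum(head for head, _ in txs)
    return sum(head * price for head, price in txs) / total_head

def exclude_outside_band(txs, low, high):
    """Drop transactions priced outside a reporter-chosen band (illustrative only)."""
    return [(head, price) for head, price in txs if low <= price <= high]

kept = exclude_outside_band(transactions, low=80.0, high=90.0)

for label, txs in (("all transactions", transactions), ("after exclusions", kept)):
    prices = [price for _, price in txs]
    print(f"{label}: range {min(prices):.2f}-{max(prices):.2f}, "
          f"weighted average {weighted_average(txs):.2f}")
# Dropping a small share of the head count moves the weighted average only modestly
# (about 84.89 to 85.20 here) but narrows the reported price range noticeably.
```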
Once every 4 months, ARC auditors visited each of the 116 packers’ plants, or associated company headquarters, to review livestock transaction data. These audits usually included (1) a test of the completeness of the packer’s reports and (2) a detailed review of a sample of transactions to determine that each transaction in the sample was reported accurately and was supported by appropriate documentation. ARC has conducted over 1,100 audits at packers’ facilities since 2001. Detailed information was available for 844 of these audits conducted over the 36 months ending in April 2005. Table 5 contains additional information about the content of ARC audits. Of the 844 AMS audits for which data were available, 540—64 percent—identified one or more instances when it appeared that packers did not meet AMS reporting standards. The other 304 audits, or about 36 percent, did not identify any such instances. AMS audits detected a wide variety of packer reporting inaccuracies, such as the omission of livestock slaughtered, underreporting of purchases, delayed reporting of livestock purchases and meat sales, price inaccuracies, and the misclassification of transactions. While noting the frequency of AMS audit findings, AMS officials commented that packers’ reporting errors were of concern. AMS officials also said that its audit results should be considered in the context of the volume of transactions that AMS reports—compared to the hundreds of thousands of pieces of transaction data that packers reported daily, the errors identified by AMS audits were relatively few. However, our review shows that AMS findings are based on audits of a small portion of packers' transactions, and it is likely that there have also been errors in packers’ unaudited transactions. Furthermore, a closer look at 86 AMS audits completed from June through September 2004 shows that AMS identified 46 instances when 22 packers submitted incorrect transaction data that AMS classified as possibly affecting the accuracy of AMS reports. Table 6 provides examples of AMS audit findings. AMS officials said many ARC audit findings were minor and usually had little effect, if any, on the accuracy of AMS reports. They also said that, since 2001, packers had clearly improved their reporting of transactions. AMS officials said that because of the overall improvement in packers’ reporting of transactions, they reduced the frequency of audits at each packer from four to three times a year. Our review provides some support for AMS officials’ view that packers were reporting better than at the outset of the program. From May 2002 through April 2005, the number of AMS audits with findings as a percentage of total audits decreased each year, from 76 percent in 2002 to 55 percent in 2005. In addition, the average number of audit findings per audit decreased from 1.8 to 1.4 over that period. Moreover, in the first quarter of 2005, AMS audits did not identify any problems that rose to its highest level of concern. Nevertheless, AMS classified 22 percent of the problems it identified in the first quarter of 2005 as possibly having some adverse effect on the accuracy of its reports. In addition, follow-up on problems ARC auditors identified was sometimes lengthy. Our analysis of follow-up efforts by AMS on the 86 audits it conducted from June through September 2004 showed that, on average, about 85 days elapsed between the date of an AMS audit and the date AMS recorded that the packer had made the needed corrections. 
AMS reporters frequently contacted packers to convey information about the correct way for packers to report. Their outreach was prompted both by audit findings and by reporters’ reviews of the packers’ data. When recurring reporting problems arose, headquarters officials issued internal guidance to clarify proper reporting procedures for both auditors and reporters. On at least two occasions, AMS reporters provided information from this internal guidance to packers to clarify proper reporting procedures. However, some packers, including three of the largest packers, did not promptly correct reporting problems that AMS identified. Since 2002, AMS has sent 11 packers 21 letters calling the packers’ attention to apparent delays in correcting reporting issues and warning the packers that penalties might be applied should there be further delays in addressing these issues. Of these, AMS sent 8 letters to 6 packers between January 2004 and September 2005, with 6 letters involving cattle and 2 involving hogs. In addition, AMS twice levied $10,000 fines on packers, although these fines were suspended provided these packers went a year without additional violations of the Livestock Mandatory Reporting Act. As of September 2005, AMS had continuing issues with 2 of the 11 packers that received AMS warning letters. Appendix III contains additional information on the issues leading to AMS warning letters to packers. While AMS audit reports identified many problems in packers’ reporting of transactions, there are two reasons why the reports do not provide a clear basis for assessing the overall accuracy of the packers’ data that underlie AMS livestock mandatory market news reports. First, AMS did not select transactions for audit in a manner that would enable AMS to project the overall accuracy of packers’ transaction data. Second, AMS did not develop analyses that demonstrate the overall accuracy of information in its reports. We explored two approaches with AMS officials to (1) obtain better indications of the overall accuracy of packers’ transaction data and (2) better direct future AMS audits. First, because AMS audits did not provide a basis for projecting the overall accuracy of packers’ transaction data, an alternative approach, in which AMS would periodically audit a statistical sample of transactions, might provide such a basis. Second, AMS could analyze its audit results, focusing on findings of consequence and its follow-up efforts to address those findings. Such analyses could be useful for identifying the relative frequency of concerns with packers’ transaction data, the types of recurring errors, the timeliness and consistency of auditor and market news follow-up on packers’ actions to address reporting issues, and the overall effectiveness of AMS efforts to quickly resolve reporting issues. AMS officials indicated that these suggestions appeared to be reasonable and that they would consider taking both steps. AMS data show that from April through June 2005, 4 percent, 5 percent, and 7 percent of selected cattle, beef, and hog data, respectively, were received by AMS from packers after the deadlines set by the Livestock Mandatory Reporting Act. Nevertheless, AMS officials said that while some packers missed the reporting deadlines, most usually submitted their transaction data within minutes thereafter—giving AMS reporters enough time to include almost all transaction data in market news reports. 
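The statistical-sampling approach discussed above could, in principle, support a projection of the overall accuracy of packers’ transaction data. The sketch below is illustrative only and is not AMS methodology: it assumes a simple random sample of transactions drawn from a reporting period, uses a normal approximation for the confidence interval, and the sample size and error count shown are hypothetical.

```python
# Illustrative sketch (not AMS methodology): projecting an overall error rate
# for packers' transaction data from a simple random sample of transactions,
# using a normal approximation to the binomial proportion confidence interval.
import math

def projected_error_rate(sample_size, errors_found, z=1.96):
    """Return the point estimate and an approximate 95 percent confidence
    interval for the proportion of erroneous transactions overall."""
    p = errors_found / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical figures: 1,000 randomly sampled transactions, 25 with errors.
estimate, low, high = projected_error_rate(1000, 25)
print(f"Estimated error rate: {estimate:.1%} "
      f"(95% confidence interval roughly {low:.1%} to {high:.1%})")
```

A refinement would apply a finite population correction or stratify the sample by packer or commodity, which is closer to what would be needed to focus future audits on recurring significant problems.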
In addition, AMS officials said that if some reporting deadlines and publication times set in the Livestock Mandatory Reporting Act were changed, this would help packers working on the West Coast meet the reporting schedule and help AMS respond to changing market conditions. GIPSA and AMS coordination has been limited, primarily due to the legal authority within which each operates. AMS implemented and enforced the Livestock Mandatory Reporting Act. While the Livestock Mandatory Reporting Act called for the establishment of a mandatory reporting program, it required that information be made available to the public in a manner that ensured the confidentiality of the identity of persons and proprietary business information. Such information could not be disclosed except (1) to USDA agents or employees in the course of their duties under the Livestock Mandatory Reporting Act, (2) as directed by the Secretary or the Attorney General for enforcement purposes, or (3) by a court. AMS officials said that they have shared packer transaction data with GIPSA when requested for specific investigations. GIPSA implements and enforces the Packers and Stockyards Act. GIPSA monitors livestock markets and investigates when it has reason to believe there have been violations of the act. Since 1999, when the Livestock Mandatory Reporting Act was adopted, there have been two cases in which GIPSA formally requested access to a packer’s transaction data from AMS for specific investigations. AMS provided access as GIPSA requested. One investigation involved hogs, and the other, lamb. In one case, opened in October 2002, GIPSA investigated whether a packer was manipulating reported prices in AMS’s livestock mandatory reporting program to reduce its procurement costs. GIPSA did not identify a violation of the Packers and Stockyards Act, and closed this investigation in 2005. However, GIPSA identified instances in which the packer’s reports of negotiated livestock purchases met the documentation standards of the Packers and Stockyards Act, but may not have met the standards of the Livestock Mandatory Reporting Act. In September 2005, GIPSA officials briefed AMS officials on their investigation, and suggested that AMS consider whether the packer was complying with the Livestock Mandatory Reporting Act. In response to our further questions about this case, officials of AMS and GIPSA said that they would consider additional inquiry or investigation under both statutes to determine if there have been repeated transactions reported to AMS for which the packer lacks certain documentation. In the second case, GIPSA investigated the possibility that a packer paid less for livestock as a result of providing undue preference to a select group of producers. GIPSA initiated this case in May 2002 and closed it in September 2005. GIPSA officials said that individual packer transaction data held by AMS would be useful for monitoring competitive behavior in livestock markets. However, because GIPSA could not obtain that confidential information unless the Attorney General or the Secretary directed disclosure of the information for enforcement purposes, GIPSA is making do with the publicly available AMS livestock market report data. This monitoring effort is limited because AMS reports do not include the company-specific transaction data that might reveal anti-competitive behavior. 
More specifically, GIPSA uses publicly available AMS report data in cattle and hog price monitoring programs to forecast market prices for comparison with actual prices. If there are notable differences, GIPSA officials attempt to assess whether economic conditions could be responsible. Should GIPSA find that a difference was not readily explained by economic conditions, then GIPSA would further investigate to determine if anti-competitive behavior by individual firms was involved. At such a point, GIPSA may request that AMS provide company-specific livestock transaction data for GIPSA’s investigation. GIPSA officials said that while this monitoring effort is less informative than one that would rely on company-specific transaction data, their monitoring programs are relatively new and they have not identified better alternatives at this point. AMS has not achieved the level of transparency needed for establishing the reliability of its livestock market news reports—a level that would more fully disclose to market participants and observers its practices in reviewing packers’ transactions, and the effects on AMS reports. Without further disclosure of its reporting practices, market participants are less informed than they should be about (1) AMS reporters’ reviews, (2) AMS decisions on presenting prevailing prices, and (3) the results of AMS audits of packers’ transactions. Also, the lack of precision and clarity in AMS’s varying instructions for its reporters has led to inconsistent reporting approaches, which could adversely affect readers’ confidence in AMS reports. AMS market news readers should have information that enables them to understand AMS’s approach to reporting prices, and to have confidence that the approaches are based on sound statistical, economic, and reporting guidance. In addition, the problems that AMS audits identified in packers’ transaction information warrant continued vigilance if the mandatory reporting program is renewed. Unless AMS takes some additional steps, it will not have information to (1) assess the overall accuracy of packers’ transaction data, (2) focus its audit efforts on recurring significant problems, and (3) ensure that prompt and consistent action on audit findings is being taken. Concerning the GIPSA investigation in which GIPSA raised questions about a packer's documentation of its transactions, unless AMS and GIPSA complete further investigative work, neither agency can have assurance of the accuracy and propriety of the packer’s transactions. Should Congress extend the Livestock Mandatory Reporting Act, we recommend that the Secretary of Agriculture direct the Administrator, Agricultural Marketing Service, to: Increase transparency by (1) reporting to market news readers on its reporters’ instructions for making reporting decisions that reflect prevailing market conditions, (2) periodically reporting on the effects of reporters’ decisions on AMS reported prices, and (3) reporting the results of its audit efforts. Clarify AMS reporters’ instructions to make them more specific and consistent by (1) consulting with packers, producers, agricultural economists, and other interested stakeholders, and (2) undertaking revisions that consider economic analyses of past reporting trends, livestock and meat market variations, and federal statistical and information reporting guidance. Develop information about the overall accuracy of packers’ transaction data by auditing a statistical sample of packers’ transactions. 
Further develop AMS audit strategies to identify recurring significant problems. Address the timeliness and consistency of AMS reporters’ efforts to follow up on audit findings. We also recommend that the Secretary of Agriculture direct the Administrators of the Agricultural Marketing Service and the Grain Inspection, Packers and Stockyards Administration to further investigate one packer’s reporting of its low-price purchases of livestock. We provided USDA with a draft of this report for review and comment. In a memorandum dated November 18, 2005, we received formal comments from USDA’s Acting Under Secretary for Marketing and Regulatory Programs. These comments are reprinted in appendix IV. We also received oral technical comments from AMS and GIPSA officials, which we incorporated into the report as appropriate. USDA generally agreed with our findings and recommendations, and discussed the actions it has taken, is taking, or plans to take to address our recommendations. Among other things, USDA stated that AMS would (1) prepare publicly available reports on the volume of transactions excluded by reporters and their effect on reported prices, and take steps to increase public awareness of reporting methods and processes; (2) clarify AMS reporters’ instructions while following federal and departmental statistical and information reporting guidance; (3) post quarterly audit information to its Web site and identify additional audit information to add in the future; (4) develop auditing methods to allow conclusions to be drawn about overall data accuracy; (5) review its auditing methods to increase the overall effectiveness of the compliance program; and (6) conduct further inquiry into the issues raised during one of GIPSA’s investigations. Concerning the transactions that AMS excluded from its market news reports, USDA agreed that 22.8 percent of cattle transactions were excluded from May to October 2003. USDA added that AMS reporters excluded some transactions during that period because its computer system could not differentiate between the base and net prices for certain cattle sales. Our review indicates that AMS exclusions for that reason were only part of the explanation. More specifically, AMS reporters’ log entries showed that of the transactions AMS excluded from May to October 2003, about 24 percent were excluded for reasons relating to base prices, while about 34 percent of the transactions were excluded to narrow the range of prices that AMS reported, and the remainder were excluded for a variety of other reasons such as small head count, small lots, low weight, mixed lots, or grade of cattle. In addition, AMS suggested that its programming change to differentiate base and net prices led to fewer exclusions (8.8 percent) during the April through June 2005 period. While we agree that is part of the explanation, we believe, if the livestock mandatory reporting program is renewed, that AMS needs to focus on the bases and methods for excluding transactions, and especially the extent to which AMS will be excluding transactions when prices are again rapidly changing, as they did in 2003. AMS also stated that care should be exercised when drawing conclusions about packer compliance because packers’ errors are relatively few compared to the 500,000 data elements packers may have submitted on some days. We believe insufficient information is available to assess the overall quality of packer data. 
AMS audits focused on only a small portion of the data submitted by packers, and it is likely that packers’ unaudited transactions contain errors as well. We continue to believe that the packer reporting problems that AMS identified warrant continued vigilance should the program be renewed, and we recommend that AMS develop auditing methods that allow conclusions to be drawn about the overall accuracy of packers’ data. As agreed with your staffs, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Agriculture; the Under Secretary for Marketing and Regulatory Programs; the Administrators of the Agricultural Marketing Service and the Grain Inspection, Packers and Stockyards Administration; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge at GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or robinsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our objectives were to review the extent to which (1) the U.S. Department of Agriculture’s (USDA) Agricultural Marketing Service (AMS) takes sufficient steps to ensure the quality of its livestock mandatory market news reports, and (2) AMS and the Grain Inspection, Packers and Stockyards Administration (GIPSA) coordinate efforts to encourage competition in livestock markets. To review AMS’s steps to ensure the quality of its reports, we visited the two Market News Branch (Market News) field offices in Des Moines, IA, and St. Joseph, MO, and spoke with AMS reporters about their responsibilities related to mandatory price reporting and observed them as they prepared livestock mandatory reports for cattle, beef, hogs, lamb, and lamb meat. To test AMS’s computerized reporting system, we obtained and analyzed unpublished data from AMS’s livestock mandatory reporting database for beef, cattle, and swine. For this analysis, we used data reported by packers through the Live Cattle Daily Report (Current Established Prices) (LS-113), Swine Daily Report (LS-119), and Boxed Beef Daily Report (LS-126) contained in AMS’s livestock mandatory reporting database. We reviewed USDA documents on the report preparation and data storage system and analyzed the flow of data into and through the system. We performed electronic testing and validation of system data developed for us from data available in the AMS system. We found the data were sufficiently reliable to support our analyses. We also replicated elements of certain reports—the Five Area Daily Weighted Average Direct Slaughter Cattle Report and the National Daily Direct Morning Hog Report—that livestock experts told us were important to livestock producers. In addition, we examined transactions reporters excluded from AMS reports. First, we examined transactions made between April and June 2005. More specifically, we reviewed the data packers submitted on the Live Cattle Daily Report (Current Established Prices) (LS-113), Swine Daily Report (LS-119), and Boxed Beef Daily Report (LS-126) and compared them with the reports published during this period. 
Second, we examined transactions AMS excluded from its reports during periods of rapidly rising cattle and hog prices—for cattle, transactions excluded by reporters for a key category of live and dressed cattle prices from May through October 2003; for hogs, those excluded from October 2003 to March 2004. To determine which transactions were eliminated for market reasons, we reviewed the reporter log field in the database. The logs identify transactions eliminated for various reasons, such as price, low price, high price, or lot size. We analyzed data from all days reported for this time period in the 35 to 65 percent choice steer grade of the Five Area Weighted Average Direct Slaughter Cattle Report. We then calculated the weighted average prices with and without the excluded transactions and the difference between these prices. In addition, we performed a statistical test to determine whether the difference between the prices, as a group, was statistically significant. We discussed with AMS’s Audit Review and Compliance (ARC) officials in USDA headquarters, and with auditors in both Des Moines and St. Joseph, how AMS performed audits to ensure that packers were complying with the Livestock Mandatory Reporting Act provisions. As part of this effort, we obtained and reviewed the reports of the mandatory price reporting audits that ARC conducted from May 2002 through April 2005. In particular, we used ARC’s database of audit reports to analyze the number of audits conducted over the time period, the number of findings related to those audits, and other information. ARC officials and our analysis indicated that the number of audit reports in the database closely approximated the number of audits conducted. We found this database to be sufficiently reliable for this purpose. Because this database did not provide specifics on the reasons AMS believed some companies were out of compliance, we performed a detailed review of all audit reports during one 4-month audit cycle from June through September 2004. We also obtained information from AMS headquarters officials regarding the formal warning letters they sent packers and the penalties they assessed. We analyzed ARC’s audit methodology for sampling transactions and the extent to which that sample of transactions could provide information on packer compliance and the accuracy of the reported prices. In addition, we reviewed ARC policy and procedures and the audit report database, and we had discussions with ARC officials and auditors. Specifically, we interviewed ARC officials regarding their audit methodology with emphasis on their sampling methodology, and we reviewed their documentation on sample selection. Furthermore, to analyze the agency’s sampling procedure, we compared the time between the audit field visit and the days selected for the audit of a full day’s transactions and for the audit of a sample of transactions over the 4-month audit cycle from June through September 2004. To determine the extent of coordination between GIPSA and AMS, we reviewed their legislative authority, identified activities and investigations involving both agencies, and reviewed GIPSA case file documentation from the competition-related investigations in which GIPSA obtained packers’ transaction data from AMS. We met with USDA headquarters officials from AMS and GIPSA. In Des Moines, we met with GIPSA’s Packers and Stockyards Programs regional officials and, on separate occasions, spoke with GIPSA’s Denver Regional Office officials regarding GIPSA and AMS coordination. 
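The price comparison described above can be sketched in a few lines of code. This is an illustrative reconstruction, not GAO’s or AMS’s actual code: the record fields shown are hypothetical, weighting by head count is a simplifying assumption, and because the report does not name the statistical test used, a paired t-test on the two daily price series is shown as one reasonable choice.

```python
# Illustrative sketch of comparing daily weighted average prices with and
# without the transactions excluded for market reasons; hypothetical fields.
from scipy import stats

def daily_weighted_avg(transactions, restore_exclusions):
    """Head-count-weighted average price for one reporting day."""
    rows = [t for t in transactions
            if restore_exclusions or not t["excluded_for_market_reasons"]]
    total_head = sum(t["head"] for t in rows)
    return sum(t["head"] * t["price"] for t in rows) / total_head

def compare_price_series(days):
    """days: one list of transaction records per reporting day."""
    as_reported = [daily_weighted_avg(d, restore_exclusions=False) for d in days]
    exclusions_restored = [daily_weighted_avg(d, restore_exclusions=True) for d in days]
    # Paired test across reporting days (one reasonable choice of test).
    t_stat, p_value = stats.ttest_rel(as_reported, exclusions_restored)
    return as_reported, exclusions_restored, t_stat, p_value

# Minimal hypothetical usage; a real analysis would cover every reporting day
# in the May through October 2003 period rather than two made-up days.
day1 = [{"head": 100, "price": 85.0, "excluded_for_market_reasons": False},
        {"head": 30,  "price": 79.0, "excluded_for_market_reasons": True}]
day2 = [{"head": 120, "price": 86.5, "excluded_for_market_reasons": False},
        {"head": 40,  "price": 80.5, "excluded_for_market_reasons": True}]
print(compare_price_series([day1, day2]))
```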
During the course of our review, we identified and obtained the views of several industry groups and associations representing packers and producers. We also interviewed several nationally recognized economic experts knowledgeable about mandatory price reporting and related market issues. We conducted our review between February and November 2005 in accordance with generally accepted government auditing standards. Overall, from April 2005 through June 2005, we found approximately 8.8 percent of cattle transactions, 0.2 percent of hog transactions, and 2.7 percent of boxed beef transactions were eliminated. From May 2003 to October 2003, a period of rapidly rising prices, we found that approximately 22.8 percent of all cattle transactions were excluded from AMS reports. Figure 1 shows that close to 95 percent of all excluded dressed weight cattle transactions from negotiated sales were smaller lots—groupings of cattle for sales purposes—of fewer than 25 cattle. However, as figure 2 shows, the proportion of negotiated live cattle transactions that were eliminated consisted of lots that were relatively larger than dressed cattle lots and more consistent in size; about 75 percent of lots were greater than the 0 to 25 lot category and over 10 percent were between 201 and 400 head of cattle. Information on the size distribution of excluded lots is relevant because excluding large lots could have a relatively greater impact on weighted average prices reported by AMS than smaller lots. Also, the effects of excluding large lots could be greater in daily reports when trade volume is light, and an accumulation of excluded large lots could affect weekly and monthly reports. Market News reporters of hog trade eliminated significantly fewer transactions than the cattle reporters early on in the livestock mandatory reporting program. For hogs, from October 2003 to March 2004, we found that approximately 0.1 percent of transactions were excluded, which was less than 0.1 percent of all hogs. Figure 3 shows that, for negotiated sales, while nearly 40 percent of excluded transactions were smaller lots of 50 hogs or less, the largest category of slaughtered swine excluded—over 35 percent—were somewhat larger lots, in the 151–200 head lot category. During a sample period of rapidly rising prices, our analysis of cattle and hog livestock data shows that the elimination of transactions from Market News reports narrowed price ranges while having a limited, but frequently positive, effect on the average reported price. To illustrate this process, figures 4 and 5 show the differences in the distributions of cattle prices for dressed steers from May through July 2003 and how reporters’ exclusion of cattle transactions eliminated outlying prices and narrowed the range of prices. During this same time period, reporters’ exclusions decreased the number of packer transactions from 4066 to 3334. Excluding these transactions narrowed the associated price range—the difference between the minimum and maximum price—from $117.95 to $16.50 per hundredweight. Market News reporters’ elimination of data for market reasons from reports between May and October 2003 had the effect of narrowing price spreads or ranges on a daily basis. For dressed steers, figure 6 shows the narrowing of the range of prices over this period before and after all excluded transactions, most of which were excluded for market reasons. 
As shown in the figure, price ranges before any excluded transactions during this period were from $2 to $20 per hundredweight while, after all market exclusions were made, the range decreased to between $0 and $12 per hundredweight. Market News reporters are instructed to exclude prices that are $5 above or below the market to narrow the range of reported prices, and AMS record logs indicate that they do so. However, when prices are rising or falling rapidly, this practice may exclude some transactions that should reasonably be presented as reflecting the day-to-day variations in the market. Also, since these are national daily reports, price spreads tend to be larger since they encompass the full range of prices for all regions. During May to October 2003, a period of rapidly rising cattle prices, we estimate that the effect of eliminating transactions for market reasons was negligible about two-thirds of the time, while for the remaining third the reported average prices were generally higher than they would have been had these transactions not been eliminated. For live cattle sales, figure 7 displays the differences between the average weighted daily prices after AMS exclusions (as reported in Market News reports) and the average weighted prices based on including the transactions that AMS had excluded for market reasons for 35–65 percent choice steers from May through October 2003. The average weighted prices published by AMS for these dates were the same about 67 percent of the time, higher 31 percent of the time, and lower 2 percent of the time over this period. This suggests, and Market News record logs confirm, that during this period when Market News reporters were excluding transactions, they were predominantly excluding transactions for reasons of low price rather than high price. We found that over twice as many transactions were excluded for low price as for high price during this period. For 35 to 65 percent choice steers, dressed weight, figure 8 shows the differences between the daily weighted average prices reported by AMS and the average prices that AMS would have reported if AMS reporters had not eliminated transactions for market reasons. These differences display a trend similar to the one we identified for live cattle prices. When we compared our calculations of the weighted average prices with those AMS reported, about 32 percent of the prices AMS reported were higher than those it would have reported had the transactions not been excluded, about 67 percent were the same or about the same, and 1 percent were lower. This result indicates that market reporters of livestock were excluding a higher proportion of low prices during this period. AMS reporters may have excluded low prices more frequently during the period because prices were rising. What a reporter considered to be a high price during one week may have appeared to be a much lower price by the following week. Also, at the low end of the price ranges, transactions may have been excluded because the prices represented low-quality animals. The effect of an excluded transaction on any particular day is determined by how large that transaction is compared to the size and number of transactions that took place on that day or that week, and how far its price is from the range of reported prices. While each excluded transaction alone may involve only a small lot, a number of transactions excluded for this reason can cumulatively have a large effect on the weighted average price. 
To determine if there was an overall statistical difference between our replications of AMS prices and the prices we determined would have been reported had reporters not eliminated transactions for market reasons, we tested the two average weighted price series for both live and dressed cattle. We found that for both live and dressed weight cattle, there was a statistically significant difference in the weighted averages between reported AMS prices and the prices that would have been reported if exclusions had not been made for market reasons. Our analysis of data from AMS’s daily hog reports from October 2003 to March 2004 showed that, for the reports that we examined, reporters frequently eliminated transactions that they believed to be errors that would potentially widen price ranges. However, unlike cattle, there were very few transactions eliminated from reports for market reasons. As a result, for hogs, price ranges with and without exclusions by market news reporters were more similar than for cattle. As illustrated in figure 9, the difference between prices reported by AMS and prices that Market News would have reported had no transactions been excluded was notable on only 7 days for the National Daily Direct Morning Hog Report from October 2003 through March 2004. A similar analysis of the afternoon hog report shows the same pattern. (Appendix III table excerpt: examples of reporting issues that led to AMS warning letters, such as incorrectly rounding reported sale prices and not reporting all sales required by the Livestock Mandatory Reporting Act; related correspondence, including a 3/18/03 letter from the Deputy Administrator in response to a 2/10/03 letter from a packer; and the status of pending issues, with Market News reviewing the results of a 9/15/05 audit and current information provided by the packer to determine whether further action is warranted.) In addition to the individual named above, Charles Adams, Assistant Director, and Aldo Davila, Barbara El Osta, Paige Gilbreath, Kirk Menard, Lynn Musser, Karen O’Conor, Alison O’Neill, Vanessa Taylor, and Amy Webbink made key contributions to this report.
Livestock producers, with gross income of $63 billion in 2004, depend on USDA's daily, weekly, and monthly livestock market news reports. These reports provide them and others in the industry with livestock and meat prices and volumes, which are helpful as they negotiate sales of cattle, hogs, lamb and meat products. Packers also use the average prices in these reports as a basis for paying some producers with whom the packers have contracts. In 1999, the Livestock Mandatory Reporting Act was passed to substantially increase the volume of industry sales transactions covered by USDA's market news reports and thereby encourage competition in the industry. In the context of ongoing discussions about the renewal of this act, GAO reviewed (1) USDA's efforts to ensure the quality of its livestock market news reports and (2) the coordination between two USDA agencies that are responsible for promoting competition in livestock markets. While the U.S. Department of Agriculture (USDA) took important actions to produce quality livestock market news reports, GAO found that USDA could improve the reports' transparency. Although packers with large plants must report all of their livestock transactions to USDA, GAO found that USDA market news reporters regularly excluded some transactions as they prepared USDA's reports. For example, GAO's analysis showed that from April through June 2005, USDA reporters excluded about 9 percent of the cattle transactions that packers had reported. When USDA excluded transactions, this sometimes changed the low, high, and average prices that USDA would have otherwise reported. However, USDA has not informed its readers of the extent of this practice. Moreover, USDA's instructions for guiding its market news reporters as they prepared their reports lacked clarity and precision, leading to inconsistency in their reporting decisions. In addition, GAO found the accuracy of USDA's livestock market news reports is not fully assured. About 64 percent of 844 USDA audits of packers--conducted over 36 months ending in April 2005--identified packers' transactions that were inaccurately reported, unsupported by documentation, or omitted from packers' reports. Moreover, some packers have not promptly corrected problems. Since 2002, USDA has sent 11 packers 21 letters urging the packers to correct longstanding problems and warning them of the consequences of delay. Twice USDA has levied $10,000 fines on packers, but suspended the fines when these packers agreed to comply. As of September 2005, USDA had continuing issues with 2 of the 11 packers. USDA officials noted that packers' errors are relatively few compared to the large volumes of data that packers report daily. However, USDA has not (1) assessed the overall quality of packers' data, (2) used its audit results to help focus future audit efforts, and (3) ensured that follow-up promptly resolves problems. Two USDA agencies have addressed competition in livestock markets--the Agricultural Marketing Service (AMS) and the Grain Inspection, Packers and Stockyards Administration (GIPSA). GAO found the coordination between these agencies to be limited, primarily due to the legal authority within which each operates. AMS has implemented the Livestock Mandatory Reporting Act. That act did not provide authority for AMS to share individual packer transaction data within USDA except for enforcement purposes. In two investigations, AMS provided packers' data to GIPSA. 
On the other hand, GIPSA enforces the Packers and Stockyards Act and is responsible for addressing unfair and anti-competitive practices in the marketing of livestock. Furthermore, GAO found that GIPSA monitors cattle and hog markets by analyzing publicly available livestock market news reports--an approach that has limitations because it lacks the company-specific information that would be useful for detecting anti-competitive behavior.
You are an expert at summarizing long articles. Proceed to summarize the following text: The United States has approximately 360 commercial sea and river ports that handle more than $1.3 trillion in cargo annually. A wide variety of goods, including automobiles, grain, and millions of cargo containers, travel through these ports each day. While no two ports are exactly alike, many share certain characteristics, such as their size, general proximity to a metropolitan area, the volume of cargo being processed, and connections to complex transportation networks designed to move cargo and commerce as quickly as possible, that make them vulnerable to physical security threats. Entities within the maritime port environment are also vulnerable to cyber-based threats because maritime stakeholders rely on numerous types of information and communications technologies to manage the movement of cargo throughout ports. Examples of these technologies include the following: Terminal operating systems: These are information systems used by terminal operators to, among other things, control container movements and storage. For example, the terminal operating system is to support the logistical management of containers while in the terminal operator’s possession, including container movement and storage. To enhance the terminal operator’s operations, the system can also be integrated with other systems and technologies, such as financial systems, mobile computing, optical character recognition, and radio frequency identification systems. Industrial control systems: In maritime terminals, industrial control systems facilitate the movement of goods throughout the terminal using conveyor belts or pipelines to various structures (e.g., refineries, processing plants, and storage tanks). Business operations systems: These are information and communications technologies used to help support the business operations of the terminal, such as communicating with customers and preparing invoices and billing documentation. These systems can include e-mail and file servers, enterprise resource planning systems, networking equipment, phones, and fax machines. Access control and monitoring systems: Information and communication technology can also be used to support physical security operations at a port. For example, camera surveillance systems can be connected to information system networks to facilitate remote monitoring of port facilities, and electronically enabled physical access control devices can be used to protect sensitive areas of a port. See figure 1 for an overview of the technologies used in the maritime port environment; appendix III contains a printable version of this graphic. The location of the entity that manages these systems can also vary. Port facility officials we interviewed stated that some information technology systems used by their facilities are managed locally at the ports, while others are managed remotely from locations within and outside the United States. In addition, other types of automated infrastructure are used in the global maritime trade industry. For example, some ports in Europe use automated ground vehicles and stacking cranes to facilitate the movement of cargo throughout the ports. Like threats affecting other critical infrastructures, threats to the maritime information technology (IT) infrastructure can come from a wide array of sources. 
For example, advanced persistent threats—where adversaries possess sophisticated levels of expertise and significant resources to pursue their objectives—pose increasing risk. Threat sources include corrupt employees, criminal groups, hackers, and terrorists. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary or political gain or mischief, among other things. Table 1 describes the sources of cyber- based threats in more detail. These sources of cyber threats may make use of various cyber techniques, or exploits, to adversely affect information and communications networks. Types of exploits include denial-of-service attacks, phishing, Trojan horses, viruses, worms, and attacks on the IT supply chains that support the communications networks. Table 2 describes the types of exploits in more detail. Similar to those in the United States, ports elsewhere in the world also rely on information and communications technology to facilitate their operations, and concerns about the potential impact of cybersecurity threats and vulnerabilities on these operations have been raised. For example, according to a 2011 report issued by the European Network and Information Security Agency, the maritime environment, like other sectors, increasingly relies on information and communications systems to optimize its operations, and the increased dependency on these systems, combined with the operational complexity and multiple stakeholders involved, make the environment vulnerable to cyber attacks. In addition, Australia’s Office of the Inspector of Transport Security reported in June 2012 that a cyber attack is probably the most serious threat to the integrity of offshore oil and gas facilities and land-based production. In addition, a recently reported incident highlights the risk that cybersecurity threats pose to the maritime port environment. Specifically, according to Europol’s European Cybercrime Center, a cyber incident was reported in 2013 (and corroborated by the Federal Bureau of Investigation) in which malware was installed on a computer at a foreign port. The reported goal of the attack was to track the movement of shipping containers for smuggling purposes. A criminal group used hackers to break into the terminal operating system to gain access to security and location information that was leveraged to remove the containers from the port. Port owners and operators are responsible for the cybersecurity of their operations, and federal plans and policies specify roles and responsibilities for federal agencies to support those efforts. In particular, the National Infrastructure Protection Plan (NIPP), a planning document originally developed pursuant to the Homeland Security Act of 2002 and Homeland Security Presidential Directive 7 (HSPD-7), sets forth a risk management framework to address the risks posed by cyber, human, and physical elements of critical infrastructure. It details the roles and responsibilities of DHS in protecting the nation’s critical infrastructures; identifies agencies that have lead responsibility for coordinating with the sectors (referred to as sector-specific agencies); and specifies how other federal, state, regional, local, tribal, territorial, and private-sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. 
In addition, NIPP sets up a framework for operating and sharing information across and between federal and nonfederal stakeholders within each sector that includes the establishment of two types of councils: sector coordinating councils and government coordinating councils. The 2006 and 2009 NIPPs identified the U.S. Coast Guard as the sector-specific agency for the maritime mode of the transportation sector. In this role, the Coast Guard is to coordinate protective programs and resilience strategies for the maritime environment. Under NIPP, each critical infrastructure sector is also to develop a sector-specific plan to detail the application of its risk management framework for the sector. The 2010 Transportation Systems Sector-Specific Plan includes an annex for the maritime mode of transportation. The maritime annex is considered an implementation plan that details the individual characteristics of the maritime mode and how it will apply risk management, including a formal assessment of risk, to protect its systems, assets, people, and goods. In February 2013, the White House issued Presidential Policy Directive 21, which shifted the nation’s focus from protecting critical infrastructure against terrorism toward protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents. The directive identified sector-specific agency roles and responsibilities to include, among other things, serving as a day-to-day federal interface for the prioritization and coordination of sector-specific activities. In December 2013, DHS released an updated version of NIPP. The 2013 NIPP reaffirms the role of various coordinating structures (such as sector coordinating councils and government coordinating councils) and integrates cyber and physical security and resilience efforts into an enterprise approach for risk management, among other things. The 2013 NIPP also reiterates the sector-specific agency roles and responsibilities as defined in Presidential Policy Directive 21. In addition, in February 2013 the President signed Executive Order 13636 for improving critical infrastructure cybersecurity. The executive order states that, among other things, the National Institute of Standards and Technology shall lead the development of a cybersecurity framework that will provide technology-neutral guidance; the policy of the federal government is to increase the volume, timeliness, and quality of cyber threat information sharing with the U.S. private sector; agencies with responsibility to regulate the security of critical infrastructure shall consider prioritized actions to promote cybersecurity; and DHS shall identify critical infrastructure where a cybersecurity incident could have a catastrophic effect on public health or safety, economic security, or national security. The primary laws and regulations that establish DHS’s maritime security requirements include the Maritime Transportation Security Act of 2002 (MTSA), the Security and Accountability for Every Port Act of 2006 (SAFE Port Act), and the Coast Guard’s implementing regulations for these laws. Enacted in November 2002, MTSA requires a wide range of security improvements for protecting our nation’s ports, waterways, and coastal areas. DHS is the lead agency for implementing the act’s provisions and relies on its component agencies, including the Coast Guard and FEMA, to help implement the act. The Coast Guard is responsible for security of U.S. 
maritime interests, including completion of security plans related to geographic areas around ports with input from port stakeholders. These plans are to assist the Coast Guard in the protection against transportation security incidents across the maritime port environment. The Coast Guard has designated a captain of the port within each of 43 geographically defined port areas across the nation who is responsible for overseeing the development of the security plans within his or her respective geographic region. The MTSA implementing regulations, developed by the Coast Guard, require the establishment of area maritime security committees across all port areas. The committees for each of the 43 identified port areas, which are organized by the Coast Guard, consist of key stakeholders who (1) may be affected by security policies and (2) share information and develop port security plans. Members of the committees can include a diverse array of port stakeholders, including federal, state, local, tribal, and territorial law enforcement agencies, as well as private sector entities such as terminal operators, yacht clubs, shipyards, marine exchanges, commercial fishermen, trucking and railroad companies, organized labor, and trade associations. These committees are to identify critical port infrastructure and risks to the port, develop mitigation strategies for these risks, and communicate appropriate security information to port stakeholders. The area maritime security committees, in consultation with applicable stakeholders within their geographic region, are to assist the Coast Guard in developing the port area maritime security plans. Each area maritime security plan is to describe the area and infrastructure covered by the plan, establish area response and recovery protocols for a transportation security incident, and include any other information DHS requires. In addition, during the development of each plan, the Coast Guard is to develop a risk-based security assessment that includes the identification of the critical infrastructure and operations in the port, a threat assessment, and a vulnerability and consequence assessment, among other things. The assessment is also to consider, among other things, physical security of infrastructure and operations of the port, existing security systems available to protect maritime personnel, and radio and telecommunication systems, including computer systems and networks as well as other areas that may, if damaged, pose a risk to people, infrastructure, or operations within the port. Upon completion of the assessment, a written report must be prepared that documents the assessment methodology that was employed, describes each vulnerability identified and the resulting consequences, and provides risk reduction strategies that could be used for continued operations in the port. MTSA and its associated regulations also require port facility owners and operators to develop facility security plans for the purpose of preparing certain maritime facilities, such as container terminals and chemical processing plants, to deter a transportation security incident. The plans are to be updated at least every 5 years and are expected to be consistent with the port’s area maritime security plan. The MTSA implementing regulations require that the facility security plans document information on security systems and communications, as well as facility vulnerability and security measures, among other things. 
The implementing regulations also require port facility owners and operators, as well as their designated facility security officers, to ensure that a facility security assessment is conducted and that, upon completion, a written report is included with the corresponding facility security plan submission for review and approval by the captain of the port. The facility security assessment report must include an analysis that considers measures to protect radio and telecommunications equipment, including computer systems and networks, among other things. Enacted in October 2006, the SAFE Port Act created and codified new programs and initiatives related to the security of U.S. ports, and amended some of the original provisions of MTSA. For example, the SAFE Port Act required the Coast Guard to establish a port security exercise program. MTSA also codified the Port Security Grant Program, which is to help defray the costs of implementing security measures at domestic ports. According to MTSA, funding is to be directed towards the implementation of area maritime security plans and facility security plans among port authorities, facility operators, and state and local government agencies that are required to provide port security services. Port areas use funding from the grant program to improve port-wide risk management, enhance maritime domain awareness, and improve port recovery and resiliency efforts through developing security plans, purchasing security equipment, and providing security training to employees. FEMA is responsible for designing and operating the administrative mechanisms needed to implement and manage the grant program. Coast Guard officials provide subject matter expertise regarding the maritime industry to FEMA to inform grant award decisions. DHS and other stakeholders have taken limited steps with respect to maritime cybersecurity. In particular, the Coast Guard did not address cybersecurity threats in a 2012 national-level risk assessment. In addition, area maritime security plans and facility security plans provide limited coverage of cybersecurity considerations. While the Coast Guard helped to establish mechanisms for sharing security-related information, the degree to which these mechanisms were active and facilitated the sharing of cybersecurity-related information varied. Also, FEMA has taken steps to address cybersecurity through the Port Security Grant Program, but it has not taken additional steps to help ensure cyber-related risks are effectively addressed. Other federal stakeholders have also taken some actions to address cybersecurity in the maritime environment. According to DHS officials, a primary reason for limited efforts in addressing cyber-related threats in the maritime environment is that the severity of cyber-related threats has only recently been recognized. Until the Coast Guard and FEMA take additional steps to more fully implement their efforts, the maritime port environment remains at risk of not adequately considering cyber-based threats in its mitigation efforts. While the Coast Guard has assessed risks associated with physical threats to port environments, these assessments have not considered risks related to cyber threats. 
NIPP recommends sector-specific agencies and critical infrastructure partners manage risks from significant threats and hazards to physical and cyber critical infrastructure for their respective sectors through, among other things, the identification and detection of threats and hazards to the nation’s critical infrastructure; reduction of vulnerabilities of critical assets, systems, and networks; and mitigation of potential consequences to critical infrastructure if incidents occur. The Coast Guard completes, on a biennial basis, the National Maritime Strategic Risk Assessment, which is to be an assessment of risk within the maritime environment and risk reduction based on the agency’s efforts. Its results are to provide a picture of the risk environment, including a description of the types of threats the Coast Guard is expected to encounter within its areas of responsibility, such as ensuring the security of port facilities, over the next 5 to 8 years. The risk assessment is also to be informed by numerous inputs, such as historical incident and performance data, the views of subject matter experts, and risk models, including the Maritime Security Risk Analysis Model. However, the Coast Guard did not address cybersecurity in the fourth and latest iteration of the National Maritime Strategic Risk Assessment, which was issued in 2012. While the assessment contained information regarding threats, vulnerabilities, and the mitigation of potential risks in the maritime environment, none of the information addressed cyber-related risks. The Coast Guard attributed this gap to its limited efforts to develop inputs related to cyber threats, vulnerabilities, and consequences to inform the assessment. Additionally, Coast Guard officials stated that the Maritime Security Risk Analysis Model, a key input to the risk assessment, did not contain information regarding cyber-related threats, vulnerabilities, and potential impacts of cyber incidents. The Coast Guard plans to address this deficiency in the next iteration of the assessment, which is expected to be completed by September 2014, but officials could provide no details on how cybersecurity would be specifically addressed. Without a thorough assessment of cyber-related threats, vulnerabilities, and potential consequences to the maritime subsector, the Coast Guard has limited assurance that the maritime mode is adequately protected against cyber-based threats. Assessments of cyber risk would help the Coast Guard and other maritime stakeholders understand the most likely and severe types of cyber-related incidents that could affect their operations and use this information to support planning and resource allocation to mitigate the risk in a coordinated manner. Until the Coast Guard completes a thorough assessment of cyber risks in the maritime environment, maritime stakeholders will be less able to appropriately plan and allocate resources to protect the maritime transportation mode. MTSA and the SAFE Port Act provide the statutory framework for preventing, protecting against, responding to, and recovering from a transportation security incident in the maritime environment. MTSA requires maritime stakeholders to develop security documentation, including area maritime security plans and facility security plans. These plans, however, do not fully address the cybersecurity of their respective ports and facilities. Area maritime security plans do not fully address cyber-related threats, vulnerabilities, and other considerations. 
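To give a sense of the kind of cyber inputs the risk assessment discussed above lacks, the following is a minimal sketch of a threat-vulnerability-consequence risk score in which cyber scenarios can be compared with physical ones on a common scale. It is illustrative only: the scenarios and numeric values are hypothetical assumptions, and this is not the Coast Guard’s Maritime Security Risk Analysis Model or its methodology.

```python
# Illustrative threat x vulnerability x consequence scoring; hypothetical
# scenarios and values, not the Coast Guard's actual risk model.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    threat: float         # likelihood an attack is attempted (0-1)
    vulnerability: float   # likelihood the attack succeeds if attempted (0-1)
    consequence: float     # relative impact if it succeeds (0-100)

    @property
    def risk(self) -> float:
        # A simple multiplicative score so cyber and physical scenarios can
        # be ranked together when planning mitigation resources.
        return self.threat * self.vulnerability * self.consequence

scenarios = [
    Scenario("Physical intrusion at a container terminal", 0.30, 0.20, 80),
    Scenario("Malware in a terminal operating system", 0.40, 0.50, 70),
    Scenario("Compromise of the camera and access-control network", 0.25, 0.60, 40),
]

for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"{s.name}: risk score {s.risk:.1f}")
```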
The three area maritime security plans we reviewed from the three high-risk port areas we visited generally contained very limited, if any, information about cyber-related threats and mitigation activities. For example, the three plans reviewed included information about the types of information and communications technology systems that would be used to communicate security information to prevent, manage, and respond to a transportation security incident; the types of information that are considered to be Sensitive Security Information; and how to securely handle and transmit this information to those with a need to know. However, the MTSA-required plans did not identify or address any other potential cyber-related threats directed at or vulnerabilities in the information and communications systems or include cybersecurity measures that port area stakeholders should take to prevent, manage, and respond to cyber-related threats and vulnerabilities. Coast Guard officials we met with agreed that the current set of area maritime security plans, developed in 2009, do not include cybersecurity information. This occurred in part because, as Coast Guard officials stated, the guidance for developing area maritime security plans did not require the inclusion of a cyber component. As a result, port area stakeholders may not be adequately prepared to successfully manage the risk of cyber-related transportation security incidents. Coast Guard officials responsible for developing area maritime security plan guidance stated that the implementing policy and guidance for developing the next set of area maritime security plans includes basic considerations that maritime stakeholders should take into account to address cybersecurity. Currently, the area maritime security plans are formally reviewed and approved on a 5-year cycle, so the next updates will occur in 2014 and will be based on recently issued policy and guidance. Coast Guard officials stated that the policy and guidance for developing the area security plans was updated and promulgated in July 2013 and addressed inclusion of basic cyber components. Examples include guidance to identify how the Coast Guard will communicate with port stakeholders in a cyber-degraded environment, the process for reporting a cyber-related breach of security, and direction to take cyber into account when developing a port’s “all hazard”-compatible Marine Transportation System Recovery Plan. Our review of the guidance confirmed that it instructs preparers to generally consider cybersecurity issues related to information and communication technology systems when developing the plans. However, the guidance does not include any information related to the mitigation of cyber threats. Officials representing both the Coast Guard and nonfederal entities that we met with stated that the current facility security plans also do not contain cybersecurity information. Our review of nine facility security plans from the organizations we met with during site visits confirmed that those plans generally have very limited cybersecurity information. For example, two of the plans had generic references to potential cyber threats, but did not have any specific information on assets that were potentially vulnerable or associated mitigation strategies. 
According to federal and nonfederal entities, this is because, similar to the guidance for the area security plans, the current guidelines for facility security plans do not explicitly require entities to include cybersecurity information in the plans. Coast Guard officials stated that the next round of facility security plans, to be developed in 2014, will include cybersecurity provisions. Since the plans are currently in development, we were unable to determine the degree to which cybersecurity information will be included. Without the benefit of a national-level cyber-related risk assessment of the maritime infrastructure to inform the development of the plans, the Coast Guard has limited assurance that maritime-related security plans will appropriately address cyber-related threats and vulnerabilities associated with transportation security incidents. Although the Coast Guard helped to establish mechanisms for sharing security-related information, the degree to which these mechanisms were active and shared cybersecurity-related information varied. As the DHS agency responsible for maritime critical infrastructure protection-related efforts, the Coast Guard is responsible for establishing public-private partnerships and sharing information with federal and nonfederal entities in the maritime community. This information sharing is to occur through formalized mechanisms called for in federal plans and policy. Specifically, federal policy establishes a framework that includes government coordinating councils—composed of federal, state, local, or tribal agencies—and encourages the voluntary formation of sector coordinating councils, typically organized, governed by, and made up of nonfederal stakeholders. Further, federal policy also encourages sector-specific agencies to promote the formulation of information sharing and analysis centers (ISAC), which are to serve as voluntary mechanisms formed by owners and operators for gathering, analyzing, and disseminating information on infrastructure threats and vulnerabilities among owners and operators of the sectors and the federal government. The Maritime Modal Government Coordinating Council was established in 2006 to enable interagency coordination on maritime security issues. Coast Guard officials stated that the primary membership consisted of representatives from the Departments of Homeland Security, Transportation, Commerce, Defense, and Justice. Coast Guard officials stated that the council has met since 2006, but had only recently begun to discuss cybersecurity issues. For example, at its January 2013 annual meeting, the council discussed the implications of Executive Order 13636 for improving critical infrastructure cybersecurity for the maritime mode. In addition, during the January 2014 meeting, Coast Guard officials discussed efforts related to the development of a risk management framework that integrates cyber and physical security resilience efforts. In 2007, the Maritime Modal Sector Coordinating Council, consisting of owners, operators, and associations from within the sector, was established to enable coordination and information sharing within the sector and with government stakeholders. However, the council disbanded in March 2011 and is no longer active. 
Coast Guard officials attributed the demise of the council to a 2010 presidential memorandum that precluded the participation of registered lobbyists in advisory committees and other boards and commissions, which includes all Critical Infrastructure Partnership Advisory Council bodies, including the Critical Infrastructure Cross-Sector Council, and all sector coordinating councils, according to DHS. The former chair of the council stated that a majority of the members were registered lobbyists, and, as small trade associations, did not have non-lobbyist staff who could serve in this role. The Coast Guard has attempted to reestablish the sector coordinating council, but has faced challenges in doing so. According to Coast Guard officials, maritime stakeholders that would likely participate in such a council had viewed it as duplicative of statutorily authorized mechanisms, such as the National Maritime Security Advisory Committee and area maritime security committees. As a result, Coast Guard officials stated that there has been little stakeholder interest in reconstituting the council. While Coast Guard officials stated that these committees, in essence, meet the information-sharing requirements of NIPP and, to some extent, may expand the NIPP construct into real world “all hazards” response and recovery activities, these officials also stated that the committees do not fulfill all the functions of a sector coordinating council. For example, a key function of the council is to provide national-level information sharing and coordination of security-related activities within the sector. In contrast, the activities of the area maritime security committees are generally focused on individual port areas. In addition, while the National Maritime Security Advisory Committee is made up of maritime-related private-sector stakeholders, its primary purpose is to advise and make recommendations to the Secretary of Homeland Security so that the government can take actions related to securing the maritime port environment. Similarly, another primary function of the sector coordinating council may include identifying, developing, and sharing information concerning effective cybersecurity practices, such as cybersecurity working groups, risk assessments, strategies, and plans. Although Coast Guard officials stated that several of the area maritime security committees had addressed cybersecurity in some manner, the committees do not provide a national-level perspective on cybersecurity in the maritime mode. Coast Guard officials could not demonstrate that these committees had a national-level focus to improve the maritime port environment’s cybersecurity posture. In addition, the Maritime Information Sharing and Analysis Center was to serve as the focal point for gathering and disseminating information regarding maritime threats to interested stakeholders; however, Coast Guard officials could not provide evidence that the body was active or identify the types of cybersecurity information that was shared through it. They stated that they fulfill the role of the ISAC through the use of Homeport—a publicly accessible and secure Internet portal that supports port security functionality for operational use. According to the officials, Homeport serves as the Coast Guard’s primary communications tool to support the sharing, collection, and dissemination of information of various classification levels to maritime stakeholders. 
However, the Coast Guard could not show the extent to which cyber-related information was shared through the portal. Though the Coast Guard has established various mechanisms to coordinate and share information among government entities at a national level and between government and private stakeholders at the local level, it has not facilitated the establishment of a national-level council, as recommended by NIPP. The absence of a national-level sector coordinating council increases the risk that critical infrastructure owners and operators would not have a mechanism through which they can identify, develop, and share information concerning effective cybersecurity practices, such as cybersecurity working groups, risk assessments, strategies, and plans. As a result, the Coast Guard would not be aware of and thus not be able to mitigate cyber-based threats. Under the Port Security Grant Program, FEMA has taken steps to address cybersecurity in port areas by identifying enhancing cybersecurity capabilities as a funding priority in fiscal years 2013 and 2014 and by providing general guidance regarding the types of cybersecurity-related proposals eligible for funding. DHS annually produces guidance that provides the funding amounts available under the program for port areas and information about eligible applicants, the application process, and funding priorities for that fiscal year, among other things. Fiscal year 2013 and 2014 guidance stated that DHS identified enhancing cybersecurity capabilities as one of the six priorities for selection criteria for all grant proposals in these funding cycles. FEMA program managers stated that FEMA added projects that aim to enhance cybersecurity capabilities as a funding priority in response to the issuance of Presidential Policy Directive 21 in February 2013. Specifically, the 2013 guidance stated that grant funds may be used to invest in functions that support and enhance port-critical infrastructure and key resources in both physical space and cyberspace under Presidential Policy Directive 21. The 2014 guidance expanded on this guidance to encourage applicants to propose projects to aid in the implementation of the National Institute of Standards and Technology’s cybersecurity framework, established pursuant to Executive Order 13636, and provides a hyperlink to additional information about the framework. In addition, the guidance refers applicants to the just-established DHS Critical Infrastructure Cyber Community Voluntary Program for resources to assist critical infrastructure owners and operators in the adoption of the framework and managing cyber risks. While these actions are positive steps towards addressing cybersecurity in the port environment, FEMA has not consulted individuals with cybersecurity-related subject matter expertise to assist with the review of cybersecurity-related proposals. Program guidance states that grant applications are to undergo a multi-level review for final selection, including a review by a National Review Panel, comprised of subject matter experts drawn from the Departments of Homeland Security and Transportation. However, according to FEMA program managers, the fiscal year 2013 National Review Panel did not include subject matter experts from DHS cybersecurity and critical infrastructure agencies—such as the DHS Office of Cybersecurity and Communications, the DHS Office of Infrastructure Protection, or the Coast Guard’s Cyber Command. 
As a result, the National Review Panel had limited subject matter expertise to evaluate and prioritize cybersecurity-related grant proposals for funding. Specifically, according to FEMA guidance, the proposal review and selection process consists of three levels: an initial review, a field review, and a national-level review. During the initial review, FEMA officials review grant proposals for completion. During the field review, Coast Guard captains of the port, in coordination with officials of the Department of Transportation’s Maritime Administration, review and score proposals according to (1) the degree to which a proposal addresses program goals, including enhancing cybersecurity capabilities, and (2) the degree to which a proposal addresses one of the area maritime security plan priorities (e.g., transportation security incident scenarios), among other factors. The captains of the port provide a prioritized list of eligible projects for funding within each port area to FEMA, which coordinates the national review process. In March 2014, FEMA program managers stated that cybersecurity experts were not involved in the National Review Panel in part because the panel has been downsized in recent years. For the future, the officials stated that FEMA is considering revising the review process to identify cybersecurity proposals early on in the review process in order to obtain relevant experience and expertise from the Coast Guard and other subject matter experts to inform proposal reviews. However, FEMA has not documented this new process or its procedures for the Coast Guard and FEMA officials at the field and national review levels to follow for the fiscal year 2014 and future cycles. In addition, because the Coast Guard has not conducted a comprehensive risk assessment for the maritime environment that includes cyber-related threats, grant applicants and DHS officials have not been able to use the results of such an assessment to inform their grant proposals, project scoring, and risk-based funding decisions. MTSA states that, in administering the program, national economic and strategic defense concerns based on the most current risk assessments available shall be taken into account. Further, according to MTSA, Port Security Grant Program funding is to be used to address Coast Guard-identified vulnerabilities, among other purposes. FEMA officials stated that the agency considers port risk during the allocation and proposal review stages of the program funding cycle. However, FEMA program managers stated that the risk formula and risk-based analysis that FEMA uses in the allocation and proposal review stages do not assess cyber threats and vulnerabilities. Additionally, during the field-level review, captains of the port score grant proposals according to (1) the degree to which a proposal addresses program goals, including enhancing cybersecurity capabilities, and (2) the degree to which a proposal addresses one of the area maritime security plan priorities (e.g., transportation security incident scenarios), among other factors. However, as Coast Guard officials stated, and our review of area maritime security plans indicated, current area maritime security plans generally contain very limited, if any, information about cyber- related threats. Further, a FEMA Port Security Grant Program section chief stated that he was not aware of a risk assessment for the maritime mode that discusses cyber-related threats, vulnerabilities, and potential impact. 
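The field-level scoring described above can be made concrete with a small sketch. The proposals, 0-to-10 scales, and unweighted sum below are hypothetical illustrations; FEMA's actual scoring methodology and weights are not described in this report.

# Illustrative only: hypothetical proposals and scores, not FEMA's methodology.
from typing import NamedTuple

class Proposal(NamedTuple):
    name: str
    program_goal_score: int   # degree the proposal addresses program goals, e.g., enhancing cybersecurity capabilities (0-10)
    amsp_priority_score: int  # degree the proposal addresses an area maritime security plan priority (0-10)

def field_score(p: Proposal) -> int:
    return p.program_goal_score + p.amsp_priority_score

proposals = [
    Proposal("perimeter surveillance camera upgrade", 7, 8),
    # With little or no cyber content in the area maritime security plan, a
    # cyber proposal has no plan priority to map to and scores low on that criterion.
    Proposal("terminal network intrusion detection", 8, 2),
]

# Captains of the port forward a prioritized list of eligible projects to FEMA.
for p in sorted(proposals, key=field_score, reverse=True):
    print(f"{p.name}: field score {field_score(p)}")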
Using the results of such a maritime risk assessment that fully addresses cyber-related threats, vulnerabilities, and consequences, which—as discussed previously—has not been conducted, to inform program guidance could help grant applicants and reviewers more effectively identify and select projects for funding that could enhance the cybersecurity of the nation’s maritime cyber infrastructure. Furthermore, FEMA has not developed or implemented outcome measures to evaluate the effectiveness of the Port Security Grant Program in achieving program goals, including enhancing cybersecurity capabilities. As we reported in November 2011, FEMA had not evaluated the effectiveness of the Port Security Grant Program in strengthening critical maritime infrastructure because it had not implemented measures to track progress toward achieving program goals. Therefore, we recommended that FEMA—in collaboration with the Coast Guard— develop time frames and related milestones for implementing performance measures to monitor the effectiveness of the program. In response, in February 2014 FEMA program managers stated that the agency developed and implemented four management and administrative measures in 2012 and two performance measures to track the amount of funds invested in building and sustaining capabilities in 2013. According to a FEMA program manager, FEMA did not design the two performance measures to evaluate the effectiveness of the program in addressing individual program goals, such as enhancing cybersecurity capabilities, but to gauge the program’s effectiveness in reducing overall maritime risk in a port area based on program funding. While these measures can help improve FEMA’s management of the program by tracking how funds are invested, they do not measure program outcomes. In addition, in February 2012, we found that FEMA had efforts under way to develop outcome measures for the four national preparedness grant programs, including the Port Security Grant Program, but that it had not completed these efforts. Therefore, we recommended that FEMA revise its plan in order to guide the timely completion of ongoing efforts to develop and implement outcome-based performance measures for all four grant programs. In January 2014, FEMA officials stated that they believe that the implementation of project-based grant application tracking and reporting functions within the Non-Disaster Grant Management System will address our February 2012 recommendation that the agency develop outcome measures to determine the effectiveness of the Port Security Grant Program. However, the officials did not provide details about how these functions will address the recommendation. While the development of the Non-Disaster Grant Management System is a positive step toward improving the management and administration of preparedness grants, FEMA officials stated that the deployment of these system functions has been delayed due to budget reductions, and the time frame for building the project-based applications and reporting functions is fiscal year 2016. Therefore, it is too early to determine how FEMA will use the system to evaluate the effectiveness of the Port Security Grant Program. 
Until FEMA develops outcome measures to evaluate the effectiveness of the program in meeting program goals, it cannot provide reasonable assurance that funds invested in port security grants, including those intended to enhance cybersecurity capabilities, are strengthening critical maritime infrastructure—including cyber-based infrastructure—against risks associated with potential terrorist attacks and other incidents. In addition to DHS, the 2010 Transportation Systems Sector-Specific Plan identified the Departments of Commerce, Defense, Justice, and Transportation as members of the Maritime Modal Government Coordinating Council. Many agencies, including others within DHS, had taken some actions with respect to the cybersecurity of the maritime subsector. For more details on these actions, see appendix II. Disruptions in the operations of our nation’s ports, which facilitate the import and export of over $1.3 trillion worth of goods annually, could be devastating to the national economy. While the impact of a physical event (natural or manmade) appears to have been better understood and addressed by maritime stakeholders than cyber-based events, the growing reliance on information and communications technology suggests the need for greater attention to potential cyber-based threats. Within the roles prescribed for them by federal law, plans, and policy, the Coast Guard and FEMA have begun to take action. In particular, the Coast Guard has taken action to address cyber-based threats in its guidance for required area and facility plans and has started to leverage existing information-sharing mechanisms. However, until a comprehensive risk assessment that includes cyber-based threats, vulnerabilities, and consequences of an incident is completed and used to inform the development of guidance and plans, the maritime port sector remains at risk of not adequately considering cyber-based risks in its mitigation efforts. In addition, the maritime sector coordinating council is currently defunct, which may limit efforts to share important information on threats affecting ports and facilities on a national level. Further, FEMA has taken actions to enhance cybersecurity through the Port Security Grant Program by making projects aimed at enhancing cybersecurity one of its funding priorities. However, until it develops procedures to instruct grant reviewers to consult cybersecurity-related subject matter experts and uses the results of a risk assessment that identifies any cyber-related threats and vulnerabilities to inform its funding guidance, FEMA will be limited in its ability to ensure that the program is effectively addressing cyber-related risks in the maritime environment. 
To enhance the cybersecurity of critical infrastructure in the maritime sector, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following actions: work with federal and nonfederal partners to ensure that the maritime risk assessment includes cyber-related threats, vulnerabilities, and potential consequences; use the results of the risk assessment to inform how guidance for area maritime security plans, facility security plans, and other security-related planning should address cyber-related risk for the maritime sector; and work with federal and nonfederal stakeholders to determine if the Maritime Modal Sector Coordinating Council should be reestablished to better facilitate stakeholder coordination and information sharing across the maritime environment at the national level. To help ensure the effective use of Port Security Grant Program funds to support the program’s stated mission of addressing vulnerabilities in the maritime port environment, we recommend that the Secretary of Homeland Security direct the FEMA Administrator to take the following actions: in coordination with the Coast Guard, develop procedures for officials at the field review level (i.e., captains of the port) and national review level (i.e., the National Review Panel and FEMA) to consult cybersecurity subject matter experts from the Coast Guard and other relevant DHS components, if applicable, during the review of cybersecurity grant proposals for funding; and in coordination with the Coast Guard, use any information on cyber-related threats, vulnerabilities, and consequences identified in the maritime risk assessment to inform future versions of funding guidance for grant applicants and reviews at the field and national levels. We provided a draft of this report to the Departments of Homeland Security, Commerce, Defense, Justice, and Transportation for their review and comment. DHS provided written comments on our report (reprinted in app. IV). In its comments, DHS concurred with our recommendations. In addition, the department stated that the Coast Guard is working with a variety of partners to determine how cyber-related threats, vulnerabilities, and potential consequences are to be addressed in the maritime risk assessment, which the Coast Guard will use to inform security planning efforts (including area maritime security plans and facility security plans). DHS also stated that the Coast Guard will continue to promote the re-establishment of a sector coordinating council, and will also continue to use existing information-sharing mechanisms. However, DHS did not provide an estimated completion date for these efforts. In addition, DHS stated that FEMA will work with the Coast Guard to develop the recommended cyber consultation procedures for the Port Security Grant Program by the end of October 2014, and will use any information on cyber-related threats, vulnerabilities, and consequences from the maritime risk assessment in future program guidance, which is scheduled for publication in the first half of fiscal year 2015. Officials from DHS and the Department of Commerce also provided technical comments via e-mail. We incorporated these comments where appropriate. Officials from the Departments of Defense, Justice, and Transportation stated that they had no comments.
We are sending copies of this report to interested congressional committees; the Secretaries of Commerce, Defense, Homeland Security, and Transportation; the Attorney General of the United States; the Director of the Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, or Stephen L. Caldwell at (202) 512-9610 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objective was to identify the extent to which the Department of Homeland Security (DHS) and other stakeholders have taken steps to address cybersecurity in the maritime port environment. The scope of our audit focused on federal agencies that have a role or responsibilities in the security of the maritime port environment, to include port facilities. We focused on the information and communications technology used to operate port facilities. We did not include other aspects of the maritime environment such as vessels, off-shore platforms, inland waterways, intermodal connections, systems used to manage water-based portions of the port, and federally managed information and communication technology. To identify federal agency roles and select the organizations responsible for addressing cybersecurity in the maritime port environment, we reviewed relevant federal law, regulations, policy, and critical infrastructure protection-related strategies, including the following: Homeland Security Act of 2002; Maritime Transportation Security Act of 2002; Homeland Security Presidential Directive 7—Critical Infrastructure Identification, Prioritization, and Protection, December 2003; Security and Accountability for Every Port Act of 2006; 2006 National Infrastructure Protection Plan; 2009 National Infrastructure Protection Plan; 2013 National Infrastructure Protection Plan; 2010 Transportation Systems Sector-Specific Plan; Presidential Policy Directive 21—Critical Infrastructure Security and Resilience, February 12, 2013; Executive Order 13636—Improving Critical Infrastructure Cybersecurity; and Title 33, Code of Federal Regulations, Chapter 1, Subchapter H. We analyzed these documents to identify federal agencies responsible for taking steps to address cybersecurity in the maritime environment, such as developing a risk assessment and information-sharing mechanisms, guiding the development of security plans in response to legal requirements, and providing financial assistance to support maritime port security activities. Based on our analysis, we determined that the U.S. Coast Guard (Coast Guard) and Federal Emergency Management Agency (FEMA), within DHS, were relevant to our objective. We also included the Departments of Transportation, Defense, Commerce, and Justice as they were identified as members of the Maritime Modal Government Coordinating Council in the 2010 Transportation Systems Sector-Specific Plan. We also included other DHS components, such as U.S. Customs and Border Protection, National Protection and Programs Directorate, Transportation Security Administration, and United States Secret Service, based on our prior cybersecurity and port security work and information learned from interviews during our engagement.
To determine the extent to which the Coast Guard and FEMA have taken steps to address cybersecurity in the maritime port environment, we collected and analyzed relevant guidance and reports. For example, we analyzed the Coast Guard’s 2012 National Maritime Strategic Risk Assessment, Coast Guard guidance for developing area maritime security plans, the 2012 Annual Progress Report—National Strategy for Transportation Security, the Transportation Sector Security Risk Assessment, and FEMA guidance for applying for and reviewing proposals under the Port Security Grant Program. We also examined our November 2011 and February 2012 reports related to the Port Security Grant Program and our past work related to FEMA grants management for previously identified issues and context. In addition, we gathered and analyzed documents and interviewed officials from DHS’s Coast Guard, FEMA, U.S. Customs and Border Protection, Office of Cybersecurity and Communications, Office of Infrastructure Protection, Transportation Security Administration, and United States Secret Service; the Department of Commerce’s National Oceanic and Atmospheric Administration; the Department of Defense’s Transportation Command; the Department of Justice’s Federal Bureau of Investigation; and the Department of Transportation’s Maritime Administration, Office of Intelligence, Security and Emergency Response, and the Volpe Center. To gain an understanding of how information and communication technology is used in the maritime port environment and to better understand federal interactions with nonfederal entities on cybersecurity issues, we conducted site visits to three port areas—Houston, Texas; Los Angeles/Long Beach, California; and New Orleans, Louisiana. These ports were selected in a non-generalizable manner based on their identification as both high-risk (Group I) ports by the Port Security Grant Program and as national leaders in calls by specific types of vessels—oil and natural gas, containers, and dry bulk—in the Department of Transportation Maritime Administration’s March 2013 report, Vessel Calls Snapshot, 2011. For those port areas, we analyzed the appropriate area maritime security plans for any cybersecurity-related information. We also randomly selected facility owners from Coast Guard data on those facilities required to prepare facility security plans under the Maritime Transportation Security Act’s implementing regulations. For those facilities whose officials agreed to participate in our review, we interviewed staff familiar with Coast Guard facility security requirements or information technology security, and analyzed their facility security plans for any cybersecurity-related items. We also included additional nonfederal entities such as port authorities and facilities as part of our review. The results of our analysis of area maritime security plans and facility security plans at the selected ports cannot be projected to other facilities at the port areas we visited or other port areas in the country. We also met with other port stakeholders, such as port authorities and an oil storage and transportation facility. We met with the following organizations: APM Terminals; Axiall; Cargill; Domino Sugar Company; Harris County, Texas, Information Technology Center; Louisiana Offshore Oil Port; Magellan Terminals Holdings, L.P.; Metropolitan Stevedoring; Port of Houston Authority; Port of Long Beach; Port of Los Angeles; Port of New Orleans; SSA Marine; St. Bernard Port; and Trans Pacific Container Service.
We determined that information provided by the federal and nonfederal entities, such as the type of information contained within the area maritime security plans and facility security plans, was sufficiently reliable for the purposes of our review. To arrive at this assessment, we corroborated the information by comparing the plans with statements from relevant agency officials. We conducted this performance audit from April 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. This appendix summarizes cybersecurity-related actions, if any, taken by other agencies of the departments identified as members of the Government Coordinating Council of the Maritime Mode related to the nonfederally owned and operated maritime port environment. Under Executive Order 13636, the Secretary of Homeland Security is to use a risk-based approach to identify critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security. The Secretary is also to apply consistent, objective criteria in identifying such critical infrastructure. Sector-specific agencies were to provide the Secretary with information necessary to identify such critical infrastructure. To implement Executive Order 13636, DHS established an Integrated Task Force to, among other things, lead DHS implementation and coordinate interagency and public- and private-sector efforts. One of the eight working groups that made up the task force was assigned the responsibility for identifying cyber-dependent infrastructure. Officials from DHS’s Office of Infrastructure Protection who were responsible for the working group stated that, using the defined methodology, the task force examined the maritime mode as part of its efforts. The Office of Cybersecurity and Communications, among other things, is responsible for collaborating with public, private, and international partners to ensure the security and continuity of the nation’s cyber and communications infrastructures in the event of terrorist attacks, natural disasters, and catastrophic incidents. One division of the Office of Cybersecurity and Communications (Stakeholder Engagement and Cyber Infrastructure Resilience) offers to partner with critical infrastructure partners—including those in the maritime port environment—to conduct cyber resilience reviews. These reviews are voluntary and are based on the CERT® Resilience Management Model, a process improvement model for managing operational resilience. They are facilitated by field-based Cyber Security Advisors. The primary goal of this program is to evaluate how critical infrastructure and key resource providers manage the cybersecurity of significant information.
In addition, the Industrial Control Systems Cyber Emergency Response Team——a branch of the National Cybersecurity and Communications Integration Center division within the Office of Cybersecurity and Communications—directed the development of the Cyber Security Evaluation Tool, which is a self-assessment tool that evaluates the cybersecurity of an automated industrial control or business system using a hybrid risk- and standards-based approach, and provides relevant recommendations for improvement. We observed one maritime port entity engage with Office of Cybersecurity and Communications staff members to conduct a cyber resilience review. According to data provided by Office of Cybersecurity and Communications officials, additional reviews have been conducted with maritime port entities. In addition, three maritime port entities informed us they conducted a self-assessment using the Cyber Security Evaluation Tool. The Office of Infrastructure Protection is responsible for working with public- and private-sector critical infrastructure partners and leads the coordinated national effort to mitigate risk to the nation’s critical infrastructure. Among other things, the Office of Infrastructure Protection has the overall responsibility for coordinating implementation of NIPP across 16 critical infrastructure sectors and overseeing the development of 16 sector-specific plans. Through its Protective Security Coordination Division, the Office of Infrastructure Protection also has a network of field-based protective security advisors, who are security experts that serve as a direct link between the department and critical infrastructure partners in the field. Two nonfederal port stakeholders identified protective security advisors as a resource for assistance in cybersecurity issues. Officials from Infrastructure Protection’s Strategy and Policy Office supported the Coast Guard in developing the sector-specific plan and annual report for the maritime mode. U.S. Customs and Border Protection (CBP) is responsible for securing America’s borders. This includes ensuring that all cargo enters the United States legally, safely, and efficiently through official sea ports of entry; preventing the illegal entry of contraband into the country at and between ports of entry; and enforcing trade, tariff, and intellectual property laws and regulations. In addition, CBP developed and administered the Customs-Trade Partnership Against Terrorism program, a voluntary program where officials work in partnership with private companies to review the security of their international supply chains and improve the security of their shipments to the United States. Under this program, CBP issued minimum security criteria for U.S.-based marine port authority and terminal operators that include information technology security practices (specifically, password protection, establishment of information technology security policies, employee training on information technology security, and developing a system to identify information technology abuse that includes improper access). Among other things, the Secret Service protects the President, Vice President, visiting heads of state and government, and National Special Security Events; safeguards U.S. payment and financial systems; and investigates cyber/electronic crimes. In support of these missions, the Secret Service has several programs that have touched on maritime port cybersecurity. 
The Electronic Crimes Task Force initiative is a network of task forces established in the USA PATRIOT Act for the purpose of preventing, detecting, and investigating various forms of electronic crimes, including potential terrorist attacks against critical infrastructure and financial payments systems. The Secret Service also conducts Critical Systems Protection advances for protective visits. This program identifies, assesses, and mitigates any risks posed by information systems to persons and facilities protected by the Secret Service. It also conducts protective advances to identify, assess, and mitigate any issues identified with networks or systems that could adversely affect the physical security plan or cause physical harm to a protectee. The advances support all of the Secret Service’s protective detail offices by implementing network monitoring, and applying cyber intelligence analysis. Additionally, the program supports full spectrum protective visits, events, or venues domestically, in foreign countries, special events, and national special security events. In addition, Secret Service personnel in Los Angeles have engaged with maritime port stakeholders in Los Angeles and Long Beach in several ways. For example, Secret Service staff gave a general cybersecurity threat presentation to port stakeholders, though no specific cyber threats to the maritime port environment were discussed. In addition, Secret Service was requested by a local governmental entity to assist in assessing the cyber aspects of critical infrastructure. Secret Service officials stated that they are still very early on in this process and are currently working with the entity to identify the critical assets/components of the cyber infrastructure. The process is still in the information-gathering phase, and officials do not expect to release any sort of summary product until mid-2014 at the earliest. Officials stated that the end product would detail any potential vulnerabilities identified during the assessment and make recommendations for mitigation that the stakeholder could implement if it chooses. Secret Service officials also stated that an evaluation was conducted under the Critical Systems Protection Program with a maritime port stakeholder in the Houston area, but did not provide details regarding this evaluation. The Transportation Security Administration (TSA) is the former lead sector-specific agency for the transportation systems sector. TSA currently co-leads the sector with the Department of Transportation and Coast Guard, and it supports, as needed, the Coast Guard’s lead for maritime security. TSA also uses the Transportation Sector Security Risk Assessment to determine relative risks for the transportation modes. However, according to TSA officials, Coast Guard and TSA agreed in 2009 that the maritime modal risk assessment would be addressed in a separate report. TSA also established the Transportation Systems Sector Cybersecurity Working Group, whose meetings (under the Critical Infrastructure Partnership Advisory Council framework) have discussed maritime cybersecurity issues. Although components of the Department of Commerce do have maritime- related efforts under way, none are directly related to the cybersecurity of the port environment. Further, the National Institute of Standards and Technology (NIST) has not developed any specific standards related to the cybersecurity of maritime facilities within our scope. 
NIST has started to work with private sector stakeholders from different critical infrastructure sectors to develop a voluntary framework for reducing cyber risks to critical infrastructure, as directed by Executive Order 13636. It is developing this voluntary framework in accordance with its mission to promote U.S. innovation and industrial competitiveness. The framework has been shaped through ongoing public engagement. According to officials, more than 3,000 people representing diverse stakeholders in industry, academia, and government have participated in the framework’s development through attendance at a series of public workshops and by providing comments on drafts. On February 12, 2014, NIST released the cybersecurity framework. Though representatives from numerous critical infrastructure sectors provided comments on the draft framework, only one maritime entity provided feedback, in October 2013. The entity stated that the framework provided a minimum level of cybersecurity information, but may not provide sufficient guidance to all relevant parties who choose to implement its provisions and suggestions. Additionally, the entity stated that it found the framework to be technical in nature and that it does not communicate at a level helpful for business executives. Department of Commerce officials stated that NIST worked to address these comments in the final version of the framework. The mission of the Department of Transportation is to serve the United States by ensuring a fast, safe, efficient, accessible, and convenient transportation system that meets our vital national interest and enhances the quality of life of the American people. The department is organized into several administrations, including the Research and Innovative Technology Administration, which coordinates the department’s research programs and is charged with advancing the deployment of cross-cutting technologies to improve the nation’s transportation networks. The administration includes the Volpe Center, which partners with public and private organizations to assess the needs of the transportation community, evaluate research and development endeavors, assist in the deployment of state-of-the-art transportation technologies, and inform decision- and policy-making through analyses. Volpe is funded by sponsoring organizations. In 2011, Volpe entered into a 2-year agreement with DHS’s Control Systems Security Program to evaluate the use of control systems in the transportation sector, including the maritime mode. Under this agreement, Volpe and DHS developed a road map to secure control systems in the transportation sector in August 2012. The document discussed the use of industrial control systems in the maritime mode, and described high-level threats. It also established several goals for the entire transportation sector with near- (0-2 years), mid- (2-5 years), and long-term (5-10 years) objectives, metrics, and milestones. Volpe and DHS also developed a cybersecurity standards strategy for transportation industrial control systems, which identified tasks for developing standards for port industrial control systems starting in 2015. Volpe also conducted outreach to various maritime entities. According to Volpe officials, this study was conducted mostly at international port facilities and vessels (though U.S. ports were visited under a different program). The officials stated that the agreement was canceled due to funding reductions resulting from the recent budget sequestration. 
DHS officials gave two reasons why funding for Volpe outreach was terminated after sequestration. First, as part of a reorganization of the Office of Cybersecurity and Communications, there is a heightened focus on “operational” activities, and DHS characterized Volpe’s assistance under the agreement as outreach and awareness. Second, the officials stated that because the demand for incident management and response continues to grow, a decision was made to stop funding Volpe to meet spending cuts resulting from sequestration and increase funding for cyber incident response for critical infrastructure asset owners and operators who use industrial control systems. Although components of the Department of Justice have some efforts under way, most of those efforts occur at the port level. Specifically, the department’s Federal Bureau of Investigation is involved in several initiatives at the local level, focused on interfacing with key port stakeholders as well as relevant entities with state and local governments. These initiatives are largely focused on passing threat information to partners. Additionally, the Bureau’s Infragard program provides a forum to share threat information with representatives from all critical infrastructure sectors, including maritime. While the Department of Defense has recognized the significance of cyber-related threats to maritime facilities, the department has no explicit role in the protection of critical infrastructure within the maritime sub- sector. Officials also said that the department had not supported maritime mode stakeholders regarding cybersecurity. In addition, though the Department of Defense was identified as a member of the Maritime Modal Government Coordinating Council in the 2010 Transportation Systems Sector-Specific Plan, the department was not listed as a participant in the 2013 or 2014 council meetings. Further, DHS, including the U.S. Coast Guard, had not requested support from Defense on cybersecurity of commercial maritime port operations and facilities. Figure 2 provides an overview of the technologies used in the maritime port environment (see interactive fig. 1) and includes the figure’s rollover information. In addition to the contacts named above, key contributions to this report were made by Michael W. Gilmore (Assistant Director), Christopher Conrad (Assistant Director), Bradley W. Becker, Jennifer L. Bryant, Franklin D. Jackson, Tracey L. King, Kush K. Malhotra, Lee McCracken, Umesh Thakkar, and Adam Vodraska. National Preparedness: FEMA Has Made Progress, but Additional Steps Are Needed to Improve Grant Management and Assess Capabilities. GAO-13-637T. Washington, D.C.: June 25, 2013. Communications Networks: Outcome-Based Measures Would Assist DHS in Assessing Effectiveness of Cybersecurity Efforts. GAO-13-275. Washington, D.C.: April 3, 2013. High Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013. Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. Washington, D.C.: September 18, 2012. Maritime Security: Progress and Challenges 10 Years after the Maritime Transportation Security Act. GAO-12-1009T. Washington, D.C.: September 11, 2012. Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012. 
IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012. Homeland Security: DHS Needs Better Project Information and Coordination among Four Overlapping Grant Programs. GAO-12-303. Washington, D.C.: February 28, 2012. Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011. Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011. Coast Guard: Security Risk Model Meets DHS Criteria, but More Training Could Enhance Its Use for Managing Programs and Operations. GAO-12-14. Washington, D.C.: November 17, 2011. Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011. Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011. Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010. Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010. Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009. Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008. Homeland Security: DHS Improved its Risk-Based Grant Programs’ Allocation and Management Methods, But Measuring Programs’ Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008. Maritime Security: Coast Guard Inspections Identify and Correct Facility Deficiencies, but More Analysis Needed of Program’s Staffing, Practices, and Data. GAO-08-12. Washington, D.C.: February 14, 2008. Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
U.S. maritime ports handle more than $1.3 trillion in cargo annually. The operations of these ports are supported by information and communication systems, which are susceptible to cyber-related threats. Failures in these systems could degrade or interrupt operations at ports, including the flow of commerce. Federal agencies—in particular DHS—and industry stakeholders have specific roles in protecting maritime facilities and ports from physical and cyber threats. GAO's objective was to identify the extent to which DHS and other stakeholders have taken steps to address cybersecurity in the maritime port environment. GAO examined relevant laws and regulations; analyzed federal cybersecurity-related policies and plans; observed operations at three U.S. ports selected based on being a high-risk port and a leader in calls by vessel type, e.g. container; and interviewed federal and nonfederal officials. Actions taken by the Department of Homeland Security (DHS) and two of its component agencies, the U.S. Coast Guard and Federal Emergency Management Agency (FEMA), as well as other federal agencies, to address cybersecurity in the maritime port environment have been limited. While the Coast Guard initiated a number of activities and coordinating strategies to improve physical security in specific ports, it has not conducted a risk assessment that fully addresses cyber-related threats, vulnerabilities, and consequences. Coast Guard officials stated that they intend to conduct such an assessment in the future, but did not provide details to show how it would address cybersecurity. Until the Coast Guard completes a thorough assessment of cyber risks in the maritime environment, the ability of stakeholders to appropriately plan and allocate resources to protect ports and other maritime facilities will be limited. Maritime security plans required by law and regulation generally did not identify or address potential cyber-related threats or vulnerabilities. This was because the guidance issued by Coast Guard for developing these plans did not require cyber elements to be addressed. Officials stated that guidance for the next set of updated plans, due for update in 2014, will include cybersecurity requirements. However, in the absence of a comprehensive risk assessment, the revised guidance may not adequately address cyber-related risks to the maritime environment. The degree to which information-sharing mechanisms (e.g., councils) were active and shared cybersecurity-related information varied. Specifically, the Coast Guard established a government coordinating council to share information among government entities, but it is unclear to what extent this body has shared information related to cybersecurity. In addition, a sector coordinating council for sharing information among nonfederal stakeholders is no longer active, and the Coast Guard has not convinced stakeholders to reestablish it. Until the Coast Guard improves these mechanisms, maritime stakeholders in different locations are at greater risk of not being aware of, and thus not mitigating, cyber-based threats. Under a program to provide security-related grants to ports, FEMA identified enhancing cybersecurity capabilities as a funding priority for the first time in fiscal year 2013 and has provided guidance for cybersecurity-related proposals. 
However, the agency has not consulted cybersecurity-related subject matter experts to inform the multi-level review of cyber-related proposals—partly because FEMA has downsized the expert panel that reviews grants. Also, because the Coast Guard has not assessed cyber-related risks in the maritime risk assessment, grant applicants and FEMA have not been able to use this information to inform funding proposals and decisions. As a result, FEMA is limited in its ability to ensure that the program is effectively addressing cyber-related risks in the maritime environment. GAO recommends that DHS direct the Coast Guard to (1) assess cyber-related risks, (2) use this assessment to inform maritime security guidance, and (3) determine whether the sector coordinating council should be reestablished. DHS should also direct FEMA to (1) develop procedures to consult DHS cybersecurity experts for assistance in reviewing grant proposals and (2) use the results of the cyber-risk assessment to inform its grant guidance. DHS concurred with GAO's recommendations.
You are an expert at summarizing long articles. Proceed to summarize the following text: VA operates one of the nation’s largest health care systems to provide care to approximately 5.2 million veterans who receive health care through 158 VA medical centers (VAMC) and almost 900 outpatient clinics nationwide. The VA health care system also consists of nursing homes, residential rehabilitation treatment programs, and readjustment counseling centers. In 1986 Congress authorized VA to collect payments from third-party health insurers for the treatment of veterans with nonservice-connected disabilities, and it also established copayments from veterans for this care. Funds collected were deposited into the U.S. Treasury as miscellaneous receipts and were not made specifically available to VA to supplement its medical care appropriations. The Balanced Budget Act of 1997 established a new fund in the U.S. Treasury, the Department of Veterans Affairs Medical Care Collections Fund, and authorized VA to use funds in this account to supplement its medical care appropriations. As part of VA’s 1997 strategic plan, VA expected that collections from first- and third-party payments would cover the majority of the cost of care provided to veterans for nonservice-connected disabilities. VA has determined that some of these veterans, about 25 percent of VA’s user population in fiscal year 2002, were required to pay a copayment because of their income levels. In September 1999, VA adopted a fee schedule, called “reasonable charges,” which are itemized fees based on diagnoses and procedures. This schedule allows VA to more accurately bill for the care provided. To implement this, VA created additional bill-processing functions— particularly in the areas of documenting care, coding the care, and processing bills for each episode of care. To collect from health insurers, VA uses a four-function process with 13 activities to manage the information needed to bill and collect third-party payments—also known as the MCCF revenue cycle (see fig. 1). First, the intake revenue cycle function involves gathering insurance information on the patient and verifying that information with the insurer as well as collecting demographic data on the veteran. Second, the utilization review function involves precertification of care in compliance with the veteran’s insurance policy, including continued stay reviews to obtain authorization from third-party insurers for payment. Third, the billing function involves properly documenting the health care provided to patients by physicians and other health care providers. Based on the physician documentation, the diagnoses and medical procedures performed are coded. VA then creates and sends bills to insurers based on the insurance and coding information obtained. Finally, the collections, or accounts receivable, function includes processing payments from insurers and following up on outstanding or denied bills. See appendix II for a description of the activities that take place within each of the four functions. VA’s Chief Business Office utilizes a performance measure—an efficiency rating it refers to as “cost to collect”—that reflects VA’s cost to collect one dollar from first- and third-parties. To calculate the efficiency rating VA divides the costs of generating a bill and collecting payments from veterans and private health insurers by the actual revenue received from these sources. 
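As a rough illustration of this efficiency rating (and not VA's actual computation), the sketch below divides hypothetical cost-center totals by hypothetical first- and third-party revenue; the function name and every dollar figure are invented for the example.

```python
# Illustrative sketch of a "cost to collect" efficiency rating:
# (field office costs + central office costs) / (revenue collected).
# All dollar figures here are hypothetical.

def cost_to_collect(field_office_costs: float,
                    central_office_costs: float,
                    first_party_revenue: float,
                    third_party_revenue: float) -> float:
    """Return the cost of collecting one dollar of first- and third-party revenue."""
    total_costs = field_office_costs + central_office_costs
    total_revenue = first_party_revenue + third_party_revenue
    if total_revenue == 0:
        raise ValueError("No revenue collected; the rating is undefined.")
    return total_costs / total_revenue

# Hypothetical example: $38 million in collection costs against $1.2 billion collected.
rating = cost_to_collect(30_000_000, 8_000_000, 400_000_000, 800_000_000)
print(f"Cost to collect one dollar: ${rating:.3f}")  # about $0.032
```

The report's underlying point is that if some collection costs are omitted from the numerator, or are allocated inconsistently across VISNs, the resulting rating understates what it actually costs VA to collect a dollar.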
To measure the cost, cost data are extracted from two financial accounts, or cost centers, which are intended to capture field office costs and central office costs. Specifically, cost centers are used for classifying costs related to each of the 13 functional activities and the organizations that support these activities. According to an official with the Healthcare Financial Management Association, because business practices differ among entities, there are many variables that entities include in their calculations of the cost for collecting funds from first and third parties. Thus, a comparison of collection efficiency—the cost to collect one dollar—between different entities would be difficult. However, according to the official, it is reasonable to expect that business practices within the same organization such as the VA can be standardized, which would facilitate such a comparison internally. The VA health care system has unique rules and regulations governing its billing practices. For instance, VA is generally not authorized to bill Medicare or Medicaid for care provided to Medicare- or Medicaid-eligible veterans. VA must pay for all inpatient and outpatient care associated with a service-connected disability—it cannot collect copayments or bill third-party insurers for this care. VA uses third-party collections to satisfy veterans' first-party debt. Specifically, if VA treats an insured veteran for a nonservice-connected disability, and the veteran is also determined by VA to have copayment responsibilities, VA will apply each dollar collected from the insurer to satisfy the veteran's copayment debt related to that treatment. As we stated in a previous report to Congress, the statutes governing VA recoveries from private health insurers and veteran copayments do not clearly specify the relationship between the two provisions. In the absence of definitive guidance in the law, VA's General Counsel has determined that insurance recoveries should be used to satisfy veterans' copayment debt. The law and the relevant legislative history are not clear on whether third-party collections can be used for this purpose. VA has not provided guidance to the Chief Business Office or VISNs for accounting for the costs associated with collecting payments from veterans and private health insurers. As a result, we found that VA's Chief Business Office and VISNs did not allocate certain costs associated with activities related to collecting first- and third-party payments to the two cost centers used in the calculation of cost to collect. In addition, we found inconsistencies in the way VISNs allocated these costs to the field office cost center. Consequently, reported costs to collect are inaccurate. We found that some costs incurred by VA's central office as part of its efforts for collecting first- and third-party payments were not allocated to the central office cost center. For example, the following activities are costs incurred by organizations that support the Chief Business Office, but are not included in the central office cost center: Staff at the Health Eligibility Center spend a portion of their time determining veterans' copayment status. Staff at the Health Revenue Center processed first-party refunds resulting from a settlement with a third-party payer regarding claims submitted from January 1995 through December 2001. VA reported that about 15 full-time staff members are dedicated to this effort. 
Staff assigned to Health Informatics assisted with contractor-developed software to review third-party claims for accuracy. Some costs incurred by field locations also were not always allocated to the field office cost center. Cost allocation differences occur because VA does not provide guidance to its field locations on which costs to allocate to specific cost centers. Thus, each of VA’s health care VISNs makes a determination as to which cost center to use when allocating costs for specific revenue cycle functions—such as patient intake and registration and utilization review. Figure 2 shows inconsistencies among VISNs in the way they allocate costs to some of the activities within the revenue cycle functions. For example, for precertification and certification activities within the utilization review function, 13 VISNs allocated all of the cost, 3 VISNs allocated some costs, and 5 VISNs allocated none of the cost to the field office cost center. In addition, the following are examples of costs that are related to collection activities but were not included in the costs for collecting payments: A veteran call center in VISN 8 (Bay Pines, Florida)—staffing resources valued at about $635,000 designed to assist veterans with questions about bills they receive and, if necessary, the arrangement of payment plans. Two service contracts in VISN 2 (Albany, New York)—approximately $470,000 in contract expenses for collecting third-party payments and a service contract estimated at $104,000 for insurance verification. Two service contracts in VISN 10 (Cincinnati, Ohio)—approximately $100,000 in contract expenses to use a software package that reviews claims sent to third-party insurers for technical accuracy. Also not included was an estimated $425,000 to license the use of insurance identification and verification software. In an attempt to standardize how MCCF staff carry out the revenue cycle functions and to instill fiscal discipline throughout its entire health system, VA is piloting the Patient Financial Services System (PFSS) in VISN 10 (Cincinnati, Ohio). PFSS is a financial software package that contains individual patient accounts for billing purposes. According to the Chief Business Office, the system is a key element to standardize MCCF operations throughout the entire VA health care system. PFSS is expected to improve first- and third-party collections by capturing and consolidating inpatient and outpatient billing information. However, PFSS is not currently designed to capture the cost of staff time for these activities—a key element for assessing the efficiency of VA’s collection efforts. VA’s practice of satisfying veterans’ copayment debt with collections from third-party insurers has reduced overall collections and increased administrative expenses. VA does not quantify the lost revenue from veterans’ copayments that is not collected and could be used to supplement its medical care appropriation. Based on interviews with network officials and site visits to individual medical facilities, we did not discover any locations that track the volume of first-party debt that is not collected and its relative dollar value. Hence, the exact dollar value of first-party revenue that was not collected is unknown. Seventeen of the 21 network officials we interviewed stated that considerable administrative time is dedicated to the process required to satisfy first-party debt with third-party collections—resources that could be invested elsewhere if the practice did not exist. 
One facility official estimated that approximately 5 full-time equivalent staff are used to satisfy first-party debt. Furthermore, one VISN official estimated that its medical facilities use approximately 11 full-time equivalent staff on this process. Collections staff routinely receive insurance payments that include voluminous reports that detail each claim. For example, one medical center provided us with a report that contained approximately 1,000 line items, each representing a pharmaceutical reimbursement. Staff at the medical center must sort through each line item and manually match it to a claim in the veteran’s file to determine if the veteran was charged a copayment for the pharmaceutical. In those cases where VA receives a reimbursement and the veteran was charged a copayment, VA will issue a credit or refund to the veteran—in the case of pharmaceuticals this amount can be up to $7. VA will delay billing copayments to veterans with private health insurance for 90 days to allow time for the insurer to reimburse VA. However, when insurers reimburse VA after the 90-day period, VA must absorb the cost of additional staff time for processing a refund if the veteran has already paid the bill. In our 1997 report, we discussed that VA’s practice of satisfying copayment debt with recoveries made from third-party insurers has resulted in reduced overall cost recoveries and increased administrative expense. In the report we suggested that Congress consider clarifying the cost recovery provisions of title 38 of the U.S. Code to direct VA to collect copayments from patients regardless of any amounts recovered from private health insurance except in instances where the insurer pays the full cost of VA care. VA does not provide guidance to its Chief Business Office and VISNs for accounting for the costs associated with collecting payments from private health insurers and veterans. As a result, VA’s Chief Business Office and VISNs did not allocate certain costs associated with activities related to collecting first- and third-party payments to the two cost centers used by the Chief Business Office in its calculation of cost to collect. In addition, we found inconsistencies in the way VISNs allocated these costs to the field office cost center. Consequently, VA’s reported cost-to-collect measure is inaccurate. Furthermore, VA has determined that it should use collections from private health insurers to satisfy veteran copayment debt. The law is silent on this point. VA’s determination has resulted in increased administrative expenses and reduced overall collections, thus making fewer dollars available for veteran health care. We believe our previous suggestion to Congress—that it consider clarifying the cost recovery provisions of title 38 of the U.S. Code to direct VA to collect copayments from patients regardless of any amounts recovered from private health insurance except in instances where the insurer pays the full cost of VA care—is still valid. Such action would reduce the administrative burden on VA staff, reduce VA administrative expenses, and allow VA to maximize collections to help meet its costs for providing health care. To accurately determine and report the cost to collect first- and third-party payments, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to provide guidance for standardizing and consistently applying across VA the accounting of costs associated with collecting payments from veterans and private health insurers. 
We provided a draft of this report to VA for comment. In oral comments, an official in VA's Office of Congressional and Legislative Affairs informed us that VA concurred with our recommendation. We are sending copies of this report to the Secretary of Veterans Affairs, interested congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please call me at (202) 512-7101 or Michael T. Blair, Jr., at (404) 679-1944. Michael Tropauer and Aditi Shah Archer contributed to this report. To do our work, we reviewed our prior work and Department of Veterans Affairs (VA) Office of Inspector General reports on VA's first- and third-party revenue collection for the Medical Care Collections Fund (MCCF). We obtained and reviewed copies of VA policies and regulations governing these collection activities. Also, we reviewed statements by the Federal Accounting Standards Advisory Board on managerial cost accounting concepts and standards for the federal government. We interviewed officials at VA's Chief Business Office, which provides policy guidance to MCCF field staff, and obtained information on what they consider to be direct expenses of collecting first- and third-party revenue and documentation on how they calculate the cost to collect first- and third-party revenue. This information was validated through telephone interviews of key officials at each of VA's 21 networks and site visits to 4 medical facilities. Also, we obtained information on the organizational structure for each network and its medical facilities and obtained the views of VISN and medical facility officials on the accuracy of the Chief Business Office's cost reporting. In addition, we obtained information from the Healthcare Financial Management Association on other health care industry practices for reporting the cost to collect first- and third-party payments. Regarding the practice of satisfying first-party debt with third-party revenue, we reviewed past opinions and decisions by VA's Office of General Counsel, applicable laws and regulations, and existing GAO matters for consideration. We also discussed the implementation of VA's Office of General Counsel's decisions with staff from VA's Chief Business Office and medical facilities. In 1986, Congress authorized VA to collect payments from third-party health insurers for the treatment of veterans with nonservice-connected disabilities, and it also established copayments for this care. Funds collected were deposited into the U.S. Treasury as miscellaneous receipts and not made specifically available to the VA to supplement its medical care appropriations. The Balanced Budget Act of 1997 established a new fund in the U.S. Treasury, the Department of Veterans Affairs Medical Care Collections Fund, and authorized VA to use funds in this account to supplement its medical care appropriations. To collect from health insurers, VA uses a four-function process with the following 13 activities to bill and collect third-party payments. 1. Patient Registration: Collecting patient demographic information, determining eligibility for health care benefits, ascertaining financial status, and obtaining consent for release of medical information. 2. Insurance Identification: Obtaining insurance information from veteran, spouse, or employer. 3. 
Insurance Verification: Confirming patient insurance information and contacting third-party insurer for verification of coverage and benefit structure. 4. Precertification and Certification: Contacting third-party insurer to obtain payment authorization for VA-provided care. 5. Continued Stay Reviews: Reviewing clinical information and obtaining payment authorization from third-party insurer for continuation of care. 6. Coding and Documentation: Reviewing and assigning appropriate codes to document diagnosis of patient ailment and treatment procedures and validating information documented by the physician. 7. Bill Creation: Gathering pertinent data for bills; authorizing and generating bills; and submitting bills to payers. 8. Claims Correspondence and Inquiries: Providing customer service for veterans, payers, Congress, and VA Regional Counsel. 9. Establishment of Receivables: Reviewing outstanding claims sent to third-party insurers and identifying amount of payment due to VA for collection follow-up work. 10. Payment Processing: Reviewing, posting, and reconciling payment received. 11. Collection Correspondence and Inquiries: Following up with payers; resolving first-party bankruptcies, hardships and waivers; processing refund requests, repayment plans, and returned checks; referring claims to utilization review; and generating probate action. 12. Referral of Indebtedness: Referring delinquent first-party debt to the U.S. Treasury for collection against any future government payment to the veteran, such as reducing an income tax refund by the amount of the first-party debt. 13. Appeals: Receiving notification of partial or nonpayment from the third-party insurer, reviewing documentation, initiating an appeal to the third-party insurer for payment, and following up for appropriate payment. VA Health Care: VA Increases Third-Party Collections as It Addresses Problems in Its Collections Operations. GAO-03-740T. Washington, D.C.: May 7, 2003. VA Health Care: Third-Party Collections Rising as VA Continues to Address Problems in Its Collections Operations. GAO-03-145. Washington, D.C.: January 31, 2003. VA Health Care: VA Has Not Sufficiently Explored Alternatives for Optimizing Third-Party Collections. GAO-01-1157T. Washington, D.C.: September 20, 2001. VA Health Care: Third-Party Charges Based on Sound Methodology; Implementation Challenges Remain. GAO/HEHS-99-124. Washington, D.C.: June 11, 1999. VA Medical Care: Increasing Recoveries From Private Health Insurers Will Prove Difficult. GAO/HEHS-98-4. Washington, D.C.: October 17, 1997.
During a May 2003 congressional hearing, questions were raised about the accuracy of the Department of Veterans Affairs' (VA) reported costs for collecting payments from veterans and private health insurers for its Medical Care Collections Fund (MCCF). Congress also had questions about VA's practice of using third-party collections to satisfy veterans' first-party debt. GAO's objectives were to determine: (1) the accuracy of VA's reported cost for collecting first- and third-party payments from veterans and private health insurers, and (2) how VA's practice of satisfying first-party debt with third-party payments affects the collections process. VA has not provided guidance to its Chief Business Office and Veterans Integrated Service Networks (VISN) for accounting for the costs associated with collecting payments from veterans and private health insurers. As a result, GAO found that the Chief Business Office and VISNs excluded some costs associated with collecting first- and third-party payments. In addition, GAO found inconsistencies in the way VISNs allocate these costs. Consequently, VA's reported costs to collect are inaccurate. VA's practice of satisfying--or paying for--first-party, or veterans' copayment debt, with collections from third-party insurers has resulted in a reduction in overall collections and increased administrative expenses due to the reconciliation process. VA has taken the position that payments made from third-party insurers should be used to satisfy veterans' first-party debt. The law and legislative history are not clear on whether third-party collections can be used for this purpose.
You are an expert at summarizing long articles. Proceed to summarize the following text: ACIP, commonly referred to as flight pay, is intended as additional pay to attract and retain officers in a military aviation career. The amount of ACIP varies from $125 a month for an aviator with 2 years or less of aviation service to $650 a month for 6 years to 18 years of service. After 18 years, the amount gradually decreases from $585 a month to $250 a month through year 25. After 25 years, aviators do not receive ACIP unless they are in operational flying positions. ACP, which has existed for all services since 1989, is considered a bonus and is intended to entice aviators to remain in the service during the prime of their flying career. An ACP bonus can be given to aviators below the grade O-6 with at least 6 years of aviation service and who have completed any active duty service commitment incurred for undergraduate aviator training. However, it cannot be paid beyond 14 years of commissioned service. The services believe that it is during the 9-year to 14-year period of service that aviators are most sought after by the private sector airlines. Therefore, to protect their aviation training investment, all services, except the Army, which is currently not using the ACP program, offer ACP contracts to experienced aviators. In fiscal year 1996, the Army, the Navy, the Marine Corps, and the Air Force designated 11,336 positions as nonflying positions to be filled by aviators. These nonflying positions represent about 25 percent of all authorized aviator positions. As shown in table 1, the total number of nonflying positions has decreased since fiscal year 1994 and is expected to continue to decrease slightly through fiscal year 2001. Service officials told us that they have been able to reduce the number of nonflying positions primarily through force structure reductions and reorganization of major commands. The services, however, have not developed criteria for determining whether there are nonflying positions that could be filled by nonaviators. The officials said that a justification is prepared for each nonflying position explaining why an aviator is needed for the position. These justifications are then approved by higher supervisory levels. The officials believe that this process demonstrates that the position must be filled by an aviator. In our view, the preparation of a written justification for filling a particular position with an aviator does not, in and of itself, demonstrate that the duties of a position could not be performed by a nonaviator. Because the services' position descriptions for nonflying positions do not show the specific duties of the positions, we could not determine whether all or some part of the duties of the nonflying positions can only be performed by aviators. Consequently, we could not determine whether the number of nonflying positions could be further reduced. In commenting on a draft of this report, an Air Force official said that the Air Force Chief of Staff has directed that all nonflying positions be reviewed and a determination made by July 1997 as to which positions can be filled by nonaviators. All aviators receive ACIP, regardless of whether they are in flying or nonflying positions, if they meet the following criteria. Eight years of operational flying during the first 12 years of aviation service entitles the aviator to receive ACIP for 18 years. 
Ten years of operational flying during the first 18 years of aviation service entitles the aviator to receive ACIP for 22 years. Twelve years of operational flying during the first 18 years of aviation service entitles the aviator to receive ACIP for 25 years. ACP criteria are more flexible than ACIP in deciding who receives it, the amount paid, and the length of the contract period. According to service officials, ACP is an added form of compensation that is needed to retain aviators during the prime of their flying career when the aviators are most attractive to private sector airlines. To protect their training investment, all the services believe it is necessary to offer ACP contracts. The Army does not offer ACP contracts because, according to Army officials, it has not had a pilot retention problem. For fiscal years 1994 through April 30, 1996, the Army, the Navy, the Marine Corps, and the Air Force made ACIP and ACP payments to their aviators totaling $909.1 million. Of this total amount, $211 million, or about 23 percent, was paid to aviators in nonflying positions by the Air Force, the Navy, and the Marine Corps. The following table shows ACIP and ACP payments by each service for each of the fiscal years. The services view ACP as a retention incentive for their experienced aviators. However, the way the services implement this incentive varies widely in terms of who receives ACP, the length of time over which it is paid, and how much is paid. To illustrate, The Army does not offer ACP to its aviators because it has not had a pilot retention problem that warrants the use of the ACP program. The Navy offers long-term ACP contracts of up to 5 years and a maximum of $12,000 a year to eligible pilots in aircraft types with a critical pilot shortage. The Marine Corps offered short-term ACP contracts of 1 or 2 years at $6,000 a year through fiscal year 1996. Beginning in fiscal year 1997, the Marine Corps plans to offer long-term ACP contracts of up to 5 years at $12,000 a year to its eligible pilots and navigators in aircraft types that have critical personnel shortages. The Air Force offers long-term ACP contracts of up to 5 years at a maximum of $12,000 a year to all eligible pilots if there is a pilot shortage for any fixed- or rotary-wing aircraft. Table 3 shows the number and dollar amount of ACP contracts awarded by the services for fiscal years 1994 through 1996. As shown above, the Air Force greatly exceeds the other services in the number of ACP contracts awarded as well as the value of the contracts. This is because the Air Force does not restrict ACP contracts just to pilots of particular aircraft that are experiencing critical pilot shortages. Instead, if there is an overall shortage in fixed-wing or rotary-wing pilots, all eligible pilots in those respective aircraft are offered ACP. According to Air Force officials, the reason for offering ACP contracts to all fixed-wing and/or rotary-wing pilots rather than specific aircraft is because they want to treat all their pilots equally and not differentiate between pilots based on the type of aircraft they fly. In their opinion, if they were to only offer ACP to pilots of certain aircraft types, morale could be adversely affected. The point in an aviator’s career at which ACP is offered generally coincides with completion of the aviator’s initial service obligation—generally around 9 years. 
By this time, the aviator has completed pilot or navigator training and is considered to be an experienced aviator, and according to service officials, is most sought after by private sector airlines. For this reason, the services believe that awarding an ACP contract is necessary to protect their training investment and retain their qualified aviators. For example, the Air Force estimates that by paying ACP to its pilots, it could retain an additional 662 experienced pilots between fiscal years 1995 and 2001. Whether ACP is an effective or necessary retention tool has been called into question. For example, an April 1996 Aviation Week and Space Technology article pointed out that in the previous 7 months, 32 percent of the 6,000 new pilots hired by private sector airlines were military trained pilots. This is in contrast with historical airline hiring patterns where 75 percent of the airline pilots were military pilots. The concern about military pilots being hired away by the airlines was also downplayed in a June 1995 Congressional Budget Office (CBO) report. The report stated that employment in the civilian airlines sector is far from certain. Airline mergers, strikes, or failures have made the commercial environment less stable than the military. Consequently, military aviators may be reluctant to leave the military for the less stable employment conditions of the airline industry. CBO concluded that short-term civilian sector demands for military pilots may not seriously affect the services' ability to retain an adequate number of pilots. The services include nonflying positions in their aviator requirements for determining future aviator training needs. Therefore, aviator training requirements reflect the number of aviators needed to fill both flying and nonflying positions. As shown in table 4, of all the services, the Air Force plans the largest increase in the number of aviators it will train between fiscal years 1997 and 2001—a 60-percent increase. The reason for the large increase in Air Force aviator training is that the Air Force believes that the number of aviators trained in prior years was insufficient to meet future demands. Because nonflying positions are included in the total aviator requirements, the Navy and the Marine Corps project aviator shortages for fiscal years 1997-2001 and the Air Force projects aviator shortages for fiscal years 1998-2001. As shown in table 5, there are more than enough pilots and navigators available to meet all flying position requirements. Therefore, to the extent that the number of nonflying positions filled by aviators could be reduced, the number of aviators that need to be trained, as shown in table 4, could also be reduced. This, in turn, would enable the Navy, the Marine Corps, and the Air Force to reduce their aviator training costs by as much as $5 million for each pilot and $2 million for each navigator that the services would not have to train. The savings to the Army would be less because its aviator training costs are about $366,000 for each pilot. We recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force to develop criteria and review the duties of each nonflying position to identify those that could be filled by nonaviators. This could allow the services to reduce total aviator training requirements. 
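To convey the scale of the potential savings described above, the sketch below multiplies the per-person training costs cited in the report by a purely hypothetical number of eliminated training requirements; the position counts are illustrative assumptions, not GAO or service estimates.

```python
# Rough illustration of avoided training costs if some nonflying positions were
# filled by nonaviators. Per-person training costs are the figures cited in the
# report; the numbers of eliminated training requirements are hypothetical.

PILOT_TRAINING_COST = 5_000_000      # Navy, Marine Corps, and Air Force pilot
NAVIGATOR_TRAINING_COST = 2_000_000  # Navy, Marine Corps, and Air Force navigator
ARMY_PILOT_TRAINING_COST = 366_000   # Army pilot

def avoided_training_cost(pilots: int, navigators: int, army_pilots: int) -> int:
    """Training dollars not spent for each aviator training requirement eliminated."""
    return (pilots * PILOT_TRAINING_COST
            + navigators * NAVIGATOR_TRAINING_COST
            + army_pilots * ARMY_PILOT_TRAINING_COST)

# Hypothetical example: 100 pilot and 50 navigator training requirements eliminated.
print(f"${avoided_training_cost(100, 50, 0):,}")  # $600,000,000
```

Even modest reductions in the number of nonflying positions that must be filled by aviators would translate into substantial avoided training costs, which is the basis for the recommendation above.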
In view of the recent articles and studies that raise questions about the need to incentivize aviators to remain in the service, the abundance of aviators as compared to requirements for flying positions, and the value of ACP as a retention tool, we recommend that the Secretary of Defense direct the service secretaries to reevaluate the need for ACP. If the reevaluation points out the need to continue ACP, we recommend that the Secretary of Defense determine whether the services should apply a consistent definition in deciding what groups of aviators can receive ACP. In commenting on a draft of this report, Department of Defense (DOD) officials said that the department partially agreed with the report and the recommendations. However, DOD also said that the report raises a number of concerns. DOD said that it did not agree that only flying positions should be considered in determining total aviator requirements. In its opinion, operational readiness dictates the need for aviator expertise in nonflying positions, and nonflying positions do not appreciably increase aviator training requirements. The report does not say or imply that only flying positions should be considered in determining total aviator requirements. The purpose of comparing the inventory of aviators to flying positions was to illustrate that there are sufficient pilots and navigators to meet all current and projected flying requirements through fiscal year 2001. We agree with DOD that those nonflying positions that require aviator expertise should be filled with aviators. The point, however, is that the services have not determined that all the nonflying positions require aviator expertise. Furthermore, to the extent that nonflying positions could be filled by nonaviators, the aviator training requirements could be reduced accordingly. DOD also said that the report, in its opinion, does not acknowledge the effectiveness of the processes used for determining aviator training requirements or the use of ACP in improving pilot retention. The issue is not whether ACP has improved retention—obviously it has—but whether ACP is needed in view of the data showing that the civilian airline sector is becoming less dependent on military trained pilots and that military pilots are becoming less likely to leave the service to join the civilian sector. DOD further commented that the articles cited in the report as pointing to a decrease in civilian sector demand for military trained pilots contain information that contradicts this conclusion. DOD believes that the fact that the airlines are currently hiring a smaller percentage of military trained pilots is an indication of a decrease in pilot inventory and the effectiveness of ACP as a retention incentive. The articles cited in our report—Aviation Week and Space Technology and the June 1995 CBO report—do not contain information that contradicts a decreasing dependence on military trained pilots. The Aviation Week and Space Technology article points out that about 70 percent of the recent pilot hires by the civilian airlines have been pilots with exclusively civilian flying backgrounds. This contrasts with previous hiring practices where about 75 percent were military trained pilots. The CBO report also discusses expected long-term hiring practices in the civilian airline sector. 
The report points out that while the number of new hires is expected to double (from 1,700 annually to 3,500 annually) between 1997 and 2000, the Air Force's efforts to retain its pilots may not be affected because the industry's new pilots could be drawn from an existing pool of Federal Aviation Administration-qualified aviators. Furthermore, the issue is not whether the pilot inventory is decreasing and whether ACP is an effective retention tool. The point of the CBO report was that because of private sector airline mergers, strikes, or failures, the commercial environment is less stable than the military. As a result, there is a ready supply of pilots in the civilian sector and the short-term demands for military pilots may be such that the Air Force's quest to retain an adequate number of pilots is not seriously affected. In commenting on why the Air Force's method of offering ACP contracts differs from the Navy's and the Marine Corps' methods, DOD stated that while morale and equity are vital to any retention effort, they are not the primary determinants in developing ACP eligibility. We agree, and the report is not meant to imply that morale and equity are the primary determinants for developing ACP eligibility. The report states that the reason cited by Air Force officials for not restricting ACP contracts to just those pilots in aircraft that have personnel shortages, as do the Navy and the Marine Corps, is the morale and equity issue. Another reason cited by Air Force officials was the interchangeability of its pilots. However, the Navy and the Marine Corps also have pilot interchangeability. Therefore, interchangeability is not a unique feature of the Air Force. DOD agreed with the recommendation that the services review the criteria and duties of nonflying aviator positions. However, DOD did not agree that the nonflying positions should be filled with nonaviators or that doing so would appreciably reduce aviator training requirements. DOD also agreed with the recommendation that the services need to continually review and reevaluate the need for ACP, including whether there should be a consistent definition in deciding what groups of aviators can receive ACP. In DOD's opinion, however, this review and affirmation of the continued need for ACP is already being done as part of the services' response to a congressional legislative report requirement. We agree that the services report annually on why they believe ACP is an effective retention tool. However, the reports do not address the essence of our recommendation that the need for ACP—a protection against losing trained pilots to the private sector—should be reevaluated in view of recent studies and reports that show that private sector airlines are becoming less dependent on military trained pilots as a primary source of new hires. The annual reports to Congress also do not address the issue of why the Air Force, unlike the Navy and the Marine Corps, does not restrict ACP to those aviators in aircraft that have aviator personnel shortages. The complete text of DOD's comments is in appendix II. 
We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director, Office of Management and Budget; and the Chairmen and the Ranking Minority Members, House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, House and Senate Committees on Appropriations, House Committee on National Security, Senate Committee on Armed Services, and House and Senate Committees on the Budget. Please contact me on (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix III. To accomplish our objectives, we reviewed legislation, studies, regulations, and held discussions with service officials responsible for managing aviator requirements. Additionally, we obtained data from each of the services’ manpower databases to determine their flying and nonflying position requirements. Using this information, we developed trend analyses comparing the total number of aviator positions to the nonflying positions for fiscal years 1994-2001. The Army was not able to provide requirements data for fiscal years 1994 and 1995. To determine the benefits paid to aviators serving in nonflying positions, we obtained an automated listing of social security numbers for all aviators and, except for the Army, the services identified the aviators serving in nonflying positions. The data were submitted to the appropriate Defense Financial Accounting System offices for the Army, the Air Force, and the Marine Corps to identify the amounts of aviation career incentive pay (ACIP) and aviation continuation pay (ACP) paid to each aviator. The Navy’s financial data was provided by Defense Manpower Data Center. To assess whether the services implement ACIP and ACP uniformly, we obtained copies of legislation addressing how ACIP and ACP should be implemented and held discussions with service officials to obtain and compare the methodology each service used to implement ACIP and ACP. To determine how the services compute aviator requirements and the impact their flying and nonflying requirements have on training requirements, we held discussions with service officials to identify the methodology used to compute their aviator and training requirements. We also obtained flying and nonflying position requirements, available inventory, and training requirements from the services’ manpower databases. We then compared the flying and nonflying requirements to the respective services’ available aviator inventory to identify the extent that the available inventory of aviators could satisfy aviator requirements. We performed our work at the following locations. Defense Personnel and Readiness Military Personnel Policy Office, Defense Financial Accounting System, Kansas City, Missouri; Denver, Colorado; and Indianapolis, Indiana; Defense Manpower Data Center, Seaside, California; Air Force Directorate of Operations Training Division, Washington, D.C.; Air Force Personnel Center, Randolph Air Force Base, Texas; Air Force Directorate of Personnel Military Compensation and Legislation Division and Rated Management Division, Washington, D.C.; Air Combat Command, Langley Air Force Base, Virginia; Bureau of Naval Personnel, Office of Aviation Community Management, Navy Total Force Programming, Manpower and Information Resource Management Division, Washington, D.C.; Navy Manpower Analysis Team, Commander in Chief U.S. 
Atlantic Fleet, Marine Corps Combat Development Command, Force Structure Division, Marine Corps Deputy Chief of Staff for Manpower and Reserve Affairs Department, Washington, D.C.; Army Office of the Deputy Chief of Staff for Plans Force Integration and Analysis, Alexandria, Virginia; Army Office of the Deputy Chief of Staff for Personnel, Washington, D.C.; Congressional Budget Office, Washington, D.C. We performed our review from March 1996 to December 1996 in accordance with generally accepted government auditing standards. Norman L. Jessup, Jr. Patricia F. Blowe Patricia W. Lentini
GAO reviewed certain Department of Defense (DOD) nonflying positions, focusing on: (1) the number of aviators (pilots and navigators) that are assigned to nonflying positions in the Army, Navy, Marine Corps, and Air Force; (2) the amount of aviation career incentive pay (ACIP) and aviation continuation pay (ACP) paid to aviators in nonflying positions; (3) whether the services implement ACIP and ACP uniformly; and (4) whether the nonflying positions affect the number of aviators the services plan to train to meet future requirements. GAO found that: (1) for fiscal year (FY) 1996, the Army, Navy, Marine Corps, and Air Force designated 11,336 positions, or about 25 percent of all aviator positions, as nonflying positions to be filled by aviators; (2) since FY 1994, the number of nonflying positions has decreased and this decrease is expected to continue through 2001 when the number of such positions is estimated to be 10,553; (3) for fiscal years 1994 through April 30, 1996, the Army, Navy, Marine Corps, and Air Force paid $739.7 million in ACIP, of which $179.1 million was paid to aviators in nonflying positions; (4) additionally, the Navy, Marine Corps, and Air Force paid $169.4 million in ACP, of which $31.9 million was paid to aviators in nonflying positions; (5) the Army does not pay ACP; (6) ACIP is payable to all aviators who meet certain flying requirements and all the services implement it in a consistent fashion; (7) with ACP, however, the services have a great deal of latitude in deciding who receives it, the length of time it is paid and the amount that is paid; (8) in determining their aviator training requirements, the services consider both flying and nonflying positions; (9) including nonflying positions increases the total aviator requirements and results in the services projecting aviator shortages in the upcoming fiscal years; (10) however, GAO's analysis showed that there are more than enough aviators available to satisfy all flying position requirements; (11) to the extent that the number of nonflying positions filled by aviators can be reduced, the number of aviators that need to be trained also could be reduced, saving training costs of about $5 million for each Navy, Marine Corps, and Air Force pilot candidate and about $2 million for each navigator candidate; and (12) the savings to the Army would be about $366,000 for each pilot training requirement eliminated.
You are an expert at summarizing long articles. Proceed to summarize the following text: Coastal properties in the United States that lie on the Atlantic Ocean and the Gulf of Mexico are at risk of both flood and wind damage from hurricanes. One study put the estimated insured value of coastal property in states on these coasts at $7.2 trillion as of December 2004, and populations in these areas are growing. Property owners can obtain insurance against losses from wind damage through private insurance markets or, in high-risk coastal areas in some states, through state wind insurance programs. Flood insurance is generally excluded from such coverage, but property owners can obtain insurance against losses from flood damage through NFIP, which was established by the National Flood Insurance Act of 1968. As we have reported, insurance coverage gaps and claims uncertainties can arise when coverage for hurricane damage is divided among multiple policies because the extent of coverage under each policy depends on the cause of the damages, as determined through the claims adjustment process and the policy terms that cover a particular type of damage. This adjustment process is complicated when a damaged property has been subjected to a combination of high winds and flooding and evidence at the damage scene is limited. Other claims concerns can arise on such properties when the same insurer serves as both the NFIP’s WYO insurer and the property-casualty (wind) insurer. In such cases, the same company is responsible for determining damages and losses to itself and to the NFIP, creating an inherent conflict of interest. H.R. 3121, the Flood Insurance Reform and Modernization Act of 2007, set an effective date for its proposed flood and wind insurance program of June 28, 2008. A version of this bill, S. 2284, was introduced in the Senate in November of 2007, but this version did not include provisions that would establish a federal flood and wind program. As of March 2008, no additional action had been taken on S. 2284. In a September 26, 2007, Statement of Administration Policy regarding H.R. 3121, the Executive Office of the President stated that the Administration strongly opposes the expansion of NFIP to include coverage for windstorm damage. H.R. 3121’s provisions include the following: In order for individual property owners to be eligible to purchase federal flood and wind coverage, their communities must have adopted adequate mitigation measures that the Director of FEMA finds are consistent with the International Code Council’s building codes for wind mitigation. The Director of FEMA is expected to carry out studies and investigations to determine appropriate wind hazard prevention measures, including laws and regulations relating to land use and zoning; establish criteria based on this work to encourage adoption of adequate state and local measures to help reduce wind damage; and work closely with and provide any technical assistance to state and local governmental agencies to encourage the application of these criteria and the adoption and enforcement of these measures. Property owners who purchase a combined federal flood and wind insurance policy cannot also purchase an NFIP flood insurance policy. Federal flood and wind insurance will cover losses only from physical damage from flood and windstorm (including hurricanes, tornadoes, and other wind events), but no distinction between flood and wind damage need be made in order for claims to be paid. 
Premium rates are to be based on risk levels and accepted actuarial principles and will include all operating costs and administrative expenses. Residential property owners can obtain up to $500,000 in coverage for damages to any single-family structure and up to $150,000 in coverage for damage to contents and any necessary increases in living expenses incurred when losses from flooding or windstorm make the residence unfit to live in. Nonresidential property owners can obtain up to $1,000,000 in coverage for damages to any single structure and up to $750,000 in coverage for damage to contents and for losses resulting from an interruption of business operations caused by damage to, or loss of, the property from flooding or windstorm; If at any time FEMA borrows funds from the Treasury to pay claims under the federal flood and wind program, until those funds are repaid the program may not sell any new policies or renew any existing policies. Over 20,000 communities across the United States and its territories participate in the NFIP by adopting and agreeing to enforce state and community floodplain management regulations to reduce future flood damage. In exchange, the NFIP makes federally backed flood insurance available to homeowners and other property owners in these communities. Homeowners with mortgages from federally regulated lenders on property in communities identified to be in special high-risk flood hazard areas are required to purchase flood insurance on their dwellings. Optional, lower-cost coverage is also available under the NFIP to protect homes in areas of low to moderate risk. Premium amounts vary according to the amount of coverage purchased and the location and characteristics of the property to be insured. When the NFIP was created, Congress mandated that it was to be implemented using “workable methods of pooling risks, minimizing costs, and distributing burdens equitably” among policyholders and taxpayers in general. The program aims to make reasonably priced coverage available to those who need it. The NFIP attempts to strike a balance between the scope of the coverage provided and the premium amounts required to provide that coverage and, to the extent possible, the program is designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than tax dollars. However, as we have reported before, the program, by design, is not actuarially sound because Congress authorized subsidized insurance rates for some policies to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build reserves to meet the long-term future expected flood losses. FEMA has statutory authority to borrow funds from the Treasury to keep the NFIP solvent. In 2005, Hurricanes Katrina, Rita, and Wilma had a far-reaching impact on NFIP’s financial solvency. Legislation incrementally increased FEMA’s borrowing authority from a total of $1.5 billion prior to Hurricane Katrina to $20.8 billion by March 2006, and as of December 2007, FEMA’s outstanding debt to the Treasury was $17.3 billion. As we have reported, it is unlikely that FEMA can repay a debt of this size and pay future claims in a program that generates premium income of about $2 billion per year. To implement a combined federal flood and wind insurance program, FEMA would need to complete a number of steps, similar to those undertaken to establish the NFIP, which would require the agency to address several challenges. 
First, FEMA would need to undertake studies in order to determine appropriate building codes that communities would be required to adopt in order to participate in the combined program. Second, FEMA would need to adapt existing processes under the NFIP flood program to accommodate the addition of wind coverage. For example, FEMA could leverage current processes under the WYO program and the Direct Service program to perform the administrative functions of selling and servicing the combined federal flood and wind insurance policy. Third, to set wind rates, FEMA would have to create a rate-setting structure, which would require contractor support. Fourth, promoting the combined federal flood and wind insurance program in communities would require that FEMA staff raise awareness of the combined program’s availability and coordinate enforcement of the new building codes. Finally, FEMA is facing a $17.3 billion deficit and attempting to address several management and oversight challenges associated with the NFIP, and balancing those demands with expanding staffing capacity to adjust existing administrative, operational, monitoring, and oversight processes and establish new ones to accommodate wind coverage could further strain FEMA’s ability to effectively manage the NFIP. H.R. 3121 would require FEMA to determine appropriate wind mitigation measures that communities would be required to adopt in order to participate in the combined flood and wind program. For several reasons, this could be a challenging process. First, FEMA would have to determine how to most effectively integrate a new federal wind mitigation standard with existing building codes for wind resistance. As we discussed in a previous report, as of January 2007, the majority of states had adopted some version of a model building code for commercial and residential structures. However, some local jurisdictions within states had not adopted a statewide model code and had modified the codes to reflect local hazards. Standards determined by FEMA to be appropriate for participation in the combined federal flood and wind program could conflict with those currently used by some states and local jurisdictions, and resolving any such differences could be challenging. Second, as it did with the NFIP, FEMA would have to address constitutional issues related to federal regulation of state and local code enforcement. Further, FEMA would need to establish regulations similar to those governing the flood program to allow for appeals by local jurisdictions, a process that could be time intensive. Third, as we have noted in a previous report, reaching agreement with communities on appropriate mitigation measures can be challenging, as communities often resist changes to building standards and zoning regulations because of the potential impact on economic development. For example, community goals such as housing and promoting economic development may be higher priorities for the community than formulating mitigation regulations that may include more rigorous developmental regulations and building codes. Fourth, according to FEMA officials, the agency would have to resolve potentially conflicting wind and flood standards. For example, they told us that flood building standards require some homes to be raised off the ground, but doing so can increase a building’s susceptibility to wind damage because the buildings are then at a higher elevation. 
While some of the NFIP’s current processes could be leveraged to implement a combined federal flood and wind program, they would need to be revised, an action that could pose further challenges for FEMA. According to FEMA officials, both the NFIP’s WYO and Direct Service programs could be used, with some revisions, to sell and underwrite the combined federal flood and wind insurance policy. The provision within H.R. 3121 that prevents FEMA from selling new policies or renewing existing policies if it borrows funds to pay claims would necessitate that the agency segregate funds collected from premiums under the new combined program and the flood program to ensure that it has sufficient funds to cover all future costs without borrowing, especially in catastrophic loss years. While the NFIP Community Rating System (CRS), a program that uses insurance premium discounts to incentivize flood damage mitigation activities by participating communities, could be adapted for combined federal flood and wind insurance coverage, it would not be required for the new program to begin operations because community participation in CRS is voluntary. As part of the WYO program, private property-casualty insurers are responsible for selling and servicing NFIP policies, including performing the claims adjustment activities to assess the cause and extent of damages. FEMA is responsible for managing the program, including establishing and updating NFIP regulations, analyzing data to determine flood insurance rates, and offering training to insurance agents and adjusters. In addition, FEMA and its program contractor are responsible for monitoring and overseeing the quality of the performance of the WYO insurance companies to ensure that NFIP is administered properly. These duties under the WYO program would be amplified with the addition of wind coverage and, according to FEMA officials, would require FEMA to expand its staffing capacity to include staff with wind peril insurance experience. In addition, FEMA would need to determine whether existing data systems would be adequate to manage an increased number of policies and track losses for the new program. FEMA could face several challenges in expanding the WYO program. First, program staff would need to determine how to manage and mitigate the potential conflict of interest for those companies in the WYO program that could be selling both their own wind coverage and the combined federal flood and wind coverage. Current WYO arrangements with the NFIP prevent WYO insurers from offering flood-only coverage of their own unless it supplements NFIP coverage limits or is part of a larger policy in which flooding is one of several perils covered. H.R. 3121, however, does not appear to prevent companies that might sell a combined federal flood and wind policy from also selling wind coverage, which may be part of a homeowners policy. Without this restriction, a conflict of interest could develop because insurers would have an incentive to sell the combined federal policy to their highest-risk customers and their own policies to lower-risk customers. FEMA officials agreed that this would be an inherent conflict and noted that it would be difficult to prevent this from occurring without precluding the WYO insurers from selling their wind policies.
Moreover, according to a WYO insurer with whom we spoke, attempting to eliminate the conflict by either restricting a WYO insurer from selling its own wind coverage or requiring it to sell both a flood-only policy and the combined policy could discourage participation in the WYO program. As noted in a previous report, private sector WYO program managers have said that while NFIP has many positive aspects, working with it is complex for policyholders, agents, and adjusters. According to another WYO insurer we spoke with, adding wind coverage could increase these complexities. FEMA officials told us that the agency could also sell and service the combined flood and wind insurance policies through its Direct Service program, which is designed for agents who do not have agreements or contracts with insurance companies that are part of the WYO program. According to FEMA officials, the NFIP’s Direct Service program currently writes about 3 percent of the more than 5.5 million NFIP policies sold. Further, as with the WYO program, FEMA may have to contend with an inherent conflict of interest and expand staffing capacity in the Direct Service program, including adding staff with wind peril insurance expertise, to administer, monitor, and oversee the sale of the new product. H.R. 3121 calls for FEMA to establish comprehensive criteria designed to encourage communities to participate in wind mitigation activities. As previously noted, the CRS program would be an important means of incentivizing wind mitigation activities in communities, but would not be necessary for the combined federal flood and wind insurance program to operate. According to FEMA, while the CRS process could be adapted for wind coverage, the agency would have to assess current practices, evaluate standards, and devise an appropriate rating system, a developmental process similar to the one that occurred for the NFIP. FEMA officials told us that it took approximately 5 years to develop the CRS program, during which time extensive evaluation, research, and concept testing occurred. They estimated that replicating a similar approach for wind hazard would require at least the same number of years if not more, given the complexities of current insurance industry experience with the wind peril and of evaluating current building code practices related to wind and other wind mitigation techniques. Establishing a new rate-setting structure for a combined federal flood and wind insurance program could pose another challenge for FEMA. According to several insurers and modeling consultants, wind modeling is the accepted method of determining wind-related premium rates, and FEMA does not have the in-house wind modeling and actuarial expertise needed to develop and interpret wind models and translate the model’s output into premium rates. They told us that modeling has several advantages in rate setting over methods that place greater emphasis on loss data from past catastrophic events, such as the method used by NFIP to determine flood insurance premium rates. For example, modeling uses wind speed maps and other data to account for the probability that properties in a certain geographic area might experience losses in the future, regardless of whether those properties have experienced losses in the past.
In addition, according to a modeling expert, wind modeling incorporates mitigation efforts at the property level because it can estimate the potential reductions in damage without waiting to see how the efforts actually affect losses during a storm or other event. While several modeling companies already provide wind modeling services to private sector insurers and state wind insurance programs, it is not clear how much such services would cost FEMA. And while FEMA officials told us that the agency would have to contract out for wind-modeling services because it lacks the necessary wind and actuarial expertise, the agency could benefit from at least some in-house expertise in these areas in order to oversee the contractors that would provide these services. FEMA would also need to determine to what extent it might need to use wind speed maps in its rate determination process. Flood maps are currently used in the NFIP to identify areas that are at risk of flooding and thus the areas where property owners would benefit from purchasing flood insurance. If FEMA determined that wind maps were necessary, it would then need to determine whether the agency could develop such maps on its own or whether contracting with wind-modeling experts would be required, and what the cost of these efforts might be. Implementing the combined program would require FEMA to promote participation among communities and coordinate enforcement, a task that could be challenging for two reasons. First, FEMA would need to manage community and state eligibility to participate in the program. The proposal calls for FEMA to work closely with and provide any necessary technical assistance to state, interstate, and local governmental agencies, to encourage the adoption of windstorm damage mitigation measures by local communities and ensure proper enforcement. While communities themselves are responsible for enforcing windstorm mitigation measures, FEMA officials told us they would have to coordinate with existing code groups to provide technical assistance, training, and guidance to local officials, and establish a wind mitigation code enforcement compliance program that would monitor, track, and verify community compliance with wind mitigation codes. According to an official at an organization representing flood hazard specialists, some communities are very good at ensuring compliance, while others are not. For example, in some larger communities, a city or county may have experts with vast experience in enforcing building codes and land use standards, but in other communities, a local clerk or city manager with little or no experience may be responsible for compliance. According to FEMA, the effectiveness of mitigation measures is entirely dependent on enforcement at the local level. Proper enforcement would require that resources be in place to pay for and train qualified inspectors and building department staff. Second, FEMA would need to generate public awareness of the availability of wind insurance through the NFIP. Efforts to adopt new mitigation activities and strategies have been constrained by the general public’s lack of awareness and understanding of the risk from natural hazards. To address this issue in NFIP, FEMA launched an integrated mass marketing campaign called FloodSmart to educate the public about the risks of flooding and to encourage the purchase of flood insurance.
As we noted in a previous report, FEMA officials stated that in a little more than 2 years after the FloodSmart contract began in October 2003, net policy growth was a little more than 7 percent and policy retention improved from 88 percent to 91 percent. Educating the public on a new combined federal flood and wind insurance program and promoting community participation could demand a similar level of effort by FEMA. Implementing a combined flood and wind insurance program and overseeing the requisite contractor-supported services could place additional strain on FEMA, which is already faced with NFIP management and oversight challenges and a $17.3 billion deficit that it is unlikely to be able to repay. In March 2006, we placed the NFIP on our high-risk list because of its fiscal and management challenges. In addition to the agency’s current debt owed to the Treasury, FEMA is challenged with providing effective oversight of contractors. For example, as previously reported, FEMA faces challenges in providing effective oversight of the insurance companies and thousands of insurance agents and claims adjusters that are primarily responsible for the day-to-day process of selling and servicing flood insurance policies through the WYO program. In FEMA’s claims adjustment oversight, the agency cannot be certain of the quality of NFIP claims adjustments that allocate damage to flooding in cases involving damage caused by a combination of wind and flooding. Expanding the WYO program to include combined flood and wind policies could increase the NFIP’s oversight responsibilities as well as make resolving existing management challenges more difficult. In addition, FEMA faces ongoing challenges in working with contractors and state and local partners—all with varying technical capabilities and resources—in its map modernization efforts, which are designed to produce accurate digital flood maps. Ensuring that map standards are consistently applied across communities once the maps are created will also be a challenge. To the extent that FEMA uses wind speed maps under the combined program, the agency could face challenges similar to those currently faced by the NFIP’s flood-mapping program. New management challenges created by implementing a combined federal flood and wind program could make addressing these existing challenges even more difficult. According to FEMA officials, implementing a new flood and wind program is a process that would likely take several years and would require a doubling of current staff levels. Determining appropriate wind mitigation measures, adapting existing WYO and Direct Service processes for wind coverage, establishing a new rate-setting process, promoting community participation, and overseeing the combined program would all require additional staff and contractor services with the appropriate wind expertise. While the total cost of adding staff and hiring contractors with wind expertise is not clear, FEMA’s 2007 budget for NFIP salaries and expenses was about $38.2 million. Setting premium rates that would adequately reflect all expected costs without borrowing from the Treasury would require FEMA to make a number of sophisticated determinations. To begin with, FEMA would need to determine what those future costs are likely to be, a process that can be particularly difficult with respect to catastrophic losses.
Once FEMA has determined the expected future costs of the program, it would need to determine premium rates adequate to cover those costs, a challenging process in itself for several reasons. First, the rate would need to be sufficient to pay claims in years with catastrophic losses without borrowing funds from the Treasury. This determination could be particularly difficult because it is unclear whether the program might be able to purchase reinsurance, and because attempting to build up a sufficient surplus to pay for catastrophic losses would require high premium rates compared to the size of expected claims and an unknown number of years without larger-than-average losses, something over which FEMA has no control. Second, rate setting would have to account for two factors: adverse selection, or the likelihood that the program would insure only the highest-risk properties, and potentially limited participation because of comparatively low coverage limits. Both of these factors would necessitate higher premium rates, which could make rate setting very difficult. Finally, although no distinction between flood and wind damage would be necessary for property owners to receive payment on claims, such a distinction would still be necessary for rate-setting purposes. The proposed flood and wind program would be required, by statute, to charge premium rates that were actuarially sound—that is, that were adequate to pay all future costs. As a result, FEMA would need to determine how much the program could be required to pay, including in years with catastrophic losses, and use this amount in setting rates, as private sector insurers do. H.R. 3121 does not specify how a federal flood and wind program would pay for catastrophic losses beyond charging an adequate premium rate. According to insurers and industry consultants we spoke with, making such determinations can be difficult and involve balancing the ability to pay extreme losses with the ability to sell policies at prices people will pay. For example, insurers could charge rates that would allow them to pay claims on the type of event they would expect to occur only very rarely, but the resulting rates could be prohibitively expensive. On the other hand, charging premium rates that would enable an insurer to pay losses on events of limited severity could allow it to sell policies at a lower price, but could also result in insufficient funds to pay losses if a larger loss were to occur. Insurers can come to different conclusions about the appropriate level of catastrophic losses on which to base their premium rates. For example, one state regulator said that some private sector insurers in his state used an event he believes has about a 0.4 percent chance of occurring in a given year, but that the state wind insurance program based its rates on events he believes have about a 1 percent chance of occurring. For comparison, one consultant we spoke with believed that an event of the severity of Hurricane Katrina had about a 7 percent chance of occurring in a given year. Determining the losses the program might be required to pay, particularly from catastrophic events, could be especially important for FEMA. This is because if an event occurs that generates losses beyond an amount the program is prepared to pay, the program would be forced to borrow funds to pay those losses, triggering a borrowing restriction that would force it to stop renewing or selling new policies, effectively ending the program.
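The return-period figures cited above can be put in perspective with a short calculation. The sketch below, a minimal illustration in Python, uses only the annual exceedance probabilities mentioned by the regulator and the consultant (0.4 percent, 1 percent, and roughly 7 percent); the 10- and 30-year horizons are illustrative assumptions rather than figures from this report.

```python
# Chance of experiencing at least one loss event of a given severity over a
# multi-year horizon, based on the annual exceedance probabilities cited above.
# The 10- and 30-year horizons are illustrative assumptions.

annual_probabilities = {
    "0.4% per year (private insurer benchmark)": 0.004,
    "1% per year (state wind program benchmark)": 0.01,
    "7% per year (roughly Katrina severity)": 0.07,
}

def prob_at_least_one(annual_p: float, years: int) -> float:
    """Probability of at least one qualifying event in `years` years, assuming independence."""
    return 1.0 - (1.0 - annual_p) ** years

for label, p in annual_probabilities.items():
    for horizon in (10, 30):
        chance = prob_at_least_one(p, horizon)
        print(f"{label}: {chance:.0%} chance of at least one such event in {horizon} years")
```

Even the rarer benchmark events have a meaningful chance of occurring over a multi-decade horizon, which is one reason the choice of loss level used to set rates, combined with the proposed borrowing restriction, would matter so much.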
On the other hand, premium rates high enough to pay losses resulting from the most severe catastrophic events might make the program prohibitively expensive for property owners. Determining expected losses for the first year of the program would be complicated by the fact that FEMA would not know what types of properties would be insured. Private sector insurers set their premium rates using models that take into account several variables, including the number of properties to be insured, the risks associated with the properties’ location, and the characteristics of the properties themselves. This information is used in the wind-modeling process to create a variety of scenarios that result in losses of differing severity that can then be used to create possible premium rates. Existing insurers have established portfolios of policies and can use data from these portfolios in the modeling process. A new combined federal flood and wind insurance program, according to wind-modeling companies we spoke with, would need to develop a hypothetical portfolio, making assumptions about how many policies it might sell and where, as well as the characteristics of the properties that might be insured. Such assumptions can be challenging because the number and type of properties insured would, in turn, be affected by the price of coverage. Once FEMA determines the severity of catastrophic losses a federal program would be required to pay, the agency would need to determine a premium rate that is adequate to pay such losses. This determination could be particularly difficult with regard to paying catastrophic losses—something that could occur in any year given the volatility of wind and flood losses—because of the borrowing restriction in H.R. 3121. Because it would be difficult, if not impossible, to repay any borrowed funds without the premium income from new or existing policies, this restriction, if invoked, could end the program. This would effectively require the program to charge premium rates sufficient to pay catastrophic losses without borrowing. Private sector insurers generally ensure their ability to pay catastrophic losses by purchasing reinsurance, and include the cost of this coverage in the premium rate they charge. However, reinsurance may not be an option for FEMA. Some reinsurance industry officials we spoke with said that the potential for the program to insure large numbers of exclusively high-risk properties could create a risk of high losses that could make reinsurers reluctant to offer coverage. Another option would be to charge a premium rate high enough to build up a surplus adequate to pay for catastrophic losses. However, such a rate would likely be high, and it would require an unknown number of years of operations with lower-than-average losses to build up a sufficient surplus, something over which FEMA has no control. For example, a loss that exceeds the program’s surplus could occur in the early years, or even the first year, of the program’s operations, potentially forcing the program to borrow funds to pay losses and effectively ending the program. In determining a premium rate for a federal flood and wind program that was adequate to pay all future costs, FEMA would also need to take into account the adverse selection—the tendency to insure primarily the highest risks—and limited participation the program would likely experience.
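The interaction between adverse selection and limited participation can be illustrated with a minimal numeric sketch. All of the policy counts, expected-loss figures, and the exit rule below are hypothetical assumptions chosen purely for illustration; they are not estimates from this report or from FEMA data.

```python
# Illustrative adverse-selection dynamic: break-even premiums rise as lower-risk
# policyholders exit the pool. All numbers are hypothetical assumptions.

low_risk = {"count": 8000, "expected_annual_loss": 500}    # dollars per policy
high_risk = {"count": 2000, "expected_annual_loss": 4000}

EXIT_THRESHOLD = 2.0  # low-risk owners drop coverage if premium exceeds 2x their expected loss

for year in range(1, 6):
    total_policies = low_risk["count"] + high_risk["count"]
    if total_policies == 0:
        break
    expected_losses = (low_risk["count"] * low_risk["expected_annual_loss"]
                       + high_risk["count"] * high_risk["expected_annual_loss"])
    premium = expected_losses / total_policies  # break-even premium, ignoring expenses and surplus
    print(f"Year {year}: {total_policies} policies, break-even premium ${premium:,.0f}")
    # Low-risk owners who find the pooled premium too expensive drop out the following year.
    if premium > EXIT_THRESHOLD * low_risk["expected_annual_loss"]:
        low_risk["count"] = int(low_risk["count"] * 0.5)
```

In this toy pool, the break-even premium climbs each year as lower-risk owners drop out and the remaining pool becomes riskier, which is the dynamic described in the paragraphs that follow.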
These factors can make rate setting difficult because they can both lead to increased premium rates, which can, in turn, lead to further adverse selection, limited participation, and the need for additional rate increases. For several reasons, a federal flood and wind program would probably insure mostly high-risk properties. First, a policy that combines flood and wind insurance would likely be of interest only to property owners who perceived themselves to be at significant risk of both flood and wind damage. Because consumers tend to underestimate their risk of catastrophic loss, those property owners who saw the need for a combined flood and wind policy would likely be those who knew they faced a high risk of loss. In addition, because the policy would include coverage for damage from flooding, those buying it would probably already have flood insurance, which is currently purchased almost exclusively in high-risk areas where lenders require it. As shown in figure 1, areas where there have been multiple floods as well as hurricanes and where consumers are most likely to see a need for both flood and wind coverage are primarily limited to the eastern and Gulf coasts. Second, a combined federal flood and wind insurance policy is likely to be of interest only in areas where state insurance regulators have allowed insurers to exclude coverage for wind damage from homeowners policies that they sell. According to several insurance industry officials we spoke with, in order to help protect consumers, state insurance regulators generally prohibit insurers from excluding wind damage from homeowners policies. According to insurers we spoke with, insurers can profitably write homeowners policies that include wind coverage in most areas. Only in the coastal areas that are at the highest risk of hurricane damage have insurers asked for and received permission from state regulators to sell homeowners policies that exclude wind coverage. Property owners who already have wind coverage through their homeowners policies—generally those living outside the highest-risk coastal areas—would have little interest in a combined federal flood and wind insurance policy. Here again, property owners in high-risk coastal areas would be the ones most interested in purchasing a federal policy. A federal flood and wind insurance program would find itself in the same situation as state wind insurance programs that generally sell wind coverage only in areas where insurers are allowed to exclude it from homeowners policies. According to officials from the state wind programs we spoke with, their programs generally insure only the highest-risk properties. For several reasons, participation in a federal flood and wind program would probably be limited. First, a federal flood and wind insurance policy would likely cost more than purchasing a combination of flood insurance through the NFIP and wind insurance through a state wind insurance program, potentially limiting participation in the program. With respect to coverage for damages from flooding, while an estimated 24 percent of NFIP policyholders receive subsidized premium rates—with average subsidies of up to 60 percent—H.R. 3121 would require the new program to charge rates adequate to cover all future costs, potentially precluding any subsidies.
As a result, the flood-related portion of a federal flood and wind policy would cost more than an NFIP flood policy for any property owners currently receiving subsidized NFIP flood rates. With respect to the wind portion of the coverage, a number of state wind insurance programs typically do not charge rates that are adequate to cover all costs, so a policy from a federal program that did charge adequate rates would likely cost more than a state wind program policy. Property owners who are receiving subsidized NFIP rates and relatively low state wind insurance rates are unlikely to be willing to move to a new program that would be more expensive. Second, a federal flood and wind policy would have lower coverage limits than the flood and wind coverage currently available in high-risk coastal areas, further limiting participation. Currently, property owners in coastal areas subject to both flood and wind damage can purchase flood insurance through the NFIP and, in some areas, wind insurance through a state wind insurance program. Table 1 compares the policy limits for a federal flood and wind policy, as proposed in H.R. 3121, with a combination of policy limits from state wind insurance program and NFIP policies. While the federal flood and wind policy would cover a maximum of $650,000 in damage for a residential property, a combination of NFIP and state wind program policies would provide, on average, around $1.7 million in coverage, or about 166 percent more coverage, depending on the state. For commercial properties, the federal flood and wind policy would offer up to $1.75 million in coverage, but combined NFIP and state wind program policies would offer, on average, almost $4 million, or about 126 percent more coverage. Table 1. Comparison of combined NFIP flood and state wind program policy limits with the H.R. 3121 flood and wind policy limits. Adverse selection and limited participation could, in turn, force FEMA to raise rates still higher for the proposed program, leading to escalating premiums. This possibility further complicates the rate-setting process. In general, having only a small pool of very high-risk insureds requires insurers to charge premium rates at levels above what could be charged if the risk were spread among a larger pool of insureds of varying risk levels. As we have discussed, high premium rates can, in turn, further reduce the number of property owners who are able and willing to pay for coverage and force insurers to raise rates yet higher. This cycle, referred to as an adverse selection spiral, can make it very difficult for insurers to find a premium rate that is adequate to cover losses. Finally, although H.R. 3121 stipulates that a distinction between flood and wind damage would not be required for a policyholder’s claim to be paid by a federal flood and wind program, a determination of the cause of damage would likely still be necessary for rate-setting purposes. According to several insurance industry officials we spoke with, separate determinations would be required because data on the losses associated with each type of damage are used to help determine future rates. For example, data on wind losses would be used to validate the losses predicted by wind models.
While the officials said that such determinations would not need to be as accurate as when the distinction between flood and wind damage would determine under which policy a claim was covered, they would still need to be made. As a result, FEMA would need to determine whether and how such a determination might be made by FEMA staff, or if it would need to establish another process for doing so. While a combined federal flood and wind program would entail costs, it could benefit some property owners and market participants. First, property owners could benefit from reduced delays in payments and assured coverage in high-risk areas. Second, taxpayers in some states could benefit to the extent that state wind insurance programs’ exposure to loss is reduced. At the same time, these benefits could be limited by a borrowing restriction that could terminate the program after a catastrophic event, and comparatively low coverage limits could leave some property owners underinsured. Third, private sector insurers could also benefit if high-risk properties moved to a federal program, reducing the companies’ risk of loss. But this shift would further limit private sector participation. Finally, while H.R. 3121 would require premium rates that were adequate to cover all future costs, actual losses can significantly exceed even the most carefully calculated loss estimates, as we learned from the 2005 hurricanes, potentially leaving the federal government with exposure to new and significant losses. Although a combined flood and wind program could provide benefits to some property owners, states, and insurers, it could also increase the federal government’s exposure to loss. While the actual exposure that a federal flood and wind program might create is unclear, the likelihood that the program would insure primarily high-risk properties could create a large exposure to loss. As of 2007, wind programs in eight coastal states—programs that insure primarily high-risk coastal properties—had a total loss exposure of nearly $600 billion. While it is unclear how much of this exposure would be assumed by the federal program, a risk management consulting firm developed another estimate of potential wind-related losses that took into account the federal program’s likely adverse selection. Assuming that the program experienced just a moderate amount of adverse selection and would write coverage for around 20 percent of the current market for wind coverage, the firm used wind modeling technology to estimate the potential wind-related losses. The estimates ranged from around $6.5 billion in losses for the type of catastrophe that has a 10 percent chance of occurring in a given year, to $11.4 billion for one with a 5 percent chance, to around $32.7 billion for the type with a 1 percent chance. The same firm that did the modeling for this estimate considered Hurricane Katrina to be the type of event that has a 6.6 percent chance of occurring in any year. For purposes of comparison, NFIP flood losses from Hurricane Katrina alone totaled around $16 billion, and according to the Insurance Services Office, losses paid by private sector insurers—most of which were wind-related—totaled around $41 billion. The potential exposure to the federal government, however, could be reduced by several factors. First, the program could encourage mitigation efforts that would reduce damage from wind. As noted earlier in this report, H.R.
3121 would require communities to adopt mitigation standards approved by the Director of FEMA and consistent with International Code Council building codes related to wind mitigation. In addition, H.R. 3121 would require the Director of FEMA to carry out studies and investigations to determine appropriate wind hazard prevention measures. Further, according to FEMA, the CRS structure could be applied to a federal flood and wind program, reducing premium rates for communities and property owners that implemented wind mitigation measures. Such measures could reduce losses due to wind damage and thus the federal government’s exposure to loss. Second, the federal government’s exposure is potentially limited to the amount FEMA is authorized to borrow from the Treasury, which was raised to $20.8 billion in March 2006. However, if losses were to exceed this limit, Congress would be faced with a choice between raising the amount FEMA could borrow, thereby increasing the government’s exposure, and failing to pay policyholders the full amounts specified in their policies. While H.R. 3121 would require a federal flood and wind program to charge premium rates that were adequate to pay all future losses in order not to create additional liability for the federal government, as we have seen, estimating future losses is difficult, and losses can exceed expectations. For example, losses from Hurricane Katrina and other hurricanes were beyond what NFIP could pay with the premiums it had collected. NFIP reported unexpended cash of approximately $1 billion following fiscal year 2004, but as of May 2007 the program had suffered almost $16 billion in losses from Hurricane Katrina. In addition, officials from several wind-modeling companies told us that the severity of Hurricane Katrina was well beyond their previous expectations, and rates that they had believed were actuarially sound turned out to be inadequate. As a result, they have had to revise their models accordingly. If losses for a combined flood and wind program did exceed the premiums collected by the program, FEMA could be forced to borrow from the Treasury to pay those losses. As of December 2007, FEMA still owed approximately $17.3 billion to the Treasury, an amount it is unlikely to be able to repay. In addition, the requirement in H.R. 3121 to stop renewing or selling new policies until such losses are repaid could actually increase the cost to the federal government. This is because the program’s source of revenue, which it could use to pay back the borrowed funds, would be limited to premiums paid by those whose policies had not yet come up for renewal. And once those policies expired, the program would receive no premium income. It is not clear how any debt remaining outstanding at that time would be paid, and the costs could fall to the federal government and, ultimately, taxpayers. We requested comments on a draft of this report from FEMA and NAIC. FEMA provided written comments that are reprinted in appendix II. NAIC orally commented that it generally agreed with our report findings. FEMA also generally agreed with our findings and emphasized the challenges it would face in addressing several key issues. Finally, FEMA provided technical comments, which we incorporated as appropriate. In their comments, FEMA officials stressed their concerns over the effect that the program’s proposed borrowing restriction would have on their ability to set adequate premium rates.
Specifically, they said that it would be nearly impossible to set premium rates high enough to eliminate the possibility of borrowing to pay catastrophic losses; that purchasing enough reinsurance to pay all catastrophic losses without borrowing, even if it were possible, would require premium rates so high as to be unaffordable; and that the high variability of combined flood and wind coverage means that there is always the possibility of catastrophic losses in any given year regardless of how premiums are designed. In addition, FEMA officials said that the termination of the program due to the borrowing restriction would create other difficulties. They said that not only could it leave property owners without coverage, but it could also prevent the program from repaying any borrowed funds. As stated in our report, the proposed borrowing restrictions would make rate setting a difficult process and could result in high premium rates. In addition, we stated that termination of the program due to the borrowing restriction could potentially leave some property owners uninsured following a catastrophic event and limit FEMA’s ability to repay any borrowed funds. Finally, we acknowledged that the high variability of flood and wind losses would make setting rates adequate to pay losses without borrowing even more challenging, and we clarified language in the report to note that catastrophic losses could occur in any year regardless of how premiums are designed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Ranking Member of the Committee on Financial Services, House of Representatives; the Chairman and Ranking Member of the Committee on Banking, Housing, and Urban Affairs, U.S. Senate; the Chairman and Ranking Member of the Committee on Homeland Security and Governmental Affairs, U.S. Senate; the Chairman and Ranking Member of the Committee on Homeland Security, House of Representatives; the Secretary of Homeland Security; the Executive Vice-President of NAIC; and other interested committees and parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to examine the proposed federal flood and wind insurance program put forth in H.R. 3121, the Flood Insurance Reform and Modernization Act of 2007, in terms of (1) the program’s potential effects on policyholders, insurance market participants, and the federal government; (2) what would be required for the Federal Emergency Management Agency (FEMA) to determine and charge actuarially sound premium rates; and (3) the steps FEMA would have to take to implement the program.
To evaluate the program’s potential effects on policyholders, insurance market participants, and the federal government, we interviewed officials from FEMA, the National Flood Insurance Program (NFIP), state insurance regulators, the National Association of Insurance Commissioners (NAIC), state wind insurance program operators, primary insurers, reinsurers, insurance and reinsurance associations, insurance agent associations, risk-modeling organizations, actuarial consultants, the American Academy of Actuaries (AAA), the Association of State Flood Plain Managers (ASFPM), the National Flood Determination Association (NFDA), and others. We also obtained information on state-sponsored wind insurance programs in three coastal states and one inland state, and discussed them with program officials as well as the insurance regulators within those states. We compared selected wind insurance program policies in force and exposure data from 2004 to the most recent available in eight states: Alabama, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, and Texas. We also collected and analyzed state wind program data from these eight states and provisions of H.R. 3121 to compare the combination of state wind program and H.R. 3121’s flood insurance policy limits with H.R. 3121’s flood and wind policy limits. To develop our natural hazard risk maps, we used data from FEMA and the National Oceanic and Atmospheric Administration (NOAA). We used historical hazard data from 1980 to 2005 as a representation of current hazard risk for floods, hurricanes, and tornadoes. Finally, to evaluate the federal government’s exposure, we reviewed an estimate of potential wind-related losses for a federal program from an actuarial consulting firm. To examine the challenges FEMA would likely face in determining and charging a premium rate that would cover all expected costs, we spoke with FEMA/NFIP officials, state insurance regulators, NAIC, state wind insurance program operators, primary insurers, reinsurers, insurance and reinsurance associations, insurance agent associations, risk-modeling organizations, actuarial consultants, AAA, ASFPM, NFDA, and others. We also reviewed our previous reports and testimonies, Congressional Budget Office (CBO) reviews, and academic and other studies of coastal wind insurance issues. In addition, we reviewed information provided by professional associations, such as the American Insurance Association, and congressional testimony by knowledgeable individuals from the insurance industry, ASFPM, and NFDA. To examine the challenges FEMA would face in developing and implementing a federal flood and wind insurance program, we discussed the issue with FEMA/NFIP officials, state insurance regulators, NAIC, state wind insurance program operators, primary insurers, reinsurers, insurance and reinsurance associations, insurance agent associations, risk-modeling organizations, actuarial consultants, AAA, ASFPM, NFDA, and others. We also reviewed our previous reports on FEMA’s management and oversight of NFIP. In addition, we reviewed congressional testimony by knowledgeable individuals from the insurance industry, ASFPM, and NFDA. We conducted our work in Washington, D.C., and via telephone from October 2006 to April 2007 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Orice M. Williams, (202) 512-8678 or williamso@gao.gov. In addition to the person named above, Lawrence D. Cluff, Assistant Director; Farah B. Angersola; Joseph A. Applebaum; Tania L. Calhoun; Emily R. Chalmers; William R. Chatlos; Thomas J. McCool; Marc W. Molino; and Patrick A. Ward made key contributions to this report.
Disputes between policyholders and insurers after the 2005 hurricanes highlight the challenges of determining the cause and extent of damages when properties are subject to both high winds and flooding. Additionally, insurers want to reduce their exposure in high-risk areas, and state wind insurance programs have grown significantly. H.R. 3121, the Flood Insurance Reform and Modernization Act of 2007, would create a combined federal insurance program with coverage for both wind and flood damage. GAO was asked to evaluate this potential program in terms of (1) what would be required to implement it; (2) the steps the Federal Emergency Management Agency (FEMA) would need to take to determine premium rates that reflect all future costs; and (3) how it could affect policyholders, insurance market participants, and the federal government. To address these questions, GAO analyzed state and federal programs, examined studies of coastal wind insurance issues, and interviewed federal and state regulatory officials as well as industry participants and analysts. FEMA and the National Association of Insurance Commissioners generally agreed with GAO's report findings. FEMA emphasized the challenges it would face in addressing several key issues. FEMA also provided technical comments, which were incorporated as appropriate. To implement a combined federal flood and wind insurance program, FEMA would need to complete certain challenging steps. First, FEMA would need to determine wind hazard prevention standards that communities would have to adopt in order to receive coverage. Second, FEMA would need to adapt existing programs to accommodate wind coverage--for example, the Write Your Own program. Third, FEMA would need to create a new rate-setting process, as the process for setting flood insurance rates is different from what is needed for wind coverage. Fourth, promoting the new program in communities would require that FEMA staff raise awareness of the combined program's availability and coordinate enforcement of the new building codes. Finally, FEMA would need to put staff and procedures in place to administer and oversee the new program while it faces current management and oversight challenges with the National Flood Insurance Program (NFIP). Setting premium rates adequate to cover all the expected costs of flood and wind damage would require FEMA to make sophisticated determinations. For example, FEMA would need to determine how the program would pay claims in years with catastrophic losses without borrowing from the Department of the Treasury. H.R. 3121 would require the program to stop renewing or selling new policies if it needed to borrow funds, effectively terminating the program. It is also unclear whether the program could obtain reinsurance to cover such losses, and attempting to fund losses by building up a surplus would potentially require high premium rates and an unknown number of years without large losses, something over which FEMA has no control. Further, FEMA would need to account for the likelihood that participation would be limited and only the highest-risk properties would be insured. These factors would further increase premium rates and make it difficult to set rates adequate to cover future costs. A federal flood and wind insurance program could benefit some policyholders and market participants but would also involve trade-offs. 
For example, not requiring adjusters to distinguish between flood and wind damage could reduce both delays in reimbursing participants and the potential for litigation. However, borrowing restrictions could also leave property owners without coverage after a catastrophic event. In addition, the proposed coverage limits are relatively low compared with the coverage that is currently available, potentially leaving some properties underinsured. The program could also reduce the exposure of some insurers by insuring high-risk properties that currently have private sector coverage. However, an unknown portion of the exposure currently held by state wind programs--nearly $600 billion in 2007--could be transferred to the federal government. While H.R. 3121 would require premium rates to be adequate to cover any exposure and restrict borrowing by the program, the potential exists for losses to greatly exceed expectations, as happened with Hurricane Katrina in 2005. This could increase FEMA's total debt, which as of December 2007 was about $17.3 billion.
You are an expert at summarizing long articles. Proceed to summarize the following text: GAO has issued several reports on the establishment of AFRICOM and its components. In 2008, we testified that DOD had made progress in transferring activities, staffing the command, and establishing an interim headquarters for AFRICOM but had not yet fully estimated the additional costs of establishing and operating the command. We also reported in 2008 that DOD had not reached an agreement with the Department of State (State) and potential host nations on the structure and location of the command’s presence in Africa, and that such uncertainty hindered DOD’s ability to estimate future funding requirements and raised questions about whether DOD’s concept for developing enduring relationships on the continent could be achieved. In 2009 we reported that the total future cost of establishing AFRICOM would be significant but remained unclear because decisions on the locations of AFRICOM’s permanent headquarters and its supporting offices in Africa had not been made. We also stated that it would be difficult to assess the merits of infrastructure investments in Germany for AFRICOM’s interim headquarters without knowing how long AFRICOM would use these facilities or how they would be used after a permanent location was established. To determine the long-term fiscal investment for AFRICOM’s infrastructure, we recommended that the Secretary of Defense, in consultation with the Secretary of State, as appropriate, conduct an assessment of possible locations for AFRICOM’s permanent headquarters and any supporting offices in Africa that would be based on transparent criteria, methodology, and assumptions; include the full cost and time-frames to construct and support proposed locations; evaluate how each location would contribute to AFRICOM’s mission consistent with the criteria of the study; and consider geopolitical and operational risks and barriers in implementing each alternative. We further recommended that DOD limit expenditures on temporary AFRICOM infrastructure until decisions were made on the long-term locations for the command. DOD partially agreed with the recommendations in our 2009 report, stating that in some cases, actions were already underway that would address the issues identified in our report. In 2007, the President directed the Secretary of Defense to establish a new geographic combatant command, consolidating the responsibility for DOD activities in Africa that had been shared by U.S. Central Command, U.S. Pacific Command, and U.S. European Command. AFRICOM was initially established as a subunified command within the European Command and was thus purposely staffed by European Command personnel. Because of this link to the European Command, DOD located AFRICOM’s headquarters at Kelley Barracks in Stuttgart, Germany, where the European Command headquarters was located, with the intent that this location would be temporary until a permanent location was selected. In 2008, AFRICOM became fully operational as a separate, independent geographic command. Since that time DOD has considered several courses of action for the permanent placement of the headquarters. Initially DOD’s goal was to locate AFRICOM headquarters in Africa, but that goal was later abandoned, in part because of what DOD described as significant projected costs and sensitivities on the part of African countries to having such a presence on the continent. 
Consequently, in 2008 DOD conducted an analysis of other locations in Europe and the United States, using cost and operational factors as criteria against which to evaluate the permanent placement of AFRICOM headquarters. Although this 2008 analysis contained no recommendation about where AFRICOM’s headquarters should be permanently located, it concluded that several locations in Europe and the United States would be operationally feasible as well as less expensive than Stuttgart. Finally, in January 2013, the Secretary of Defense decided to keep AFRICOM’s headquarters in Stuttgart, Germany. This decision was made following the completion of an analysis that the House Armed Services Committee directed in 2011 and reiterated in 2012, and that was conducted by DOD’s Office of Cost Assessment and Program Evaluation (CAPE). The study, which presented the costs and benefits of maintaining AFRICOM’s headquarters in Stuttgart and of relocating it to the United States, stated that the AFRICOM commander had identified certain operational concerns as critical and that even though the operational risks could be mitigated, it was the AFRICOM commander’s professional judgment that the command would be less effective in the United States. In announcing the decision to keep AFRICOM’s headquarters in Stuttgart, the Secretary of Defense noted that the commander had judged that the headquarters would be more operationally effective from its current location, given shared resources with the U.S. European Command. The initial plan for AFRICOM was to have a central headquarters located on the African continent that would be complemented by several regional offices that would serve as hubs throughout AFRICOM’s area of responsibility (see figure 1). According to DOD officials, having a command presence in Africa would provide a better understanding of the regional environment and African needs; help build relationships with African partners, regional economic communities, and associated standby forces; and add a regional dimension to U.S. security assistance. However, after conducting extensive travel throughout Africa to identify appropriate locations and meet with key officials in prospective nations, DOD concluded that it was not feasible to locate AFRICOM’s headquarters in Africa, for several reasons. First, State officials who were involved in DOD’s early planning teams for AFRICOM voiced concerns over the command’s headquarters location and the means by which the AFRICOM commander and the Department of State would exercise their respective authorities. Specifically, DOD and State officials said that State was not comfortable with DOD’s concept of regional offices because those offices would not be operating under the Ambassador’s Chief of Mission authority. Second, African nations expressed concerns about the United States exerting greater influence on the continent, as well as the potential increase in U.S. military troops in the region. Third, since many of the African countries that were being considered for headquarters and regional office locations did not have existing infrastructure or the resources to support them, DOD officials concluded that locating AFRICOM headquarters in Africa would require extensive investments and military construction in order to provide appropriate levels of force protection and quality of life for assigned personnel. Officials were also concerned that if the headquarters were located in Africa, assigned personnel would not be able to have dependents accompany them because of limited resources and quality-of-life issues.
In 2008, the Office of the Secretary of Defense’s Office of Program Analysis and Evaluation conducted an analysis that considered other locations in Europe as well as in the United States for the permanent location of AFRICOM headquarters. It compared economic and operational factors associated with each of the locations and concluded that all of the locations considered were operationally feasible. It also concluded that relocating the headquarters to the United States would result in significant savings for DOD. However, DOD officials decided to defer a decision on the permanent location for AFRICOM headquarters until 2012 in order to provide the combatant command with sufficient time to stabilize. In 2011, the Office of the Under Secretary of Defense for Policy and the Joint Staff conducted a study that considered alternatives to the current geographic combatant command structure that could enable the department to realize a goal of $900 million in cost reductions between fiscal years 2014 and 2017. As part of DOD’s overall effort to reduce recurring overhead costs associated with maintaining multiple combatant commands, the study considered merging AFRICOM with either U.S. European Command (also located in Stuttgart, Germany) or U.S. Southern Command (located in Miami, Florida). The study concluded that these two options were neither “strategically prudent” nor “fiscally advantageous,” stating that combining combatant commands would likely result in a diluted effort on key mission sets, and that the costs incurred by creating a single merged headquarters would offset the available cost reductions. The study additionally found that altering the then-existing geographic combatant command structure would result in cost reductions well below the targeted $900 million. Subsequently, DOD determined that it would need to identify other ways to realize its goal of finding savings from combatant commands, and the department changed the time frame to fiscal years 2014 through 2018. According to Joint Staff officials, DOD would seek to accomplish this goal by reducing funding in the President’s budget request for fiscal year 2014 across all the geographic and functional combatant commands by approximately $881 million for fiscal years 2014 through 2018. These officials stated that, to realize these savings, the department would reduce the number of civilian positions at the combatant commands and Joint Staff by approximately 400 through fiscal year 2018, but they provided few specifics. See figure 2 for a timeline of the courses of action DOD considered. In January 2013, the Secretary of Defense decided to keep AFRICOM’s headquarters in Stuttgart, Germany. This decision was made following the completion of an analysis directed by the House Armed Services Committee in 2011 and conducted by the CAPE office. The purpose of the CAPE study was to present the strategic and operational impacts, as well as the costs and benefits, associated with moving AFRICOM headquarters from its current location to the United States. DOD considered two options for the basing of AFRICOM headquarters: (1) maintain AFRICOM’s current location in Stuttgart, Germany, or (2) relocate AFRICOM headquarters to the United States. However, the CAPE study also included a mitigation plan to address strategic and operational concerns identified by leadership as factors to consider in the event that AFRICOM were relocated to the United States.
The main findings of the DOD study were as follows: The annual recurring cost of maintaining a U.S.-based headquarters would be $60 million to $70 million less than the cost of operating the headquarters in Stuttgart. The break-even point to recover one-time relocation costs to the United States would be reached between 2 and 6 years after relocation, depending on the costs to establish facilities in the United States. Relocating AFRICOM to the continental United States could create up to 4,300 additional jobs, with an annual impact on the local economy ranging from $350 million to $450 million. The study stated that the AFRICOM commander had identified access to the area of responsibility and to the service component commands as critical operational concerns. The study also presented an option showing how operational concerns could be mitigated by basing some personnel forward in the region. However, it stated that the commander had judged that the command would be less effective if the headquarters were placed in the United States. In January 2013, Secretary of Defense Leon Panetta wrote to congressional leaders notifying them of his decision to retain AFRICOM in Stuttgart. In the letter, the Secretary cited the judgment of the AFRICOM commander about operational effectiveness as a rationale for retaining the command in its current location. DOD’s decision to keep AFRICOM headquarters in Stuttgart was made following the issuance of CAPE’s 2012 study, although the extent to which DOD officials considered the study when making the decision is unclear. The decision, however, was not supported by a well-documented economic analysis that balances the operational and cost benefits for the options open to DOD. Specifically, the CAPE study does not conform with key principles GAO has derived from a variety of cost estimating, economic analysis, and budgeting guidance documents, in that (1) it is not well-documented, and (2) it does not fully explain why the operational benefits of keeping the headquarters in Stuttgart outweigh the benefit of potentially saving millions of dollars per year and bringing thousands of jobs to the United States. According to key principles GAO has derived from cost estimating, economic analysis, and budgeting guidance, a high-quality and reliable cost estimate or economic analysis is, among other things, comprehensive and well-documented. Additionally, DOD Instruction 7041.3, Economic Analysis for Decisionmaking, which CAPE officials acknowledged using to inform their analysis, states that an economic analysis is a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. The instruction further states that the results of the economic analysis, including all calculations and sources of data, must be documented down to the most basic inputs to provide an auditable and stand-alone document. The instruction also states that the costs and benefits associated with each alternative under consideration should be quantified whenever possible. When this is not possible, the analyst should still attempt to document significant qualitative costs and benefits and, at a minimum, discuss these costs and benefits in narrative format. 
CAPE officials agreed that DOD Instruction 7041.3 provides reasonable principles to apply in conducting a cost analysis, but officials stated that, as the independent analytic organization for the department, CAPE reserves the right to conduct analysis as it deems appropriate to inform specific decisions. In April 2013, after the decision had been made to maintain AFRICOM headquarters in Stuttgart, Secretary of Defense Chuck Hagel called on DOD to challenge all past assumptions in order to seek cost savings and efficiencies in “a time of unprecedented shifts in the world order, new global challenges, and deep global fiscal uncertainty,” to explore the full range of options for implementing U.S. national security strategy, and to “put everything on the table.” In particular, the Secretary stated that the size and shape of the military forces should constantly be reassessed. He stated that this reassessment should include determining the most appropriate balance between forward-stationed, rotationally deployed, and home-based forces. CAPE’s 2012 report describes strategic and operational factors that were considered when determining whether to place AFRICOM headquarters in the United States or keep it in its present location, and it includes estimates of annual recurring and one-time costs associated with each option. However, the analysis does not include enough narrative explanation to allow an independent third party to fully evaluate its methodology. Further, in our follow-up discussions, CAPE officials could not provide us with sufficient documentation for us to determine how they had developed their list of strategic and operational benefits or calculated cost savings and other economic benefits. CAPE officials told us that they did not have documentation to show how raw source data had been analyzed and compiled for the report. The CAPE report, entitled “U.S. Africa Command Basing Alternatives,” dated October 2012, consists of 28 pages of briefing slides. It includes a discussion of the study’s assumptions and methodology, along with the one-time and recurring costs of each option. The report presents a table summarizing the strategic and operational factors that were considered when determining whether to retain AFRICOM’s headquarters in Stuttgart or move it to the United States. The table indicates that the most critical factors for a combatant command headquarters are for it to have access to its area of responsibility, partners, and organizations, as well as to have access to service components and forces. Working groups of DOD officials had compiled a list of factors considered important for a combatant command and had selected the factors they considered “critical.” The list included access to the Pentagon, interagency partners, analytic intelligence capabilities, and European partners, including the North Atlantic Treaty Organization (NATO); ability to recruit and retain civilian personnel, embed personnel from other agencies, and leverage U.S.-based non-governmental organizations; and ability to operate independently without the need for agreement from a host country. However, the CAPE report contains limited explanation of how these factors were developed or why access to Africa and proximity to its service component commands were judged to be most critical. 
In follow-up discussions, CAPE officials told us that when they began their study they formed working groups to compile an authoritative list of strategic and operational factors critical to the operation of a combatant command headquarters, and that the groups independently developed similar factors, thereby verifying the comprehensiveness of the list and its relevance. However, CAPE officials provided no documentation of the meetings of these groups, the sources used to develop the factors, or the process used to arrive at a consensus in ranking the factors in terms of their criticality. According to CAPE officials, the reason they did not develop such documentation is that they viewed the study to be a straightforward analysis intended to be easily digestible for its policy-maker audience. CAPE officials told us that if they had anticipated an outside review of the study and its analysis, they would have documented the study differently. We therefore could not evaluate the methodology used in developing or ranking the operational and strategic factors presented in the CAPE study. Such an explanation is important, however, since operational and strategic factors were judged to outweigh cost savings and other economic benefits. Also, while proximity to Africa and to service component commands were ranked as the most important criteria for determining where to place the headquarters, some of the service components that were created to support the establishment of AFRICOM were originally located in Europe so that they would be close to the command headquarters. For similar reasons, we were not able to determine the comprehensiveness, accuracy, or credibility of CAPE’s cost estimates. The report itself does not provide sufficient explanation of how the costs were calculated or the effect of the various assumptions on the estimated costs for us to assess the estimates. Specifically, the report does not provide the sources of the cost estimates or the methodology used in calculating them. In follow-up discussions, CAPE officials explained that support for their calculations included e-mails and phone calls. Finally, the study presented estimates of the economic benefits that could accrue to a local community if the command were relocated to the continental United States, but it is unclear how these estimates were factored into the Secretary of Defense’s decision. In discussing the costs of the alternatives, the CAPE study presents a summary of one-time costs, including construction and the transfer of personnel and materiel. The study states that relocating AFRICOM to the continental United States may create up to 4,300 jobs (in addition to those of AFRICOM personnel), with a $350 million to $450 million a year impact on the local economy. However, the study does not explain how these possible savings were calculated, and CAPE officials could not explain how this analysis had been factored into the Secretary of Defense’s decision. CAPE’s analysis estimated that the annual cost of providing AFRICOM personnel with overseas housing and cost-of-living pay was $81 million per year, as compared with the $19 million to $25 million these would cost if the personnel were located in the United States. These costs associated with stationing military and civilian personnel overseas comprise the bulk of the savings from CAPE’s analysis. 
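The recurring-savings comparison above can be restated as simple arithmetic. The $81 million overseas allowance figure and the $19 million to $25 million U.S. range come from CAPE's analysis as described in this report; the subtraction below simply restates those figures as an annual savings range.

```python
# Rough restatement of the recurring-savings comparison in CAPE's analysis:
# overseas housing and cost-of-living pay of about $81 million per year versus
# $19 million to $25 million for the same personnel based in the United States.

overseas_allowances = 81e6               # annual overseas housing/COLA cost (dollars)
us_cost_low, us_cost_high = 19e6, 25e6   # comparable annual cost in the United States

savings_low = overseas_allowances - us_cost_high   # 56e6
savings_high = overseas_allowances - us_cost_low   # 62e6

print(f"Estimated annual allowance savings: "
      f"${savings_low/1e6:.0f}M to ${savings_high/1e6:.0f}M")
# Roughly $56M to $62M per year, which is most of the $60M to $70M
# recurring savings range reported in the study.
```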
Although CAPE officials did not provide us with documentation that would allow us to assess the accuracy and completeness of their cost estimates, the estimates are comparable with those developed in OSD’s 2008 analysis. Moreover, our analysis confirmed that savings would be likely for both military and civilian personnel if the headquarters were located in the United States. For example, our analysis indicates that, conservatively, DOD could save from $5 million to $15 million per year overall on reduced housing allowances for military personnel, depending on where in the United States they were located. In addition, an AFRICOM document states that the command spent more than $30 million in fiscal year 2011 on overseas housing benefits for civilian personnel, which they would not receive if they were stationed in the United States. In its 2012 study, DOD tasked CAPE with analyzing two options—keeping AFRICOM’s headquarters in Stuttgart or moving it to a generic location in one of the four U.S. time zones. CAPE analysts also considered establishing a forward operating headquarters so as to allay concerns about a diminished forward presence if AFRICOM headquarters were located in the United States. In CAPE’s scenario, the forward headquarters would be staffed with about 25 personnel but would be rapidly expandable. It would also place an additional 20 personnel in existing component command headquarters. CAPE officials estimate that the annual recurring costs for the forward-deployed element would be $13 million, with a one-time cost of $8 million. CAPE added these estimates to its overall estimate of how much it would cost to move AFRICOM headquarters to the United States. In CAPE’s summary of its findings, however, there is no discussion of how this factored into the commander’s conclusion when he stated his preference, or of how the CAPE study had factored into the Secretary of Defense’s final decision. Operating from a U.S. headquarters with forward locations is the way in which the U.S. Central Command and U.S. Southern Command operate from their respective headquarters in Tampa, Florida, and Miami, Florida. The Central Command, for example, has a forward operating location in Qatar, and the Southern Command has forward locations in Honduras and El Salvador. AFRICOM already has a command element at a forward location—Combined Joint Task Force - Horn of Africa. According to Task Force officials, there are about 1,800 personnel temporarily assigned to this site at Camp Lemonnier, Djibouti. In 2012, the Navy submitted a master plan to Congress listing $1.4 billion in planned improvements to that site. When we asked AFRICOM staff about the specific operational benefits of having its headquarters located in Stuttgart, they cited the following: (1) it takes less time to travel to Africa from Stuttgart than it would from the United States; (2) it is easier to interact with partners in Africa from Stuttgart because they are in the same or similar time zones; and (3) it is easier to interact with AFRICOM’s service components because they all are in Europe, and because the U.S. European Command headquarters is also in Stuttgart. An AFRICOM briefing, however, indicated that the strategic risk of relocating the headquarters to the United States would be “minimal,” and also stated that establishing a forward headquarters could mitigate strategic and operational risks. 
CAPE officials also stated that maintaining AFRICOM’s headquarters in Stuttgart makes it easier for AFRICOM to share resources at the service component level with the U.S. European Command, and that AFRICOM’s sharing service components with the European Command makes it unique among the combatant commands. During our site visits, however, European Command officials told us that the two commands do not share personnel, even though two of the components are dual-hatted. In its analysis, CAPE calculated the likely increase in hours that would be spent in traveling from the headquarters location to Africa if the headquarters were relocated to the United States. CAPE also estimated that if AFRICOM headquarters were relocated to the United States, the number of trips to Africa would likely remain the same. We believe that the number of trips to the United States would decrease. However, CAPE did not analyze travel patterns by individual AFRICOM staff. Our interview with AFRICOM officials and our review of travel patterns of AFRICOM staff indicate that being closer to Africa may offer few benefits for many personnel. For example, according to AFRICOM officials, 70 percent of AFRICOM staff travel infrequently. As a result, these staff could be relocated in the United States without negative effects. This is because the AFRICOM staff includes many support personnel–-accountants, personnel specialists, information technology experts, and planners, among other staff—who do their jobs primarily at the headquarters. (Appendix 1 shows a detailed breakdown of AFRICOM staff by mission area.) In addition, our independent analysis found that about 60 percent of AFRICOM headquarters staff’s travel in fiscal years 2010 and 2011 was to locations in the United States or within Europe. In fiscal year 2011, for example, AFRICOM spent $4.8 million on travel to the United States and $3.9 million on travel to other locations in Europe, while it spent about $5.2 million on travel to Africa (see figure 3). AFRICOM officials told us that travel to other parts of Europe includes trips to Berlin to obtain visas and passports, as well as to planning meetings with its components and other partners. If AFRICOM headquarters were to be relocated in the United States, the costs associated with travel to U.S. locations would likely be reduced. While some costs for official travel throughout Europe could increase, the travel that involves administrative tasks such as obtaining visas would be eliminated. In fiscal year 2011, this travel consumed almost one-third of all AFRICOM travel expenditures. Moreover, the view that AFRICOM could perform its mission from the United States is supportable, in part, because other combatant commands have operated successfully with a U.S.-based headquarters. During our review, we met with U.S Central Command and U.S. Southern Command officials to understand the extent to which their headquarters location in the United States affects them operationally. Officials expressed various opinions regarding the benefits of forward stationing personnel, and added that they are able to address time-zone and travel challenges. Central Command officials also explained that they manage partner relationships (including with NATO partners), overcome time-zone challenges, and travel to remote locations in their area of responsibility from their headquarters location in Tampa, Florida. 
They also stated that although they can quickly relocate personnel to a forward location in Qatar when needed, most of the headquarters staff does not need to be physically located in their area of responsibility in order to carry out their functions. A U.S. Southern Command official told us that they use video teleconferences with the components when they need to communicate with them. He also told us that the command has a forward presence in Honduras and in El Salvador. Neither the CAPE study nor the letter accompanying it when it was transmitted to Congress in January 2013 provides a complete explanation of why DOD decided that the operational benefits associated with remaining in Stuttgart outweigh the associated costs. Past studies conducted or commissioned by DOD, however, suggest that a more thorough approach to analyzing costs and benefits is possible. For example, unlike the 2012 analysis, DOD’s 2008 analysis of potential AFRICOM locations ranked each location according to how it fared against cost and operational factors. While the analysis made no recommendation and stated that Germany was superior to all of the considered U.S. locations based on factors other than cost, it concluded that any of the examined locations would be an operationally feasible choice, and that U.S. locations were routinely and significantly cheaper to maintain than overseas bases. Moreover, a 1994 study was initiated by the U.S. Southern Command and validated by a committee appointed by the Deputy Secretary of Defense to review and refine the analysis. The committee included the Assistant Secretary of Defense for Strategy and Requirements, the Principal Deputy Comptroller, and the Director for Strategic Plans and Policy, Joint Staff. The Committee’s final report quantified and prioritized operational benefits to determine where in the United States to place the U.S. Southern Command headquarters when it was required to move from Panama. Although this study did not consider overseas locations and assumed that remaining in Panama was not an option, it nevertheless stands as an example of a more transparent approach to weighing costs and operational concerns. This study examined 126 sites in the United States and then narrowed the possibilities based on criteria that addressed the mission and quality of life for assigned personnel. The names of the locations under consideration were “masked” to ensure that the criteria were applied objectively. As a result, six locations were chosen as most desirable: Tampa, Atlanta, New Orleans, Miami, Puerto Rico, and Washington, D.C. Visits were made to each of the locations and the final tallying of scores, including consideration of costs, showed that Miami was the preferred choice. The committee expanded the analysis through additional evaluation of Southern Command’s mission requirements and quality of life issues. Once its analysis was complete, the committee briefed the Deputy Secretary of Defense on its findings and conclusions based on three criteria: mission effectiveness, quality of life, and cost. In summary, the committee stated that if mission effectiveness was the most important of the three criteria, then Miami was clearly the superior location. If quality of life was the most important, then Washington was the leading candidate. If cost was the most important consideration, then New Orleans was the leading candidate. 
The committee’s recommendation was for the Secretary of Defense and the Deputy Secretary of Defense to select the final Southern Command relocation site from among those three candidate cities. Finally, a 2013 RAND study conducted in response to a congressional requirement for DOD to commission an independent assessment of the overseas basing presence of U.S. forces provides several examples of principles that can be used to determine where to geographically place personnel so that they can most effectively be employed. For example, the study states that, because basing personnel in overseas locations is generally more expensive than basing them in the United States, DOD could consider configuring its forward-based forces overseas so that they can provide the initial response to a conflict, while placing in the United States the forces that will provide follow-up support. To inform the assessment of overseas forces, RAND examined how overseas posture translates to benefits, the risks that it poses, the cost of maintaining it, and how these costs would likely change if the U.S. overseas presence were to be modified in different ways—for example, by changing from a permanent to a rotational presence. DOD’s letter describing the January 2013 decision to maintain the command in Stuttgart was based on operational benefits that are not clearly laid out, and it is unclear how cost savings and economic benefits were considered in the decision. DOD’s analysis stated that significant savings and economic benefits would result if the command were relocated to the United States, and our independent analyses confirmed that significant savings are possible. Moreover, the decision does not explain why using a small contingent of personnel stationed forward would not mitigate operational concerns. Our analysis of travel patterns and staff composition raises questions about why the AFRICOM staff needs to be located overseas, because not all staff would benefit from being closer to Africa—especially when other combatant commands operate with their headquarters in the United States. Key principles that GAO has derived for economic analysis and cost estimating, as well as a DOD instruction containing principles for certain types of economic analysis, suggest that the department’s rationale should be detailed and the study underpinning it should be comprehensive and well-documented. Since making the decision to keep AFRICOM’s headquarters in Stuttgart, the Department of Defense has sought to fundamentally rethink how the department does business in an era of increasingly constrained fiscal resources. Until the costs and benefits of maintaining AFRICOM in Germany are specified and weighed against the costs and economic benefits of moving the command, the department may be missing an opportunity to accomplish its missions successfully at a significantly lower cost. To enable the department to meet its Africa-related missions at the least cost, GAO recommends that the Secretary of Defense conduct a more comprehensive and well-documented analysis of options for the permanent placement of the headquarters for AFRICOM, including documentation as to whether the operational benefits of each option outweigh the costs. These options should include placing some AFRICOM headquarters personnel in forward locations, while moving others to the United States. 
In conducting this assessment, the Secretary should follow key principles GAO has derived for such studies, as well as principles found in DOD Instruction 7041.3, to help ensure that the results are comprehensive, well-documented, accurate, and credible. Should DOD determine that maintaining a location in Stuttgart is the best course of action, the Secretary of Defense should provide a detailed description of why the operational or other benefits outweigh the costs and benefits of relocating the command. In written comments on a draft of this report, DOD stated that the 2012 CAPE study met the requirements of the House Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2012. DOD stated that the CAPE study was not intended to be a comprehensive analysis to determine the optimal location for AFRICOM’s headquarters. Rather, DOD believed that the study provided sufficient detail to support the specific questions posed in the National Defense Authorization Act. While the CAPE office did present the estimated costs of relocating AFRICOM’s headquarters, the National Defense Authorization Act directing DOD to conduct this study specifically urged DOD to conduct this basing review “in an open and transparent manner consistent with the processes established for such a major review.” As we state in the body of our report, the CAPE study did not provide sufficient detail to support its methodology and cost estimates for a third party to validate the study’s findings. Moreover, DOD’s own guidance on conducting an economic analysis states that such an analysis should be transparent and serve as a stand-alone document. DOD also stated that Secretary Panetta’s decision not to relocate the AFRICOM headquarters to the United States was based largely on the combatant commanders’ military judgment, which is not easily quantifiable. We recognize that military judgment is not easily quantifiable. However, we continue to believe that an accurate and reliable analysis should provide a more complete explanation of how operational benefits and costs were weighed, especially in light of the potential cost savings that DOD is deciding to forego. DOD partially concurred with our recommendation. DOD stated that to meet the requirements of the Budget Control Act, the Department of Defense will consider a wide range of options. If any of these options require additional analysis of the location of AFRICOM headquarters, DOD said that it will conduct a comprehensive and well-documented analysis. We continue to believe that such an analysis is needed. Because of the current tight fiscal climate and the Secretary of Defense’s continual urging that DOD identify additional opportunities for achieving efficiencies and cost savings, DOD should reassess the option of relocating AFRICOM’s headquarters to the United States. The department’s written comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Defense and the Secretary of State. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3489 or at pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
In addition to the contact named above, Guy LoFaro, Assistant Director; Nicole Harris; Charles Perdue; Carol Petersen; Beverly Schladt; Mike Shaughnessy; Amie Steele; Grant Sutton; and Cheryl Weissman made major contributions to this report.
A House Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2013 mandated GAO to conduct an analysis of options for the permanent placement of AFRICOM headquarters. While GAO's work was ongoing, DOD announced its decision to keep AFRICOM's headquarters at its current location in Stuttgart, Germany. This report addresses the following questions: (1) What courses of action did DOD consider for the permanent placement of AFRICOM headquarters? and (2) To what extent was DOD's decision to keep AFRICOM headquarters in Stuttgart based on a well-documented analysis of the costs and benefits of the options available to DOD? To meet these objectives, GAO analyzed documents provided by and interviewed officials from the Office of the Secretary of Defense; the Joint Staff; and AFRICOM and other combatant commands. The Department of Defense (DOD) has considered several courses of action for the placement of the headquarters for U.S. Africa Command (AFRICOM) but decided in early 2013 to keep it in Germany. When AFRICOM was created in 2007, DOD temporarily located its headquarters in Stuttgart, Germany, with the intent of selecting a permanent location at a later date. DOD's initial goal was to locate the headquarters in Africa, but this was later abandoned in part because of significant projected costs and sensitivities on the part of African countries. Subsequently, in 2008, DOD conducted an analysis that found that several locations in Europe and the United States would be operationally feasible and less expensive than keeping the headquarters in Stuttgart. A final decision, however, was deferred until 2012, when the Cost Assessment and Program Evaluation office completed its analysis. Subsequent to this analysis, in January 2013, the Secretary of Defense decided to keep AFRICOM's headquarters in Stuttgart. In announcing the decision, the Secretary noted that keeping AFRICOM in Germany would cost more than moving it to the United States but the commander had judged it would be more operationally effective from its current location, given shared resources with the U.S. European Command. GAO's review of DOD's decision to keep AFRICOM headquarters in Germany found that it was not supported by a comprehensive and well-documented analysis that balanced the operational and cost benefits of the options available to DOD. The 2012 study that accompanied the decision does not fully meet key principles for an economic analysis. For example, the study is not well-documented and does not fully explain the decisions that were made. Although details supporting DOD's cost estimates were not well-documented, the analysis indicated that moving the headquarters to the United States would accrue savings of $60 million to $70 million per year. The 2012 study also estimated that relocating the headquarters to the United States could create up to 4,300 additional jobs, with an annual impact on the local economy ranging from $350 million to $450 million, but it is not clear how this factored into DOD's decision. Beyond costs and economic benefits, the study lists several factors to be considered when determining where to place a headquarters. It ranks two of these factors--access to the area of responsibility and to service components--as critical. However, little support exists showing how the factors were weighted relative to each other. 
Moreover, the study describes how a small, forward-deployed headquarters element such as the ones employed by other U.S.-based combatant commands might mitigate operational concerns, but the study is silent about why this mitigation plan was not deemed a satisfactory option. In discussions with GAO, officials from the Central and Southern Commands stated that they had successfully overcome negative effects of having a headquarters in the United States by maintaining a forward presence in their theaters. In sum, neither the analysis nor the letter announcing the decision to retain AFRICOM headquarters in Stuttgart explains why these operational factors outweighed the cost savings and economic benefits associated with moving the headquarters to the United States. Until the costs and benefits of maintaining AFRICOM in Germany are specified and weighed against the costs and benefits of relocating the command, the department may be missing an opportunity to accomplish its missions successfully at a lower cost. To meet operational needs at lower costs, GAO recommends that DOD conduct a more comprehensive and well-documented analysis of options for the permanent placement of the headquarters for AFRICOM, including documentation on whether the operational benefits of each option outweigh the costs. DOD partially concurred with GAO’s recommendation, stating that the decision was based primarily on military judgment but that it will perform additional analysis of the location of the headquarters if the Secretary deems it necessary. GAO continues to believe such analysis is needed.
You are an expert at summarizing long articles. Proceed to summarize the following text: Some DI benefit recipients have incomes low enough to qualify them for SSI as well and receive benefits from both programs. Both programs provide benefits to individuals who are unable to engage in substantial gainful activity because of a severe physical or mental impairment. The standards for determining whether the severity of an applicant’s impairment qualifies him or her for disability benefits are set out in the Social Security Act and SSA regulations and rulings. SSA’s disability claims process is complex, multilayered, and lengthy. Potential beneficiaries apply for benefits at any one of SSA’s local field offices, where applications are screened for nonmedical eligibility: applicants for DI must meet certain work history requirements, and applicants for SSI must meet financial eligibility requirements. If the applicants meet the nonmedical eligibility requirements, their applications are forwarded to a state disability determination service (DDS), which gathers, develops, and reviews the medical evidence and prior work history to determine the individual’s medical eligibility; the DDS then issues an initial determination on the case. Applicants who are dissatisfied with the determination may request a reconsideration decision by the DDS. Those who disagree with this decision may appeal to SSA’s Office of Hearings and Appeals (OHA) and have the right to a hearing before one of the administrative law judges (ALJ) located in hearings offices across the country. Individuals who disagree with the ALJ decision may pursue their claim with SSA’s Appeals Council and ultimately may appeal to a federal district court. This process can be both time-consuming and confusing for the applicants and may compel many of them to seek help from an attorney. Obtaining representation for a pending case has become increasingly popular because disability representatives have been successful in obtaining favorable decisions for their clients upon appeal. In fiscal year 1997, about 70 percent of all cases decided at the ALJ-hearing level involved representatives. The fees attorneys representing DI and SSI applicants can charge are limited by law and must be approved by SSA. In order to be compensated, attorneys must file either a fee agreement—a formal contract signed by the applicant and the attorney setting the fee as a percentage of the applicant’s past-due benefits—or a fee petition that details the specific costs associated with the case. Past-due benefits are calculated by multiplying the monthly benefit amount by the total number of months from the month of entitlement up to, but not including, the month SSA effectuates the favorable disability decision. When fee agreements are filed, attorney fees are limited to 25 percent of the applicant’s past-due benefits, up to $4,000 per case. In fee petition cases, however, SSA can approve any fee amount as long as it does not exceed 25 percent of the beneficiary’s past-due benefits. For DI cases, SSA usually withholds the amount of the fee from the beneficiaries’ past-due benefits and pays the attorneys directly, in effect guaranteeing payment for the attorney. In SSI cases, however, SSA does not have the authority to pay attorneys directly, and only calculates the amount an attorney is due. Attorneys must instead collect their fees from the SSI recipients. 
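A minimal sketch of the fee agreement arithmetic described above follows. The 25 percent rate, the $4,000 cap, and the past-due benefit formula come from the requirements summarized in this statement; the monthly benefit amounts and month counts are hypothetical.

```python
# Worked example of the fee agreement limit: past-due benefits are the monthly
# benefit multiplied by the months from entitlement up to (but not including)
# the month SSA effectuates the favorable decision, and the attorney fee is
# capped at 25 percent of that amount or $4,000, whichever is less.

FEE_RATE = 0.25
FEE_CAP = 4_000  # statutory cap for fee agreement cases at the time

def past_due_benefits(monthly_benefit, months_of_entitlement):
    return monthly_benefit * months_of_entitlement

def fee_agreement_amount(past_due):
    return min(FEE_RATE * past_due, FEE_CAP)

# Hypothetical cases: monthly benefit and number of past-due months.
for monthly, months in ((800, 14), (1_100, 20)):
    past_due = past_due_benefits(monthly, months)
    fee = fee_agreement_amount(past_due)
    print(f"${monthly}/month x {months} months = ${past_due:,} past due; "
          f"approved fee ${fee:,.0f}")
# Because 25 percent of $16,000 equals $4,000, the cap binds only when
# past-due benefits exceed $16,000.
```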
Effective February 1, 2000, the Ticket to Work Act imposed a 6.3 percent user fee on attorneys for SSA’s costs associated with “determining and certifying” attorney fees on the basis of beneficiaries’ past-due benefits. This amount is deducted from the approved attorney’s fee. The act also directed us to study a number of issues related to the costs of determining and certifying the attorney fees, “efficiencies” available to reduce these costs, changes to the attorney fee requirements, and the new user fee. While SSA has been paying attorney fees for over 30 years, the payment process itself is inefficient, and the costs of the process are not known. Approving and paying attorney fees is a complex process that involves many steps; a number of staff in different units and locations; and various information systems that are not linked and that, therefore, require considerable manual intervention. Regarding the costs to administer this multistep process, we have not yet fully determined whether SSA’s past estimate appropriately captured the costs associated with administering attorney fees; however, the agency is currently developing a way to capture actual costs. Attorneys are compensated for their services through either a fee agreement or a fee petition. Attorneys told us that the fee agreement is usually an easier, quicker way to get paid and that, although the fee petition is useful, it is also a more cumbersome tool used primarily when potential fees exceed the statutory limits or when attorneys were unable to file a fee agreement at the beginning of a case. In 1999, fee agreements accounted for about 85 percent of attorney payments, and fee petitions accounted for the balance. Figure 1 shows the steps involved in processing attorney fee agreements. First, officials in SSA’s field offices or ALJs in OHA—depending on where the case is being determined—review fee agreements for DI and SSI cases to assess the reasonableness of the attorney fee charges. If a favorable decision is made on the case and SSA approves the fee agreement, both items—the applicant’s case and the fee agreement—are forwarded to a processing center for payment. All parties involved—SSA, the beneficiary, and the attorney—may question the amount of the attorney’s fee, and the fee may be changed if warranted. The Ticket to Work Act requires SSA to impose an assessment, or user fee, to pay for the costs the agency incurs when paying attorneys directly from a claimant’s past-due benefits. For calendar year 2000, the act established the user fee at 6.3 percent of the attorney fees; for calendar years after that, the percentage charged is to be based on the amount the Commissioner determines necessary to fully recover the costs of “determining and certifying” fees to attorneys, but no more than 6.3 percent. The actual costs of administering attorney fees are not yet known because SSA was not required to capture these costs in its information systems and did not have a methodology to do so. The 6.3 percent user fee found in the law was based on an estimate prepared by the agency. Documentation SSA provided us indicates that the percentage was computed by multiplying the numbers of fee petitions and fee agreements the agency processed in 1994 by the amount of time SSA determined it spent on various related activities. When data were not available on the volume of activities or the time spent on them, SSA used estimates. 
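The sketch below illustrates how the 6.3 percent assessment interacts with a fee agreement in a DI case: SSA withholds the approved fee from past-due benefits, deducts the assessment, and certifies the remainder to the attorney. The rates and the $4,000 cap come from the statute as described above; the past-due benefit amount is hypothetical.

```python
# Sketch of the Ticket to Work Act assessment in a DI fee agreement case.
# The past-due benefit amount is a placeholder.

USER_FEE_RATE = 0.063  # calendar year 2000 assessment rate

def split_past_due(past_due, approved_fee):
    """Return (payment to attorney, user fee retained, paid to beneficiary)."""
    user_fee = USER_FEE_RATE * approved_fee
    to_attorney = approved_fee - user_fee
    to_beneficiary = past_due - approved_fee
    return to_attorney, user_fee, to_beneficiary

past_due = 22_000                            # hypothetical past-due benefits
approved_fee = min(0.25 * past_due, 4_000)   # fee agreement limit
to_attorney, user_fee, to_beneficiary = split_past_due(past_due, approved_fee)

print(f"Approved fee ${approved_fee:,.0f}: attorney receives ${to_attorney:,.2f}, "
      f"SSA retains ${user_fee:,.2f}, beneficiary receives ${to_beneficiary:,.0f}")
# Approved fee $4,000: attorney receives $3,748.00, SSA retains $252.00,
# and the beneficiary receives the remaining $18,000.
```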
The agency’s overall cost estimate included both the time spent by the ALJs reviewing documentation to support the attorney fees—that is, the fee petitions and fee agreements—as well as the processing centers’ costs associated with calculating the fees, choosing the notice language, and preparing the notices. In addition, the agency included the cost of administering the user fee itself. We recently received information on the basis for SSA’s 6.3 percent user fee calculation and have only begun to analyze the assumptions the agency used to compute it. In order to comply with the Ticket to Work Act, SSA is currently in the process of developing a methodology to capture the current costs of administering the attorney fee provisions. These costs could then provide the foundation for the agency’s decisions about what the rate should be to achieve full recovery of costs. SSA has established a work group to identify the components of administering attorney fees and to develop its new methodology. Thus far, the work group has identified four components associated with the cost of administering attorney fees: (1) the time that SSA field office staff spend informing claimants that they are entitled to legal representation when filing an appeal; (2) the time it takes an ALJ to review and approve the fee; (3) the charges incurred by SSA’s Office of Systems to program systems to track attorney fee cases and related computing time to generate a payment file/tape for Treasury to use to pay the attorney; and (4) the process for calculating the attorney fee, entering relevant attorney and other key data into SSA’s information systems, and certifying related amounts for payment. In April and May of this year, SSA work group officials told us that they do not plan to capture cost information from the first two components because it would be time-consuming to do so, and the methods currently available to SSA for capturing these two types of costs may not produce reliable results. For the third component, SSA officials told us they can readily gather cost information related to time spent on programming SSA’s systems to track attorney fees. However, SSA does not have a cost allocation methodology in place to determine related computing time for processing attorney fees. SSA officials indicated that computing time would constitute an insignificant portion of SSA’s total costs to administer attorney fees. To capture data on the fourth component, SSA modified one of its information systems in February 2000 to determine the number of attorney fee cases it administers. Using the number of cases it processes, SSA is working on a methodology to estimate the costs involved in this fourth component for paying attorneys. SSA plans to have this cost data available by the end of fiscal year 2000. However, in commenting on a draft of this statement, SSA officials told us that they do plan to capture costs for the second component—the time it takes the ALJ to review and approve the fee. In reviewing the law, the cost of ALJ time spent reviewing and approving fees appears to be part of the cost of “determining and certifying” fees and may represent a significant portion of the total costs. As SSA determines the ALJ costs in its current approach, it will need an allocation methodology that accurately allocates the costs associated with DI cases for which SSA is paying an attorney directly to those cases. 
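SSA's 6.3 percent estimate was reportedly built by multiplying 1994 case volumes by the time spent on related activities. The sketch below shows that style of computation in general terms only; every volume, time, and labor rate is a placeholder, and the labor rates in particular are an assumption added here to convert staff time into dollars. The ratio of such a cost total to the attorney fees SSA certifies in a year is what would drive a full-cost-recovery assessment rate.

```python
# Hedged sketch of a volume-times-time cost estimate for administering
# attorney fees. Every number below is a placeholder, not an SSA figure.

activities = [
    # (activity, annual case volume, minutes per case, labor cost per minute)
    ("ALJ review of fee agreements and petitions", 175_000, 12, 1.50),
    ("Calculate fee and choose notice language",   175_000, 20, 0.60),
    ("Prepare notices and certify payment",        175_000, 15, 0.60),
]

for name, volume, minutes, rate in activities:
    print(f"{name}: ${volume * minutes * rate:,.0f}")

total_cost = sum(volume * minutes * rate for _, volume, minutes, rate in activities)
print(f"Estimated annual administrative cost: ${total_cost:,.0f}")
```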
Attorneys we talked with told us they are concerned now that they are paying more than their fair share of the cost of the process. Attorneys have expressed concern about the length of time it takes SSA to process their fees and have questioned the appropriateness of charging a user fee for a service that takes so long. In regard to the user fee, you specifically asked us to look at issues surrounding (1) linking the amount of the user fee to the timeliness of the payment to the attorney and (2) expressing the user fee as a fixed amount instead of a percentage. When considering one or both of these changes, certain policy and administrative implications would need to be addressed. According to the National Organization of Social Security Claimants’ Representatives (NOSSCR), an interest group for Social Security lawyers, as well as individual attorneys and SSA officials, SSA often has trouble making timely payments to attorneys. Processing attorney fees represents a small part of SSA’s overall activities—in 1999, we estimate that SSA processed about 6 billion beneficiary payments and SSA reported it processed less than 200,000 attorney payments. Additionally, SSA officials told us that they view responsibilities such as paying beneficiaries as more directly linked to their mission than paying attorneys. As a result, SSA has not routinely gathered and monitored performance data on the length of time it has taken to pay attorneys. However, recently tabulated data show that from January 1995 through May 2000, only 10 percent of attorney fees for fee agreements were paid within 30 days from the time the beneficiary is put on current-pay status to payment of fees. As figure 2 shows, there is a wide range of elapsed processing times for payments. To address timeliness concerns, a recent legislative proposal (H.R. 4633) would permit the user fee to be assessed against attorneys only if SSA pays attorneys within 30 days from the time of initial certification of benefits. Figure 2 above shows that from 1995 to the present, SSA has only been able to meet this timeframe in 10 percent of the cases. However, certain issues related to this proposal should be clearly understood by both SSA and the attorneys. All parties involved must clearly understand at what point in the process the clock starts ticking, when it stops, and what activities are performed during this period. When considering the current legislative proposal or contemplating other options, concerned parties need to weigh the attorneys’ right to be paid in a timely manner against SSA’s need to ensure the accuracy of its payments. While SSA’s current process is inefficient and the agency can make some improvements, not all factors are within SSA’s control, such as awaiting fee petition information from attorneys and coordinating workers’ compensation offsets. The current legislative proposal states that the clock starts ticking with initial certification of benefits—also referred to as the point when the beneficiary is put in current-pay status. At this point, SSA might be developing the case for final calculation of past-due benefits and might not have control over processing times. Attorneys need to realize that because the proposal starts the clock with initial certification, and additional work may still need to be done to develop the case, the total elapsed time from favorable decision to attorney fee payment might not actually be decreased. 
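The timeliness measure behind figure 2 can be expressed as the elapsed days from the date the beneficiary is placed in current-pay status to the date the attorney fee is paid, and the share of cases falling within 30 days. The sketch below uses hypothetical case records; only the 30-day threshold and the definition of the interval come from the discussion above.

```python
# Sketch of the timeliness measure: elapsed days from current-pay status to
# attorney fee payment, and the share of cases paid within 30 days.
# The sample records are hypothetical.

from datetime import date

# (current-pay status date, attorney fee payment date) for hypothetical cases
cases = [
    (date(2000, 1, 10), date(2000, 2, 4)),    # 25 days
    (date(2000, 1, 15), date(2000, 4, 20)),   # 96 days
    (date(2000, 2, 1),  date(2000, 7, 14)),   # 164 days
    (date(2000, 3, 3),  date(2000, 3, 30)),   # 27 days
    (date(2000, 3, 20), date(2000, 9, 1)),    # 165 days
]

elapsed = [(paid - start).days for start, paid in cases]
within_30 = sum(1 for days in elapsed if days <= 30)

print(f"Median elapsed days: {sorted(elapsed)[len(elapsed) // 2]}")
print(f"Paid within 30 days: {within_30} of {len(cases)} "
      f"({within_30 / len(cases):.0%})")
```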
Information on these issues needs to be clearly communicated or the frustration and complaints with the process are likely to continue. In addition, having the clock start before SSA has complete control over the process could create perverse incentives that may actually delay payments to attorneys. Because SSA does not have control over all the activities that occur following initial certification of benefits, it is conceivable that some attorneys might view this as an opportunity to delay providing needed information to SSA in hopes of avoiding the user fee. Aside from the delays that are outside its control, SSA is aware that there are steps it could take to make the process more efficient. For example, agency officials have said that instituting direct deposit of attorney fees is more efficient; it could shorten the time it takes for the fee payment to reach the attorney, and could eliminate delays that result when attorneys change their addresses without notifying SSA. SSA currently pays 65 percent of beneficiaries by means of direct deposit and wants to expand this approach to all its transactions. Possible improvements to SSA’s information systems may also help reduce processing times. For instance, enhancements to SSA’s information systems could eliminate much of the manual workload involved in processing and certifying attorney fees. As stated earlier, various information systems are currently used to process SSA’s attorney fee workload associated with DI cases. These systems capture data on various aspects of the disability claims process, but are not linked to one another and, thus, require some manual intervention. As a result, without linked systems or a more streamlined process it is difficult for SSA to capture the data required to measure the timeliness of the total range of activities involved in paying attorneys. To efficiently administer user fees that are based on timeliness of fee payments to attorneys, SSA will need to develop new software code to link these stand-alone information systems, or develop a new system to process the entire attorney fee workload. SSA currently has plans for systems enhancements to improve the attorney fee process, which should help improve case processing time. According to SSA, these enhancements would automate the steps in order for systems to recognize attorney fee agreement cases, compute and withhold the 6.3 percent user fee, pay the actual attorney fee, and release the remainder of the past-due benefits immediately to the beneficiary. If SSA were to make the proposed system enhancements to process attorney fees, it may be advisable to revisit requirements for how quickly the agency could be expected to process an attorney fee. A number of issues surround the question of whether the user fee should be expressed as a fixed amount or as a percentage, and these are linked in large part to the question of what costs the user fee should cover. On one hand, expressing the user fee as a percentage of the attorney fee, as is currently the case, assumes that the costs SSA incurs in processing user fees grow in proportion to the fees. This could be the case, for example, if an ALJ spends extra time reviewing a fee petition in cases involving more activity and larger fees. On the other hand, expressing the user fee as a fixed amount assumes that the costs of processing the attorney fees are relatively the same and, therefore, a higher attorney fee does not translate into higher processing costs. 
This could be the case if the costs are fixed and do not vary from case to case. To adequately weigh the relative merits of both options, we need to further study the cost estimate information SSA used to develop the 6.3 percent user fee, the cost data that SSA is currently capturing, and the percentage of DI versus SSI cases processed. This analysis will be included in our final report, due to the Congress by the end of this year. Attorneys, NOSSCR, and advocates have discussed various changes related to attorney fees: issuing joint checks for past-due benefits to both the attorney and the beneficiary, raising the $4,000 limit on attorney fees allowable under the fee agreement process, and extending the statutory withholding of attorney fees to the SSI program. Each of these proposals involves trade-offs that should be considered before its implementation. Under the current process, when an individual receives a favorable DI decision, SSA makes an effort to issue the beneficiary’s past-due benefits as soon as possible and withholds the amount of the attorney fee. After the fee is processed, Treasury issues a check to the attorney. Individual attorneys have suggested changing this process from one in which two separate payments are made to one in which a single check for the total amount of the past-due benefits—made out jointly to the beneficiary and the attorney—is sent directly to the attorney. The attorney would deposit the check into an escrow account and pay the past-due benefits, minus his or her fee, to the beneficiary. Attorneys told us that joint checks would help expedite the attorney fee process because the beneficiary’s money and attorney fees would be linked, and SSA views paying beneficiaries as a priority. Such a change could have serious policy implications, however. For instance, SSA currently attempts to pay the beneficiary as soon as possible following a favorable decision. Issuing joint checks might delay payment to the beneficiary because the beneficiary would have to wait until after the attorney deposited the money into an escrow account to receive benefits. In addition, when SSA controls the payment, it is assured that no more than 25 percent is deducted from the past-due benefits. Sending joint checks to the attorney would reduce SSA’s ability to enforce attorney fee limits and could increase the risk that attorneys would short change beneficiaries. In turn, control over payment to the beneficiary would shift to the attorney, while accountability for the payment would remain with SSA. In addition, a number of administrative issues dealing with the implementation of joint checks would need to be addressed. First, SSA needs to know when the beneficiary receives his or her benefits. SSA is responsible for sending out benefit statements, SSA-1099s, to beneficiaries because sometimes Social Security benefits are taxable. With joint checks, SSA might have difficulty tracking when beneficiaries received their benefits. If attorneys were responsible for paying the past-due benefits from their escrow accounts, SSA would need a system certifying when—in which tax year—the beneficiary was paid. This reporting system would be needed to ensure the accuracy of the SSA-1099s. Another administrative consideration is that the current information system used for processing DI cases—MCS—would need to be modified so that joint payments could be issued. As noted earlier, this system is designed to effectuate payments to the beneficiary or his or her representative payee only. 
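The joint-check proposal discussed above changes who disburses the past-due benefits. The sketch below contrasts the current two-payment flow with a joint-check flow; the dollar amounts are hypothetical, and because the discussion above does not address whether the 6.3 percent assessment would still be collected under a joint check, the joint-check sketch leaves it out.

```python
# Sketch contrasting the current DI payment flow (SSA withholds the fee and
# pays attorney and beneficiary separately) with the joint-check proposal
# (the attorney deposits the full past-due amount in escrow and disburses it).

def current_flow(past_due, approved_fee, user_fee_rate=0.063):
    """Current flow: SSA withholds the fee, deducts the assessment, pays each party."""
    return {
        "SSA pays beneficiary": past_due - approved_fee,
        "SSA pays attorney": approved_fee * (1 - user_fee_rate),
        "SSA retains (user fee)": approved_fee * user_fee_rate,
    }

def joint_check_flow(past_due, approved_fee):
    """Proposed flow: the attorney receives a joint check and disburses from escrow."""
    return {
        "Joint check deposited to attorney escrow": past_due,
        "Attorney remits to beneficiary": past_due - approved_fee,
        "Attorney retains as fee": approved_fee,
    }

past_due, approved_fee = 22_000, 4_000  # hypothetical amounts
for label, flow in (("Current process", current_flow(past_due, approved_fee)),
                    ("Joint-check proposal", joint_check_flow(past_due, approved_fee))):
    print(label)
    for step, amount in flow.items():
        print(f"  {step}: ${amount:,.2f}")
```

The side-by-side output makes the policy trade-off visible: the totals are the same, but under the joint-check flow the beneficiary's money passes through the attorney's escrow account rather than coming directly from SSA.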
Another change being discussed is raising the $4,000 cap on attorney fees for the fee agreement process. As I explained earlier, under the fee agreement process, attorneys can receive 25 percent of the past-due benefits up to $4,000, whichever is less. By statute, the Commissioner of SSA has the authority to adjust the cap at his or her discretion. Debate on this issue centers around how legal representation for DI applicants might be affected. Attorneys we spoke with told us that higher fees would increase the attractiveness of DI claims. According to this argument, attractive fees could result in more attorneys for DI cases, which could increase the rate of representation for this population. Further, an increased rate of representation might result in more favorable decisions for DI applicants. The opposing argument is that representation is readily available to DI applicants. According to an SSA official, the agency has not raised the cap because it determined that a higher cap was not needed to support representation. In either case, evaluating this issue is difficult in the absence of such data as historical and current representation rates and without knowing the proportion of applicants who could not secure representation and why. A final change being discussed would be to expand withholding to the SSI program. SSA currently calculates the amount of attorney fees due in SSI cases but does not withhold the fee from beneficiaries’ past-due benefits. Current law explicitly differentiates between DI and SSI regarding attorney fees, stating that withholding and paying attorney fees is only permissible for DI cases. Many believe that extending withholding to SSI is appropriate because it would increase representation for SSI applicants and alleviate a perceived equity imbalance for attorneys who represent both DI and SSI applicants. Because there is no guarantee that attorneys will receive fees due to them for SSI cases, some attorneys told us that they are reluctant to accept SSI cases. The attorneys maintained that expanding withholding to SSI would increase the attractiveness of the cases, and representation would increase. In fact 1999 data show that at the hearing level, applicants for DI and combined DI/SSI benefits were more likely to be represented by an attorney than those applying for SSI only. Additionally, according to an official from an association of ALJs, expanding withholding to SSI would be beneficial to the applicants because cases with representation are better presented and have a better chance of receiving a favorable decision than nonrepresented cases. Proponents of extending withholding to SSI also told us that the current situation is unfair to attorneys representing SSI applicants. According to this view, it is inequitable for attorneys to be guaranteed payment for DI cases but not for SSI cases. As with the DI cases, the SSI recipient has an obligation to pay for his or her legal services; however, in DI cases, SSA ensures that this happens. For SSI cases, the attorney must obtain payment directly from the beneficiary. The opposing view of extending withholding to SSI is based on the relative economic status of DI beneficiaries and SSI recipients. SSI recipients tend to be poorer than DI beneficiaries, and some advocates have expressed concern that taking money from a recipient’s past-due benefits to pay attorneys would be detrimental to the recipient’s economic well-being. 
SSI recipients often have many financial obligations, such as overdue rent and utility bills that need to be paid. Advocates maintain that deducting the attorney fee from the past-due benefits might make it impossible for recipients to pay these bills. However, if an attorney successfully appeals a case for an SSI recipient, the recipient should be in a better position financially. From an administrative standpoint, if SSA was required to withhold attorney fees for SSI cases, it will need to develop new information systems or significantly modify existing systems to process this new workload. However, as with any system effort, SSA’s ability to carry out this task will depend on its available resources and the priority that it gives to this initiative. Mr. Chairman, this concludes my prepared statement. At this time, I will be happy to answer any questions you or other Members of the Subcommittee may have. For information regarding this testimony, please contact Barbara Bovbjerg at (202) 512-7215. Individuals who made key contributions to this testimony include Yvette Banks, Kelsey Bright, Kay Brown, Abbey Frank, Valerie Freeman, Valerie Melvin, Sheila Nicholson, Daniel Schwimer, and Debra Sebastian. (207092)
GAO discussed issues involving the Social Security Administration's (SSA) process for paying attorneys representing applicants for disability benefits, focusing on three areas of the attorney payment process: (1) the process itself, including the costs of processing the payments; (2) possible changes to the way the user fee is charged; and (3) changes being considered for the attorney fee payment process overall. GAO noted that: (1) while SSA has been paying attorney fees from beneficiaries' past-due benefits for over 30 years, the payment process remains inefficient, and little historical data are available to help GAO analyze proposed changes; (2) under the current procedures, the inefficiencies in processing fee payments to attorneys result from using a number of different staff in different units and various information systems that are not linked, and are not designed to calculate and process all aspects of the attorney fee payment, thus necessitating manual calculations; (3) the Ticket to Work Act includes a provision that requires SSA to charge an assessment to recover the costs of this service; (4) GAO has only begun to analyze the estimate that was used as a basis for the user fee, and SSA does not know the actual cost it incurs in processing attorney fees; (5) however, the agency is developing a methodology to better capture these costs; (6) SSA has trouble with making timely payments to attorneys, and some have questioned the appropriateness of charging a user fee for a service that takes so long; (7) a recent legislative proposal calls for eliminating the user fee if SSA does not pay the attorney within 30 days; (8) in many cases, it will be difficult for SSA to meet these timeframes; (9) attorneys need to realize that, while it is possible for SSA to improve the efficiency of the process it uses to pay them, some factors that delay their payments are outside SSA's control and are unlikely to change at this time; (10) three possible changes to the attorney fee payment process include whether: (a) joint checks for past-due benefits should be issued to the beneficiary and the attorney; (b) the dollar limit on certain attorney fees should be raised; and (c) SSA's attorney fee payment process should be expanded to the Supplemental Security Income program; (11) these changes would have both policy and administrative implications that need to be considered; (12) some of the changes could increase attorney representation for disability applicants, according to attorneys GAO spoke with; (13) however, not everyone agrees with this premise; (14) moreover, there are some drawbacks to these changes; and (15) SSA indicated it may need to make significant modifications to its information systems to issue joint checks or pay attorneys who represent SSI recipients.
You are an expert at summarizing long articles. Proceed to summarize the following text: In fiscal year 2014, DOD’s prime contractors subcontracted for $133 billion in goods and services to support DOD’s missions, of which $44 billion, or 33 percent, was awarded to small businesses. The Federal Acquisition Regulation (FAR) generally requires proposed prime contractors to have individual subcontracting plans in place for contracts (including modifications) of more than $650,000—or $1.5 million for construction contracts—whenever subcontracting opportunities exist. These plans are to document subcontracting goals as a specific dollar amount planned for small business awards and as a percentage of total available subcontracting dollars to various socioeconomic categories of small businesses. The plans also are to identify the types of products and services suitable for subcontracting awards. Under the authorizing statute, as implemented in regulation, each participant in the Test Program negotiates and reports on subcontracting goals and achievements for a specific fiscal year on a plant, division, or corporate-wide basis. A comprehensive plan may cover a large number of individual contracts. For example, one participant told us about a plan that covered more than 3,000 contracts that otherwise would require individual subcontracting plans. Reporting small business subcontracting activity in a comprehensive plan means that less data may be available on the subcontracting activities for specific contracts or programs. In addition, comprehensive subcontracting plans are to include various initiatives to enhance small business subcontracting opportunities through specific programs or other actions. According to a DOD official, these initiatives are not specific to any one contract, but can be completed across the entire scope of defense work the prime contractor performs. Table 1 highlights the key differences between the comprehensive subcontracting plans used in the Test Program and individual subcontracting plans. Current Test Program eligibility is limited by statute to defense contractors that performed at least three DOD prime contracts for supplies and services worth a combined value of at least $5 million during the preceding fiscal year. In addition, the contractor must have achieved a small disadvantaged business subcontracting participation rate of at least 5 percent during the preceding fiscal year. Participation in the Test Program is voluntary and, as shown in table 2, there are currently 12 participants. According to data in the Federal Procurement Data System—Next Generation, the federal government’s contract reporting system, 8 of these contractors are ranked among the top 10 U.S. defense contractors, based on contract dollars obligated in fiscal year 2014. According to DOD officials, the DOD Office of Small Business Programs (OSBP) is responsible for overseeing the Test Program, but delegates the management and oversight of comprehensive subcontracting plan annual negotiations and performance evaluation to the Defense Contract Management Agency (DCMA). DCMA is responsible for reviewing and approving Test Program participants’ proposed comprehensive subcontracting goals and initiatives to ensure that they are challenging yet realistic. DCMA is also responsible for reviewing the achievements of the participants at the end of the fiscal year and rating their performance. 
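To make the thresholds above concrete, the following sketch encodes the individual-plan trigger (contracts over $650,000, or $1.5 million for construction, when subcontracting opportunities exist) and the Test Program eligibility criteria (at least three DOD prime contracts worth a combined $5 million in the preceding fiscal year and a small disadvantaged business subcontracting rate of at least 5 percent). This is a minimal illustration under those stated figures, not DOD or FAR tooling; the function names and inputs are hypothetical.

```python
# Illustrative sketch only -- not DOD or FAR tooling. Encodes the thresholds
# described above: individual subcontracting plans for contracts over $650,000
# ($1.5 million for construction) when subcontracting opportunities exist, and
# Test Program eligibility of at least three DOD prime contracts worth a combined
# $5 million in the preceding fiscal year plus a 5 percent small disadvantaged
# business subcontracting rate.

def needs_individual_plan(contract_value, is_construction, has_subcontract_opportunities):
    """Return True if a contract would generally require an individual subcontracting plan."""
    threshold = 1_500_000 if is_construction else 650_000
    return has_subcontract_opportunities and contract_value > threshold

def meets_test_program_eligibility(prior_year_contract_values, sdb_participation_rate):
    """Check the statutory Test Program eligibility criteria described in the report."""
    return (len(prior_year_contract_values) >= 3
            and sum(prior_year_contract_values) >= 5_000_000
            and sdb_participation_rate >= 0.05)

# Example: a $2 million services contract with subcontracting opportunities,
# and a contractor with three prior-year DOD prime contracts and a 6 percent rate.
print(needs_individual_plan(2_000_000, is_construction=False, has_subcontract_opportunities=True))  # True
print(meets_test_program_eligibility([3_000_000, 1_500_000, 800_000], 0.06))  # True
```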
The Test Program does not have an overall measure for demonstrating success in creating small business subcontracting opportunities; instead, metrics are established for each individual participant. The performance of participants in the Test Program is measured by their achievement of negotiated goals and initiatives. Congress has extended the program eight times but has not made the program permanent, in part because of a lack of data on program performance. The latest extension was in the National Defense Authorization Act for Fiscal Year 2015, in which Congress temporarily extended the Test Program to December 31, 2017, and made a number of amendments to the program. These amendments included, among other things: an increase in the eligibility requirements; additional reporting requirements on subcontracting activities and costs to increase visibility at the contract, program, and military department levels; and an additional consequence for failure to make good faith efforts to comply with the program. DOD commissioned reviews of the Test Program from two consulting groups in 2002, 2007, and 2013. These reviews examined the performance of Test Program participants against their goals and initiatives and whether administrative savings had been achieved. According to DOD officials, DOD has not released these reviews publicly or submitted them to the Congress. All of the reviews found that the program resulted in administrative cost savings and enhanced subcontracting opportunities for small businesses. Some in the small business community have publicly raised concerns about the Test Program. The primary concern is that the lack of data available to evaluate the program precludes a determination of the program’s effectiveness and impact on small businesses. Specifically, some small business advocates believe that more data are needed to evaluate whether the program has resulted in more awards in areas such as innovative technology research that traditionally have not been made available to small businesses. Prior DOD reviews estimated that use of the Test Program resulted in the avoidance of millions of dollars in administrative costs for both participants and DOD and may offer other benefits. Significant one-time administrative costs could result if the Test Program were canceled or allowed to expire and comprehensive subcontracting plans could no longer be used. Test Program participants and DOD officials we interviewed stated that the use of comprehensive subcontracting plans results in benefits other than administrative cost avoidance as well. However, DOD has not taken action to address the program’s status. DOD’s reviews estimated that the program resulted in millions of dollars in cost avoidance for the program’s participants. According to these reviews, cost avoidance is enabled primarily by the use of a single comprehensive subcontracting plan for multiple contracts rather than an individual subcontracting plan for each contract. Our examination of the methodologies and data used in these reviews, as well as our own analysis of more recent data on contracts covered by comprehensive subcontracting plans, supports these conclusions. Table 3 shows the total estimated annual administrative costs avoided by the participants, according to the DOD reviews.
Some of the Test Program participant officials we interviewed explained that they have not quantified their administrative cost avoidance under the program; however, they stated that savings likely accrued from negotiating a single plan for multiple contracts as opposed to individual plans. One participant official stated that utilizing comprehensive subcontracting plans also allows them to use fewer people for the administrative tasks of developing and monitoring subcontracting plans. Certain participants noted that the resources that would have gone to the development and administration of individual plans can be used instead for increased small business outreach activities such as more small business forums. Expansion of the Test Program from division-level to company-level could increase the cost avoidance for certain participants, but officials from one division-level participant we interviewed stated that the continued test status of the program served to inhibit that expansion. The 2007 DOD review also found the cost avoided by Test Program participants and DOD would increase substantially if more prime contractors used the program, but that this was unlikely due to their reluctance to enter the program because of its test status. In addition to the costs avoided by program participants, DOD also may benefit from avoiding administrative costs as a result of the Test Program. The 2007 DOD review found that DOD benefited from negotiating, administering, and monitoring consolidated small business subcontracting plans rather than multiple individual plans. For example, for fiscal year 2005, the review estimated that the administrative cost avoided by DOD under the Test Program was at least $45 million. Defense officials stated that additional participation in the Test Program by other prime contractors could increase the department’s cost avoidance, but stated that they are not considering new participants for the program due to its test status. DOD’s 2013 review noted that the program participants at that time had about 7,000 contracts and subcontracts that would require individual subcontracting plans if the Test Program did not exist. The estimated cost to convert the contracts to individual subcontracting plans if the Test Program were canceled or allowed to expire was $21.7 million. The review also noted that the combined cost for the participants to prepare, submit and negotiate their comprehensive subcontracting plans on an annual basis under the Test Program was approximately $660,000. In addition, most of the current participants have been in the Test Program for at least 10 years; two have been in the program since its inception 25 years ago. As a result, converting from comprehensive subcontracting plans to individual subcontracting plans may require participants to add personnel and change their current administrative systems. One participant noted that reverting to individual subcontracting plans would require an additional 44 employees to handle the additional administrative workload and an estimated $2 million in system changes. Our analysis of similar data supports these estimates. We analyzed data from DCMA for contracts that were active as of March 2015 for 6 of the 12 Test Program participants, including those with the largest number of contracts, and found a total of 3,299 contracts that would require individual subcontracting plans in the absence of the Test Program. 
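The conversion-cost figures cited above reduce to a simple calculation: the number of contracts needing individual plans, times the labor hours to prepare each plan, times an hourly labor rate. For example, the 2013 review's estimate of $21.7 million for about 7,000 contracts implies an average cost on the order of $3,100 per individual plan. The sketch below shows that arithmetic; the hours-per-plan and hourly-rate values are hypothetical placeholders (not DOD's actual estimates, which are not reproduced here) chosen so the example lands on the reported total.

```python
# Minimal sketch of the conversion-cost arithmetic described above.
# The hours-per-plan and hourly-rate values are hypothetical placeholders,
# not DOD's actual estimates, which are not reproduced in this excerpt.

def individual_plan_conversion_cost(num_contracts, hours_per_plan, rate_per_hour):
    """One-time cost to write an individual subcontracting plan for each contract."""
    return num_contracts * hours_per_plan * rate_per_hour

# Example with placeholder assumptions that reproduce the 2013 review's total:
# about 7,000 contracts at 31 hours per plan and $100 per labor hour.
cost = individual_plan_conversion_cost(7_000, 31, 100)
print(f"${cost:,}")  # $21,700,000, or roughly $3,100 per contract
```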
Using estimates provided by DOD for the cost and number of hours required to develop an individual subcontracting plan, we determined that these six participants would need to spend between $6.3 million and $9.5 million to convert existing contracts to individual subcontracting plans. One participant we interviewed estimated that, for fiscal year 2013, the cost to convert all of the contracts under the Test Program to individual plans would have been approximately $6.6 million. The cost to this participant to prepare, review, and negotiate its fiscal year 2013 comprehensive subcontracting plan was $159,000. As is the case with the Test Program participants, DOD would likely experience an increase in costs for negotiating, administering, and monitoring individual small business subcontracting plans if the program were to end. Defense officials noted that DOD would have to hire additional personnel in DCMA and the military services to manage the thousands of additional individual subcontracting plans. These personnel—who would include contracting officers, small business specialists, and cost analysts—all require specific levels of expertise and training depending on the cost and complexity of the contract. According to DOD officials, it would be challenging to provide the amount of resources required with the limited acquisition workforce DOD employs and available budgets for hiring new personnel. According to some of the Test Program participants we interviewed, the use of comprehensive subcontracting plans as opposed to individual subcontracting plans also provides non-monetary benefits. For example, certain participants stated that they benefit from the approach of consolidating small business subcontracting, as it allows them to consider or leverage small business subcontracting opportunities across the whole organization rather than on a contract-by-contract basis. Some participants stated that as a result of this approach they have increased small business subcontracting outside of their defense contracts by utilizing the same small businesses cultivated under the Test Program for other work. All of the participants we interviewed noted that the issue of small business subcontracting has greater visibility and awareness with corporate leadership as a result of the Test Program and that their entities might be less inclined to award subcontracts to small businesses in its absence. DOD officials agreed that, in addition to administrative cost avoidance, the Test Program gives them leverage they may not have in negotiating individual subcontracting plans. The ability to require the contractor to look for opportunities across multiple contracts increases the likelihood of small business outreach and subcontract awards. Officials stated, however, that there are some concerns within the department about the Test Program. For example, they stated that reporting of small business subcontracting activity at a division or corporate level reduces the department’s visibility into small business activity on a contract or programmatic level. In addition, DOD officials said that the continuing test status of the program has made it difficult for OSBP and DCMA to develop the policies and guidance they believe are needed to enhance program activities. The 2007 review strongly recommended that DOD work with Congress to make the Test Program permanent.
According to DOD officials, DOD has not acted on this recommendation, whether by drafting a legislative proposal or taking other actions, primarily due to concerns that the program, at least as structured prior to the latest legislative changes, may limit visibility into small business subcontracting activities for individual programs or contracts. However, implementation of the increased reporting requirements legislated in the National Defense Authorization Act for Fiscal Year 2015 may alleviate these concerns, as the requirements include reporting on small business subcontracting activities at, among other things, the contract and major defense acquisition program level. Working with Congress to address the program’s status, for example by providing information on the effectiveness of the Test Program as identified in the three DOD-commissioned reviews and our analysis, could help eliminate the uncertainty associated with the program. In reviewing participants’ performance in the Test Program, DCMA looks at whether participants have achieved the negotiated initiatives and goals associated with their comprehensive subcontracting plans, and assigns an overall rating for that fiscal year’s performance. Our analysis found that 87 percent of DCMA’s reviews indicated that participants made acceptable progress on their initiatives to enhance small business opportunities, but that participants’ performance in meeting their negotiated subcontracting goals varied. Our analysis also found that while participants did not always meet individual goals, their aggregate performance resulted in subcontract awards to small businesses that exceeded aggregate goals by approximately $5.4 billion from fiscal years 2006 through 2013. When compared to the performance of contractors not participating in the program (nonparticipants), Test Program participants have lower percentages of dollars awarded to small businesses. Participants we interviewed explained that this is due, among other things, to the nature of the contracts they perform and the way the Test Program tracks achievement of goals. Finally, DCMA’s annual performance reviews, which assess achievements against both negotiated initiatives and goals, have been largely positive. Participants made acceptable progress, as assessed by DCMA, on their negotiated initiatives in 74 of the 85 reviews we analyzed—an 87 percent success rate. Our analysis indicates that completion of these initiatives has resulted in tangible subcontracting opportunities for small businesses. DOD officials stated that the achievement of these initiatives is not tied to any one contract within the comprehensive subcontracting plan; any of the defense work being subcontracted by a prime contractor can be used to meet the initiative. According to a participant and DOD officials, the initiatives offer a distinct advantage over individual subcontracting plans. Examples of initiatives include redirecting subcontracts from large businesses to small businesses; targeting increased small business subcontracts in technical fields such as integrated circuits, computer information technology products, and engineering and technical services; and participating in the Small Business Innovation Research (SBIR) and Mentor-Protégé Programs. Some of these initiatives target the socioeconomic categories of small businesses for which the participants had the most difficulty achieving performance goals.
For example, one participant failed to meet its subcontracting goals for the small disadvantaged business category over a 3-year period. For each of these years, the participant had an initiative to increase performance in that category. According to our analysis of DCMA reviews, participants are generally successful at achieving their initiatives. In our review period of fiscal years 2006 through 2013, we found that 16 initiatives resulted in the redirection of approximately $93 million in subcontracts from large businesses to small businesses; participants achieved a 72-percent success rate in meeting milestones for increasing small business subcontracts in targeted industries; 24 mentor-protégé initiatives resulted in 61 new mentor-protégé relationships between participants and small businesses; and 11 SBIR projects identified 83 new small business suppliers. The 2013 DOD review estimated that initiatives completed by Test Program participants could amount to as much as $1.8 billion per year in increased small business opportunities, and that participants spent as much as $5.5 million annually on small business subcontracting enhancements and initiatives. Our analysis found that Test Program participants’ performance in meeting their negotiated subcontracting goals varied. The Test Program does not have overall small business subcontracting goals similar to those that DOD negotiates annually with the Small Business Administration, but does have goals on a contractor-by-contractor basis. The goals, as well as the achievements, for small business subcontracting are expressed in terms of (1) the total dollars awarded to small businesses, and (2) the dollars awarded to small businesses as a percentage of total subcontract dollars. DCMA is responsible for annually negotiating and approving Test Program participants’ dollar and percentage goals to ensure that they are realistic and challenging. In all the negotiation support memorandums we reviewed, DCMA attempted to negotiate with the participants to establish challenging goals. According to DCMA documentation, this was accomplished by, for example, analyzing a participant’s 5-year subcontracting performance trends and comparing them to submitted goals, or assessing a participant’s documented support of its proposed goals to determine if a goal higher than that presented by the participant was realistic. As shown in figure 1, the number of Test Program participants that met or exceeded goals varied by fiscal year. According to some participants we interviewed, there are a number of reasons that individual participants may not meet their annual goals. One reason is that the goals are intended to be “stretch goals” beyond what the participants are confident can be achieved. Our analysis also shows that when participants failed to meet a goal, they generally missed the goal by a small percentage. For example, in fiscal year 2011, the eight participants who missed their goals all did so by less than 10 percent and in two cases by less than 1 percent. Some participants also stated that their inability to renegotiate goals to account for changes in the dollars available for subcontracting may also negatively affect their achievements. DCMA officials said that they generally prefer not to allow for renegotiation, as it is a time-consuming process.
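The two goal measures described above lend themselves to a simple computation, illustrated in the sketch below. The data structure and field names are hypothetical, and DCMA's actual evaluation also weighs initiatives and good-faith effort, so this is only a sketch of the dollar and percentage measures, not the rating process.

```python
# Illustrative sketch of the two goal measures described above: total dollars
# awarded to small businesses and small business dollars as a percentage of total
# subcontract dollars. The structure and names are hypothetical; DCMA's actual
# evaluation also considers initiatives and good-faith effort.

def goal_attainment(subcontract_awards, dollar_goal, percent_goal):
    """subcontract_awards: list of (dollars, is_small_business) tuples for a fiscal year."""
    total = sum(d for d, _ in subcontract_awards)
    small = sum(d for d, is_small in subcontract_awards if is_small)
    percent = 100 * small / total if total else 0.0
    return {
        "small_business_dollars": small,
        "percent_of_total": round(percent, 1),
        "met_dollar_goal": small >= dollar_goal,
        "met_percent_goal": percent >= percent_goal,
    }

# Example: $40 million of a $150 million subcontracting base went to small businesses.
awards = [(40_000_000, True), (110_000_000, False)]
print(goal_attainment(awards, dollar_goal=35_000_000, percent_goal=28.0))
# {'small_business_dollars': 40000000, 'percent_of_total': 26.7, 'met_dollar_goal': True, 'met_percent_goal': False}
```

The example also shows the pattern discussed in the next paragraph: a participant can exceed its dollar goal while narrowly missing its percentage goal when the total subcontracting base is large or growing.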
When examining the performance of Test Program participants as a group, we found that they exceeded their combined goals for subcontracting dollars awarded to small businesses in each year of our review period, as shown in figure 2, for a total of approximately $5.4 billion above the combined goals from fiscal years 2006 through 2013. For the goal of percentage of total subcontracting dollars awarded to small businesses, we found that, at an aggregate level, participants exceeded goals in 3 of the 8 fiscal years we reviewed and missed achieving those goals by less than 4 percent in every other fiscal year. One of the criticisms of the Test Program raised in our discussion with a small business advocate was that participants generally awarded a lower percentage of their subcontracting dollars to small businesses than nonparticipants. Our analysis, as shown in figure 3, confirms this. For Test Program participants, the percentage of subcontract dollars awarded to small businesses ranged from a low of around 25 percent in fiscal year 2011 to a high of around 29 percent in fiscal year 2008. However, for nonparticipants the percentage ranged from a low of around 33 percent in fiscal year 2008 to a high of nearly 43 percent in fiscal year 2006. We found that over the 8-year review period, Test Program participants also experienced less success in achieving their negotiated goals for the various socioeconomic categories of small businesses when compared to nonparticipants’ achievements. Test Program participants consistently awarded, on the basis of percentage of subcontract dollars awarded, lower percentages than nonparticipants. The only fiscal year in which Test Program participants awarded higher percentages than nonparticipants was fiscal year 2008, and then only for 2 of the 5 subcategories measured. According to some participants we interviewed, there are a number of factors that can adversely affect their ability to achieve their annual percentage and dollar subcontracting goals. Examples of these factors include funding changes that affect the value of a contract or other changes to contracts that affect the amounts available for subcontracting. These factors are not unique to Test Program participants and could apply to any contractor involved in federal contracting; however, there are some factors that may have a disproportionate effect on Test Program participants. These factors include the following: Test Program participants represent some of DOD’s largest prime contractors. As a result, many of the contracts they receive from DOD are large, technically complex, and include a scope of work that may require teaming arrangements with other large businesses, reducing the dollars available for subcontracting to small businesses. For example, the F-35 Joint Strike Fighter program has a prime contract with Lockheed Martin Corporation. For this highly complex and technical work, Lockheed Martin has subcontracted significant portions of the work to Northrop Grumman Corporation and BAE Systems, neither of which is considered a small business. Test Program participants can only count certain small business subcontracting activities toward their goals.
While small businesses may be involved at some level of subcontracting—either through a direct award from the prime contractor as a first-tier subcontractor or as a second-tier subcontractor through an award from a first-tier subcontractor—the ability of a Test Program participant to count small business subcontracting dollars towards its performance goals is limited. According to Test Program participants and DOD officials we interviewed, federal acquisition regulations require that goals for small business subcontracting be based on the prime contractor’s first-tier subcontracts. This government-wide practice does not permit prime contractors to report subcontracting below the first-tier for purposes of demonstrating the achievement of small business subcontracting goals. Therefore, participants can only take credit for awards made at the first-tier of subcontracting. For example, for the F-35 program, Lockheed Martin has approximately 500 first-tier suppliers and 1,250 second-tier suppliers. Lockheed Martin does not receive credit for small business subcontract achievements below the first-tier, which could represent a significant amount of small business subcontracting. If a small business subcontractor grows into a large business, its subcontracts are no longer counted toward participants’ goals. The determination of a subcontractor’s size as small for subcontracting purposes is set on the date that it self-certifies that it is small for the subcontract and is not typically revisited for the duration of the contract. Some participants we interviewed stated that, for their comprehensive subcontracting plans, the size determination is revisited when assessing progress against annual goals. For example, if a small business becomes successful and wins contracts that grow it into a larger business during the comprehensive subcontracting plan’s annual assessment period, a Test Program participant cannot count those subcontracting dollars toward the attainment of its small business goals. For instance, if a participant subcontracts work to a veteran-owned small business, which then wins contracts from other sources and grows beyond the small business size standards, those subcontract dollars are excluded from the calculation of goal attainment. Make-or-buy decisions also play a role: participants may decide to produce components in-house rather than buy them from a supplier in order to reduce the overall contract cost. Such a decision reduces the amount of subcontracting dollars available to small businesses. To gain another perspective on Test Program performance, we analyzed DCMA performance ratings for the individual participants. In our review of 85 annual performance reviews for fiscal years 2006 through 2013, Test Program participants generally received positive annual performance ratings from DCMA. DCMA is responsible for annually evaluating the performance of Test Program participants and making recommendations as to their continued participation in the program. When evaluating the participants, DCMA is to consider their performance on both goals and initiatives and, if they failed to achieve these, whether the participants made a “good faith effort” to do so. Participants receive an annual overall program rating ranging from “Unsatisfactory” to “Outstanding.” Participants that do not receive at least an “Acceptable” rating are required to submit to DCMA a detailed corrective action plan to account for and improve on known deficiencies.
For example, for fiscal year 2013, the 12 participating firms received the following ratings: Five participants received an Outstanding rating—meaning they generally exceeded the annual negotiated small business goals and two additional socioeconomic category goals and had exceptional success with numerous specific initiatives to assist, promote, and utilize small businesses. Four participants received a Highly Successful rating—meaning they generally met or exceeded negotiated goals, including three small business categories, and had moderate success with some initiatives to assist, promote, and utilize small businesses. Three participants received an Acceptable rating—meaning they generally demonstrated a good faith effort to meet all of their annual subcontracting goals and provided reasonable and supportable explanations why certain goals could not be met. In the period we reviewed, fiscal years 2006 through 2013, no participants received a rating of “Unsatisfactory,” and three participants received a rating of “Marginal.” Also, regardless of their overall ratings, DCMA did not find that any of the participants we reviewed had failed to make a “good faith effort” to meet their goals. According to DCMA annual performance documentation, the “Marginal” ratings were generally assigned because the participants were deficient in meeting key subcontracting plan elements or had failed to satisfy one or more requirements of a corrective action plan from a prior review. Two of these participants achieved ratings at the “Acceptable” level or above in subsequent reporting periods and one voluntarily exited the program. DOD officials said they have never terminated a participant from the program but that some participants have voluntarily exited it. DOD officials stated that ratings are taken into consideration by both DOD and participants when negotiating future participation in the program and may be taken into consideration as part of the determination of past performance by contracting officers in awarding future government contracts. One participant representative highlighted that the company uses its high ratings as a marketing tool to attract small businesses. Because of its large contracting operations, DOD is critical to the success of federal programs designed to provide opportunities for small businesses. The Test Program is aimed at enhancing these opportunities and reducing participants’ administrative costs. The evidence collected by DOD and our analysis of that evidence indicates that the program has achieved these goals. The use of comprehensive subcontracting plans allows both DOD and Test Program participants to avoid millions of dollars in administrative costs and has led to demonstrable enhancements in small business subcontracting opportunities, thereby meeting the criteria established by Congress. While there may be concerns about the visibility of small business subcontracting on particular contracts or programs, these concerns may diminish when legislative changes made to the program as part of the fiscal year 2015 National Defense Authorization Act are implemented. Given that the Test Program has been in existence for 25 years, and therefore has become a de facto permanent program for both DOD and participants, termination of the program in favor of individual subcontracting plans would likely require substantial increases in manpower and fiscal resources.
However, DOD has not acted on a 2007 review recommendation to work with Congress to make the Test Program permanent. Continually extending the program rather than making it permanent creates uncertainty among participants and DOD, inhibiting the expansion of the program by some participants, the inclusion of new participants, and the formulation of DOD policies and additional guidance that could enhance the program’s results. Working with Congress to address the program’s status, for example by providing information on the effectiveness of the Test Program as identified in the three DOD-commissioned reviews and our analysis, could help eliminate the uncertainty associated with the program. To help ensure continued reductions in administrative costs to DOD and program participants and enhance subcontracting opportunities for small businesses, Congress should consider making the Test Program permanent. We recommend that the Secretary of Defense work with Congress to determine the status of the Test Program. In doing so, the Secretary could provide Congress with information on the effectiveness of the Test Program as discussed in the three DOD-commissioned reviews. In written comments, DOD did not concur with the draft report’s recommendation to draft a legislative proposal to make the program permanent and questioned the finding related to the Test Program enhancing small business subcontracting opportunities. The department agreed to work with Congress to determine the status of the Test Program. Given DOD’s disagreement, we added a matter for Congressional consideration to this report and modified the recommendation as discussed below. DOD’s comments are provided in full in appendix II. Related to the finding on enhancing small business subcontracting opportunities, with its comments on our draft report, DOD provided a chart that shows a decrease in the percentage of subcontract dollars awarded to small businesses by Test Program participants from fiscal years 1996 through 2014. The department stated in its response that “[t]his trend shows that the current practices associated with the [Test Program] do not enhance opportunities for small businesses.” We do not agree with that assessment for a number of reasons. While the percentage of subcontracting dollars awarded to small businesses by program participants has declined since fiscal year 1996, more recent data through fiscal year 2013 show that the performance of program participants has remained relatively stable, as shown in figure 3. Although the percentages for program participants are lower than for nonparticipants, the trends are generally the same for both. The differences between the groups may be attributable to some of the factors we discussed in our report—such as the large, technically complex nature of some participants’ contracts and their inability to count certain small business subcontracting activities towards their goals—which do not suggest shortcomings in the program. Enhancing opportunities for small businesses is measured by more than just the percentage of subcontract dollars awarded. For example, the value of the small business subcontracts awarded by program participants grew during this period from about $895 million to more than $7.7 billion. Small businesses therefore have received a relatively smaller portion of a significantly larger amount. As discussed in the report, the Test Program also provided enhanced opportunities to small businesses through a variety of initiatives completed by the participants.
These initiatives, among other things, resulted in the redirection of millions of dollars of subcontracts from large businesses to small businesses, and increased small business participation in targeted industries, including innovative research. Finally, regardless of the overall percentages, we found that program participants met or nearly met all of the “stretch” percentage goals negotiated with the department in the fiscal years we reviewed. In total, the Test Program participants exceeded the dollar goals by approximately $5.4 billion. Thus, we continue to believe the program achieved its goals of enhancing opportunities for small businesses, and as discussed in the report, reducing participants’ and DOD’s administrative costs. Given these findings, and that DOD had not acted on the 2007 review recommendation to work with Congress to make the program permanent, we included a recommendation in the draft report for DOD to draft a legislative proposal to make the program permanent or otherwise work with Congress to determine the status of the Test Program. The department agreed to work with Congress to determine the status of the Test Program; however, DOD did not concur with the recommendation to draft a legislative proposal. Consequently we modified the recommendation and added a matter for Congressional consideration to help ensure continued reductions in administrative costs to DOD and program participants and enhance subcontracting opportunities for small businesses. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by email at woodsw@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III. Section 821(e) of the National Defense Authorization Act for Fiscal Year 2015 included a provision for us to report on the results of the Department of Defense’s Test Program for Negotiation of Comprehensive Small Business Subcontracting Plans (Test Program). This report addresses the extent to which the Test Program (1) reduces administrative costs, and (2) enhances subcontracting opportunities for small businesses. To address if the Test Program reduces administrative costs for program participants, we collected and analyzed three reviews conducted for the Department of Defense (DOD) on the Test Program in 2002, 2007, and 2013 that included estimates of the costs associated with the program. We assessed the methodologies and assumptions utilized—including the number of contracts and cost of labor to compile program documentation—and used data collected for DOD’s 2013 study to validate the findings. To help assess the validity of the data, we discussed with DOD officials how the data was collected. We determined that the methodologies were valid and the data were reliable for our purposes. In addition, we interviewed DOD officials and 5 of the 12 Test Program participants to obtain their views on any cost savings generated by having the Test Program, as well as any benefits they experienced from a reduced administrative burden. 
Test Program participants we interviewed represented different levels of participation in the program, including two corporate-level participants and three division-level participants, as well as a range of subcontracting activity. The views expressed by these participants provided insight into the operation of the program but are not generalizable to all program participants. In order to determine if the conversion costs articulated by the 2013 review were reasonable, we assessed the methodology used, obtained the data used for the estimate, discussed with DOD how the data were collected, and performed our own analysis using more recent data, which found the conclusions of the 2013 review to be reasonable. To perform this analysis, we obtained March 2015 contract data from the Defense Contract Management Agency (DCMA) for the Test Program participants. From the contract listings we received, we removed those contracts for which we could not determine contract costs, as well as those that were below the reporting thresholds that would require an individual subcontracting plan, to determine the total number of contracts that would have required individual subcontracting plans in the absence of the Test Program. DOD officials also provided a range of estimates for the number of hours required to develop an individual subcontracting plan, as well as the cost per hour, which we used to determine the cost for conversion. To validate the March 2015 contract data received from DCMA, we also obtained data from one corporate-level participant we interviewed that had a high level of subcontracting activity. This allowed us to estimate its cost for converting contracts under its comprehensive subcontracting plan to individual subcontracting plans. We determined that the data were reliable for our purposes. To address whether Test Program initiatives enhanced small business subcontracting opportunities, we selected the period of fiscal years 2006 through 2013 for analysis. This selection was made because prior DOD reviews released in 2002 and 2007 assessed participants’ performance against goals and initiatives through fiscal year 2005, and fiscal year 2013 was the last full year of data available. The 2002 study used data from fiscal years 1991 to 2000, and the 2007 study used data from fiscal years 2001 to 2005. We reviewed the available plans, memorandums, and reviews used by DCMA to assess participant performance from fiscal years 2006 through 2013. This included 80 annual comprehensive subcontracting plans submitted by participants to DCMA, 60 DCMA memorandums documenting negotiations with the participants, and 85 annual Form 640 performance reviews by DCMA. We also analyzed the three DOD-commissioned reviews. To provide context for the negotiation process, as well as to gather different viewpoints about the initiatives as a whole, we interviewed representatives from DCMA’s Small Business Programs Division, DOD’s Office of Small Business Programs (OSBP), and five Test Program participants. To address whether the Test Program enhances subcontracting opportunities for small businesses by successfully achieving its annual goals, we reviewed Test Program participants’ comprehensive subcontracting plans and negotiation support memorandums and analyzed performance data from fiscal years 2006 through 2013. We compared the actual performance contained in the Form 640 reviews against the approved subcontracting goals.
We also obtained information about the small business subcontracting performance of DOD in general from DOD’s OSBP. Combining this information with that obtained for program participants, we identified trends in small business subcontracting for Test Program participants in comparison to DOD small business subcontracting performance in general. We also reviewed legislation, agency guidance, Federal Register notices, and relevant GAO and DOD reports. We also interviewed officials from DOD’s OSBP, DCMA’s Small Business Programs Division, 5 of the 12 prime contractors participating in the Test Program, and two small business advocacy groups, chosen for their representation of the small business community, to obtain their views on the Test Program and the factors that contribute to or undermine its success. The views expressed by these groups provided insight into the operation of the program but are not generalizable to all small businesses. We conducted this performance audit from March to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, J. Kristopher Keener (Assistant Director), Kathryn (Emily) Bond, Joe Hunter (Analyst-in-Charge), Cale Jones, Julia Kennon, Stephen V. Marchesani, Sylvia Schatz, and Roxanna Sun made key contributions to this report.
Since 1990, DOD has been conducting a congressionally directed test program related to how contractors report their subcontracting activities. The purpose of the program is to test whether using comprehensive subcontracting plans that cover multiple contracts across contractor plants, divisions, or entire companies can yield administrative cost savings and enhance small business subcontracting opportunities. Despite the 25-year existence of the program, little is publicly known about its effectiveness. The National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to report on the results of the program. This report addresses the extent to which the program (1) reduces administrative costs, and (2) enhances subcontracting opportunities for small businesses. GAO analyzed prior DOD reviews and data on estimates of administrative costs savings; reviewed program participants' performance for enhancing small business subcontracting opportunities for fiscal years 2006 through 2013; and interviewed officials from DOD, program participants, and small business advocacy groups. Reviews commissioned by the Department of Defense (DOD) concluded that the Test Program for Negotiation of Comprehensive Small Business Subcontracting Plans (Test Program) has resulted in the avoidance of millions of dollars in administrative costs for both program participants and DOD. According to the review conducted in 2013, the 12 firms then participating in the program avoided about $18.5 million in costs through the use of single comprehensive subcontracting plans rather than multiple individual subcontracting plans. Also, a 2007 review estimated that DOD avoided administrative costs of at least $45 million in fiscal year 2005. GAO reviewed the methodologies used for these reviews and took other steps to validate their findings. According to DOD officials, if the Test Program were terminated or allowed to expire, a significant one-time administrative cost of about $22 million could result to participants. GAO's analysis confirms this conclusion. Test Program participants and DOD officials GAO interviewed stated that the program also has resulted in non-financial benefits, including greater company-wide awareness of small business subcontracting opportunities. The participants GAO interviewed said that without the program their companies might be less inclined to award subcontracts to small businesses. They emphasized, however, that the program's continuing test status creates uncertainty and inhibits further expansion. The 2007 review recommended that DOD work with Congress to make the Test Program permanent; however, DOD has not acted on this recommendation. Doing so could help eliminate uncertainty with the program. GAO found that the Test Program enhanced small business subcontracting opportunities, although participants' performance in meeting individual goals has varied. Participants are evaluated on their achievement of negotiated initiatives and goals in their comprehensive subcontracting plans. GAO's analysis of performance reports found that participants made acceptable progress on their initiatives 87 percent of the time, thus providing tangible subcontracting opportunities for small businesses. For example, during fiscal years 2006 through 2013, program participants redirected nearly $93 million in subcontracts from large businesses to small businesses. 
Participants also achieved a 72-percent success rate in increasing small business subcontracts in areas such as integrated circuits and information technology, thus addressing a concern among some small businesses that high-end technical work was not being subcontracted under the program. The 2013 DOD review estimated that participant initiatives could amount to as much as $1.8 billion per year in increased small business opportunities. GAO's analysis found that participants did not always meet individual goals, in part due to the challenging nature of these goals. However, GAO also found that their combined performance from fiscal years 2006 through 2013 resulted in subcontract awards to small businesses that exceeded aggregate goals by about $5.4 billion. The annual performance reviews of the participants, which take into account performance on both initiatives and goals, have been largely positive. Congress should consider making the program permanent. GAO also recommends that DOD work with Congress on the program's status. DOD agreed. DOD disagreed with a recommendation to draft a legislative proposal to make the program permanent. GAO subsequently modified the recommendation and added the matter for Congress.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Food Stamp Program helps low-income households (individuals and families) obtain a more nutritious diet by supplementing their income with food stamp benefits. In fiscal year 1996, the average monthly food stamp benefit was $73 per person. These benefits are generally provided through coupons or electronically on a debit card (similar to a bank card) that may be used to purchase food at stores authorized to receive food stamps. The Food Stamp Program is a federal-state partnership, in which the federal government pays the full cost of the food stamp benefits and approximately half of the states’ administrative costs. USDA’s Food and Nutrition Service (FNS)—formerly the Food and Consumer Service—administers the program at the federal level. The states’ responsibilities include certifying eligible households, calculating the amount of benefits, and issuing benefits to participants who meet the requirements set by law. The Welfare Reform Act overhauled the nation’s welfare system and significantly changed the Food Stamp Program. In addition, the Fiscal Year 1997 Supplemental Appropriations Act (P.L. 105-18, June 12, 1997) included new authority allowing the states to purchase federal food stamps to provide state-funded food assistance for legal immigrants and able-bodied adults without dependents who are no longer eligible for federal food stamps under the Welfare Reform Act. Under the supplemental act, states are required to receive approval from FNS to distribute additional food stamps and to fully reimburse the federal government in advance for all costs associated with providing the benefits. In addition, the states’ food stamp programs must be cost-neutral to the federal government. Changes to the Food Stamp Program included imposing time limits on those able-bodied individuals between the ages of 18 and 50 without dependents who were not working at least 80 hours a month or participating in certain kinds of employment and training programs. This work requirement was effective not later than November 22, 1996. States were required to terminate food stamps for these nonworking able-bodied adults without dependents after 3 months within any 36-month period. Disabled individuals, if they meet eligibility requirements, can still receive assistance. The act allows FNS to grant waivers to states for exempting able-bodied adults without dependents from the work requirement if they live in an area where unemployment is over 10 percent or in an area with an insufficient number of jobs. FNS generally grants waivers for a 1-year period. Once approved, these waivers may be renewed if the areas covered continue to have high unemployment or insufficient jobs. Once the waivers are approved, the states or localities can choose to implement them in whole or in part, or not to implement them at all. In addition, the Balanced Budget Act (P.L. 105-33, Aug. 5, 1997) gives states the discretion to exempt certain types of able-bodied adults without dependents from the work requirement—up to 15 percent of those not otherwise waived. The Balanced Budget Act also provided an additional $131 million for each of the next 4 years to the Food Stamp Program—80 percent of which is designated for employment and training opportunities for these adults.
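The work requirement described above is essentially a set of conditions, sketched below as a simplified rule. The parameter names are illustrative, and the actual statute and regulations contain additional exemptions and waiver details not captured here; this is a sketch under the figures stated in the text, not a definitive eligibility implementation.

```python
# Simplified sketch of the 3-month time-limit rule described above for able-bodied
# adults without dependents. Parameter names are illustrative; the real rules
# include additional exemptions and waiver details not captured here.

def subject_to_three_month_limit(age, has_dependents, is_disabled, work_hours_per_month,
                                 in_qualifying_work_program, area_waived, state_exempted):
    """Return True if the individual could lose food stamps after 3 months in a 36-month period."""
    if has_dependents or is_disabled:
        return False
    if not (18 <= age <= 50):
        return False
    if work_hours_per_month >= 80 or in_qualifying_work_program:
        return False  # meets the work requirement
    if area_waived or state_exempted:
        return False  # area waiver (e.g., unemployment over 10 percent) or 15-percent exemption
    return True

# Example: a 30-year-old working 40 hours a month in a non-waived area
print(subject_to_three_month_limit(30, False, False, 40, False, False, False))  # True
```

In practice, the waiver and exemption inputs are decided at the state or local level, which is why, as discussed below, implementation varied across the states.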
According to data from FNS for fiscal year 1995—the latest year for which data were available—in an average month, the Food Stamp Program provided benefits for 27 million people. Of these, 2.5 million were able-bodied adults between the ages of 18 and 50 without dependents. An estimated one-half of these adults, about 1.3 million, are subject to the 3-month time limit. In addition, the Welfare Reform Act and the Supplemental Appropriations Act allowed immigrants with legal status as of August 22, 1996, to retain food stamps up to August 22, 1997. However, if legal immigrants have 40 quarters or more of work history in the United States or are veterans or active duty members of the U.S. military, they may continue to retain food stamps. Spouses and minor children of veterans are also eligible. At the time of our survey of states in the summer of 1997, the states were pursuing a variety of options to address changes in the Food Stamp Program that affected able-bodied adults without dependents and legal immigrants. Some state actions, such as job training assistance, although primarily intended to move individuals toward self-sufficiency, may have the effect of allowing some able-bodied adults without dependents to retain food stamps by meeting the act’s work requirements. Twenty states provided legal immigrants with information on how to become citizens so that they can be eligible for food stamps. Other state actions are intended to replace the food stamp benefits that individuals have lost. According to our survey results, when the states notified able-bodied adults without dependents that they were subject to the work requirements in order to retain food stamps, many states told us that they chose to also notify these adults of job placement and/or training services that were available. Although these programs are intended primarily to move individuals toward self-sufficiency, participants may still receive food stamps if income and other requirements are met. For example, our survey indicated that Texas provided information on jobs and/or employment resources and training. Thirty-two states provided information about jobs and/or employment resources; 29 provided information on training; 19 provided information on workfare. In addition, 20 states helped assess an individual’s employment skills. The states also offered one or more ways to meet the work requirements: 25 states counted volunteer work, 25 counted workfare, and 33 counted employment training that leads to a job. In addition, as allowed under the Welfare Reform Act, our survey indicated that 43 states had applied for, and 42 received, authority to waive the work requirement for able-bodied adults without dependents in areas where unemployment exceeded 10 percent or in areas with insufficient jobs. (See app. III for the waiver status of each state.) FNS estimated that as many as 35 percent of the affected able-bodied adults without dependents would retain their eligibility through a waiver. However, 8 of the 43 states were not planning to implement their waivers—either in their entirety or in part. In seven states—California, Indiana, Nevada, New York, Ohio, Oklahoma, and Virginia—waivers were approved for selected regions but the local governments, which are authorized to implement the waivers, did not plan to do so. Texas planned to implement its waiver only in localities with an unemployment rate of over 10 percent. 
Two of the eight states or their localities had not fully implemented the waivers because they believed that it was unfair to exempt able-bodied adults without dependents from the work requirement while single mothers receiving federal assistance, like Temporary Assistance for Needy Families (TANF), are required to participate in work activities. At the time of our survey, 20 states provided or planned to provide legal immigrants—who were scheduled to lose their food stamps—with information on how to become U.S. citizens. In May 1997, we reported that it took between 112 and 678 days (with an average of 373 days) to process applications for citizenship at INS between June of 1994 and June of 1996. For example, it took just over 1 year to process a request for citizenship in Los Angeles—a city with one of the largest immigrant populations in the nation—and almost 2 years to process an application in Houston. INS officials told us that among the reasons for the significant increase in the number of applications that INS has received since fiscal year 1989 is that there are incentives to becoming a citizen because of the benefits that can be derived. Because it takes an average of over 1 year to process applications for citizenship and legal immigrants were not eligible to receive food stamps after August 22, 1997, many legal immigrants have lost their federal food stamp benefits. FNS’ most current estimate is that 935,000 legal immigrants lost their federal food stamps under the welfare reform provisions. However, as of December 1997, estimates are that over one-quarter, or about 241,000, of these individuals are receiving food stamps funded by the states. For those individuals who lose federal food stamp benefits, 20 states were taking one or more actions to provide state-funded food assistance. (App. IV identifies the 20 states with food assistance programs that serve able-bodied adults without dependents and/or legal immigrants.) Ten states decided to purchase federal food stamps with their own funds for certain legal immigrants—primarily children and the elderly. According to FNS and the states, 9 of the 10 states have estimated that about 241,000 legal immigrants are now receiving state-funded food stamps. Among these states are California, Florida, New York, and Texas, which, according to an FNS report, had about 70 percent of the legal immigrants receiving food stamps in fiscal year 1995—the latest year for which data were available. These states will generally use the Food Stamp Program’s infrastructure and benefit structure to deliver food assistance, according to FNS. For example, Washington State appropriated just over $60 million for fiscal years 1998-99 to fully restore benefits to an estimated 38,000 legal immigrants—all of whom were slated to become ineligible for federal food stamps. Households eligible for participation receive the same benefits that they did under the federal program. However, FNS also told us that, unlike Washington State’s, most states’ eligibility standards are likely to apply to only certain categories of legal immigrants. California, for example, recently appropriated $34.6 million to provide food stamps for legal immigrants who are children or elderly. Thirteen of the 20 states reported that they were using their own state-funded food assistance programs, and 2 of the 13 states created programs in response to welfare reform. 
These two states—Colorado and Minnesota—developed state-funded food assistance programs to aid those legal immigrants losing their federal food stamp eligibility as a result of welfare reform. Colorado, for example, has appropriated $2 million to provide emergency assistance, including food, for legal immigrants. Minnesota has allocated just over $4.7 million for two programs to provide food assistance for legal immigrant families in that state. The remaining 11 states had food assistance programs—created before the Welfare Reform Act was passed—that ranged from those that provide individuals with cash directly to those that provide funds for local food banks and food pantries that serve, among others, both able-bodied adults without dependents and legal immigrants. A program with significant funding is Pennsylvania’s State Food Purchase Program, which provided about $13 million in fiscal year 1997 and $13.6 million in fiscal 1998 to counties for the purchase of food. This program is intended to supplement the efforts of food pantries, shelters for the homeless, soup kitchens, food banks, and similar organizations to reduce hunger. Two states with state-funded programs are also providing existing state or local programs with additional funding to assist able-bodied adults without dependents and legal immigrants. Rhode Island appropriated $250,000 in fiscal year 1998 for a community-run food bank. Massachusetts increased the funding it provides for local food banks and food pantries from just under $1 million to $3 million in fiscal year 1998 in anticipation of an increased need by both groups. Seven of the 20 states reported that they had allocated additional money to federally funded programs that assist groups of individuals, which may include those losing food stamp benefits. Programs identified by the states in our survey include The Emergency Food Assistance Program (TEFAP) and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). In the five localities we visited, government officials reported that their assistance largely consists of implementing state programs. Most nonprofit organizations that we contacted said that although it is too soon to assess the impact of welfare reform, they anticipate an increased need for their services. Given their limited resources, however, these organizations are concerned that the supplemental assistance they provide will not compensate for the basic food assistance provided by the federal program. (See apps. V-IX for information about food assistance programs in these localities.) In the five localities we visited—Denver, Detroit, Hartford, Houston, and Los Angeles—employment and training programs were offered to able-bodied adults without dependents through existing or expanded programs. Although these programs are intended to promote self-sufficiency, they may also help participants to retain food stamps if they meet income and work requirements. For example, in Hartford, Connecticut, able-bodied adults without dependents can participate in the statewide Connecticut Works System. This program’s objective is to enhance the state’s economy by helping to match the needs of businesses with workers’ skills. The Connecticut Works System brings together state, regional, and local organizations to provide job listings, job search assistance, access to training and education programs, resume assistance, interviewing, and networking assistance. 
In Detroit, Michigan, able-bodied adults without dependents can participate in a new state employment and training assistance program that specifically targets this population; about one-half of these adults live in the greater Detroit area. For fiscal year 1998, the state will receive $13.4 million from FNS to expand work programs for this population. For legal immigrants, three out of the five localities—Denver, Houston, and Los Angeles—had plans to offer limited food assistance through state-funded programs. California passed legislation to provide food stamps for legal immigrant children and the elderly by purchasing federal food stamps. Similarly, Colorado passed legislation that provides legal immigrant families with special emergency assistance. As a result, families in Denver can receive, among other things, food coupons redeemable at designated food pantries. Finally, Texas is planning to offer food assistance to elderly and disabled legal immigrants. In addition, Los Angeles County launched two special efforts on behalf of legal immigrants after the passage of welfare reform. First, Los Angeles initiated a countywide citizenship campaign that brought together 200 public and nonprofit organizations whose goal was to assist legal immigrants in obtaining citizenship. Los Angeles County coordinated the efforts of these organizations, worked with the local district office of the INS, and directly contacted 400,000 potentially affected legal immigrants. However, Los Angeles officials told us that (1) because of the time it takes to process applications for citizenship—including the fact that criminal background checks are required on all applicants—and (2) because many of the applications remain unprocessed owing to the volume of applications received by the INS, they estimate that about 91,000 legal immigrants lost their food stamps. Los Angeles County officials said they continue to encourage legal immigrants to become U.S. citizens and, for those who do become citizens, hope to restore benefits to those who meet the Food Stamp Program’s requirements. Second, Los Angeles County and the local United Way jointly sponsored the efforts of a local referral service to provide information on food assistance. Information on how to contact this referral service was enclosed with termination notices to legal immigrants. In every locality that we visited, nonprofit organizations—including food banks, food pantries, soup kitchens, and religious organizations—generally serve anyone who needs their services, including able-bodied adults without dependents and legal immigrants. Historically, these organizations provide supplemental food assistance on an emergency basis, perhaps once or twice a month. According to these nonprofit organizations, food stamp recipients—even before welfare reform—had turned to them for assistance. These organizations generally expect an increase in the need for their services—both in terms of the numbers of people and frequency of visits—as a result of welfare reform. For example, in Denver, one organization was getting 40 to 50 more applicants for food assistance per week in August 1997. Furthermore, the organizations were generally concerned that they could not replace the long-term, sustained assistance that food stamps provided. At the time of our visits in the late summer of 1997, however, most organizations had not experienced this anticipated increase. 
Our visits occurred before benefits were cut off for legal immigrants and before the usual increase in the need for food assistance in the winter months. The organizations are unsure how they will meet the expected increase because they have limited resources. Furthermore, these organizations are competing for these limited resources, and officials told us that they do not anticipate larger contributions as a result of welfare reform. While most organizations were waiting to see the full impact of welfare reform, some were developing contingency plans to handle the expected increase. For example, in Detroit, a kosher food pantry surveyed its existing clientele to determine which individuals would lose their benefits. The pantry learned that it would need about $100,000 the first year to serve its existing population. According to officials from the food pantry, this effort is not likely to be duplicated by other organizations because, unlike most other organizations, the kosher food pantry serves a known group of legal immigrants. More typically, most organizations are unsure how they will sustain a long-term increase in the number of people needing their services because they typically provide assistance on an emergency basis for anyone in need, and their resources are already limited. These organizations are considering strategies that would restrict eligibility, such as limiting eligibility to serve children or the elderly, in order to accommodate the anticipated increase and/or reduce their existing levels of service in order to accommodate the needs of more individuals. It is too soon to assess how able-bodied adults without dependents and legal immigrants will fare in the long term under welfare reform. However, many states have taken actions that could result in continuing food assistance, under certain conditions, for some of these individuals. For able-bodied adults, some of these actions—employment assistance and training—may help move these individuals towards self-sufficiency. For legal immigrants, citizenship could restore federal food stamps to those who meet income and work eligibility requirements. However, because of the amount of time it takes to process citizenship applications, many individuals have likely lost their food stamps. We provided USDA with a copy of a draft of this report for review and comment. We met with FNS officials, including the Acting Deputy Administrator for the Food Stamp Program. USDA concurred with the accuracy of the report but stated that while some states are providing or will provide food assistance for legal immigrants with state funds, in many cases, the assistance will not replace federal benefits because it generally targets only certain portions of the legal immigrant population, such as the elderly or children. USDA officials indicated that about one-quarter of the 935,000 legal immigrants that they estimated would lose food stamp benefits are now being covered under state funded programs. The USDA officials also pointed out that while many states are offering employment and training services for able-bodied adults without dependents, often, the services offered are job search activities, which do not satisfy the work requirements under the Welfare Reform Act and, thus, do not qualify these individuals for food stamps. We expanded our discussion of these points where appropriate and made some additional minor clarifications to the report on the basis of USDA’s comments. 
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Senate Committee on Agriculture, Nutrition, and Forestry; the House Committee on Agriculture; other interested congressional committees, and the Secretary of Agriculture. We will also make copies available upon request. If you have any questions about this report, I can be reached at (202) 512-5138. Major contributors to this report are listed in appendix X. In October 1996, the Ranking Minority Member, Subcommittee on Children and Families, Senate Committee on Labor and Human Resources, asked us to study several issues concerning the impact of welfare reform on the Food Stamp Program. This report focuses on the two groups of individuals most likely to lose their food stamp benefits—able-bodied adults without dependents and legal immigrants. Specifically, we describe the (1) actions, if any, that states have taken to assist those individuals who lose eligibility for the Food Stamp Program and (2) related actions, if any, taken by other organizations in selected localities—local governments and nonprofit organizations—to assist those individuals who lose their eligibility for the Food Stamp Program. To address the first objective, we surveyed and received responses from the 50 states and the District of Columbia. We also updated our results as appropriate. The tabulated results of the survey are included as appendix I. To address the second objective, we visited five localities. These localities were selected using the following criteria regarding the states in which they are located: (1) whether the states offered general relief to able-bodied adults without dependents and (2) whether the states had filed waivers precluding able-bodied adults without dependents from meeting the work requirement because of high unemployment or an insufficient number of jobs. We then selected states within these categories by (1) those with the highest food stamp participation of able-bodied adults without dependents and legal immigrants and (2) geographic diversity. Within the states, we chose the locality, usually a county, with the highest participation in the Food Stamp Program. We visited these localities in the late summer of 1997. We contacted several organizations that were significantly involved in providing the localities with food assistance. We also met with government officials responsible for food stamps and other officials involved in welfare reform. In several localities, we also met with officials affiliated with the Federal Emergency Management Agency (FEMA) because of their expertise in providing emergency food assistance after natural disasters. We also visited nonprofit organizations, such as community action agencies; food banks; church-affiliated food assistance providers, such as soup kitchens; local advocacy groups; local United Way affiliates; and food pantries. (See apps. V-IX for individual reports on the food assistance provided in these localities.) In addition, we contacted several national organizations that provide local communities with food assistance, including Catholic Charities USA; Lutheran Social Services; Second Harvest; World Share, Inc.; and the United Way of America. We also attended a conference sponsored by Second Harvest on the implications of welfare reform on food assistance. 
Additionally, we attended the American Public Welfare Association’s National Conference for Food Stamp Directors to obtain information on current state and local food assistance programs. Finally, we met with officials from the U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS) to obtain program information and statistics. We performed our work in accordance with generally accepted government auditing standards from March through December 1997. Los Angeles County had a population of about 9.1 million in 1995. In 1996, its unemployment rate was 8.3 percent, and its poverty rate for 1995-96 averaged 18.7 percent. In comparison, the state’s unemployment rate was 7 percent, and the poverty rate for 1995-96 averaged 16.8 percent. Nationwide, unemployment was 5.4 percent and the poverty rate for 1995-96 averaged 13.8 percent. In January 1997, as states were beginning to implement the Welfare Reform Act, over 1 million individuals participated in the Food Stamp Program in Los Angeles County. Of this total, over 189,000 were legal immigrants, and an estimated 56,400 were able-bodied adults without dependents. As of September 1997, after many changes to the Food Stamp Program were implemented, the county had about 870,000 food stamp participants, of which about 31,000 were able-bodied adults without dependents and about 24,000 were legal immigrants. In addition, 29,000 legal immigrant children and elderly were receiving state-funded food stamps. The Los Angeles County Department of Public Social Services (DPSS) administers the Food Stamp Program, with guidance from the California Health and Welfare Agency. DPSS officials told us that they assist both able-bodied adults without dependents and legal immigrants in retaining food stamp benefits to the extent possible. For able-bodied adults without dependents, local officials were providing employment and training experiences through workfare. The county expanded its workfare program, which included 80 percent of these adults prior to welfare reform, to include 100 percent of them. Officials were concerned that if these adults were not offered workfare to meet the work requirement, they would lose their food stamp benefits. For legal immigrants losing food stamp benefits, DPSS had an extensive notification process to advise them of their impending change in status for the Food Stamp Program as a result of welfare reform. DPSS sent out notification flyers entitled “You May Lose Your Food Stamp Benefits” to legal immigrants on five occasions and in several languages. The flyers described the process for obtaining citizenship. DPSS is providing assistance through a countywide effort in partnership with 200 public and nonprofit organizations. Activities have included providing assistance with applications for U.S. citizenship, including completing forms, and offering classes in English as a second language and in American government. However, because of the time the Immigration and Naturalization Service (INS) needs to process citizenship applications, including conducting criminal background checks on all applicants for U.S. citizenship, Los Angeles officials indicated that about 91,000 legal immigrants lost their federal food stamp benefits. These officials indicated, however, that the citizenship campaign continues and that they hope to be able to restore food stamps to those who qualify once they become U.S. citizens. At the time of our visit, DPSS was also considering what state and federal assistance could be provided. 
DPSS officials were awaiting the outcome of pending state legislation that would assist legal immigrants who were losing food stamps. In August 1997, the state legislature restored benefits by purchasing federal food stamps for legal immigrants who are elderly or children. County officials believed it was important to restore food stamps and other benefits to legal immigrants—particularly because they represent 15-20 percent of the population in Los Angeles County. Nonprofit organizations in Los Angeles County, some of which are affiliated with national groups, provide direct and indirect food assistance through a well-established network. These organizations are also connected with federal, state, and local government agencies to provide services. Officials in different organizations told us that this locality’s food assistance providers are effective in their efforts because of their experience in providing assistance following natural disasters, such as earthquakes, brush fires, and landslides, and because of their experience responding to rioting. These organizations generally expected to see an increase in the number of people needing their services as a result of welfare reform. Officials expressed concern that they would not be able to provide more services if their current level of resources remained the same. Additionally, several officials told us that resources for food and funding were diminishing. Accordingly, the organizations had developed the following approaches for handling the anticipated increase in needed services: (1) seeking additional donations of funds and food, (2) considering decreasing the amount of services that each recipient receives, and (3) targeting certain populations, such as the elderly, for services. Table V.1 describes the nonprofit organizations that we contacted:
- Religious-based social service nonprofit (local affiliate of Catholic Charities): distributes food through its community centers to about 90,000 individuals twice a week.
- Food bank (local affiliate of Second Harvest): distributes food to 750 charitable organizations at 14¢ per pound; these organizations distribute food to an estimated 200,000 individuals per week; also distributes federal agricultural commodities.
- Religious organization (a local affiliate of Lutheran Social Services): distributes bags of groceries and hot meals to more than 2,500 families.
- Interfaith/Religious social service nonprofit (represents over 300 denominations and temples): distributes food through programs such as food pantries, meals-on-wheels, “homebound meals,” and nutrition sites.
- Distributes funding to and purchases food for food pantries, soup kitchens, food banks, and homeless shelters.
- Community social service agency (local affiliate of the United Way): donates funding for food assistance to 15 food service providers; estimates serving 408,000 clients with food and meals service.
- Provides the needy with information about food pantries and soup kitchens throughout the Los Angeles area; handles about 150 food assistance inquiries per day.
- Provides advocacy assistance for the poor in representing their views to local political officials on a number of issues, including food assistance.
Most organizations did not have specific eligibility requirements for recipients of their food assistance services and did not keep demographic information on those they served. Generally, they serve anyone in need, including able-bodied adults without dependents and legal immigrants. 
Officials told us that their organizations serve the working poor, including single mothers with children and grandparents raising young children. The resources available to these organizations included federal, state, and local government grants, philanthropic grants, private donations, and in-kind donations, such as voluntary services and housing. For example, the city of Los Angeles provides some of these organizations with funding from its federal Community Development Block Grant. The population of Denver County in 1995 was approximately 500,000. In May 1996, the unemployment rate for the Denver metropolitan area was 3.9 percent. At the state level, the unemployment rate was 4.3 percent in May 1996, and the poverty rate for 1995-96 averaged 9.7 percent.Nationally, in May 1996, the unemployment rate was 5.4 percent, and the poverty rate for 1995-96 averaged 13.8 percent. In January 1997, as states were beginning to implement the Welfare Reform Act, about 56,000 individuals participated in the Food Stamp Program. By September 1997, after many changes to the Food Stamp Program were implemented, participation had declined to approximately 47,000. Between January and September, the number of able-bodied adults without dependents with food stamps decreased from about 1,600 to about 300. According to an official with the Denver Department of Social Services (DDSS), most of these adults lost food stamp benefits because they did not attend a required orientation session informing them of their work requirements under welfare reform. The information on this session was publicized through fliers at food pantries and soup kitchens as well as in the food stamp office. Although the number of legal immigrants on food stamps is unknown, a 1996 study by the Colorado Department of Human Services estimated that, statewide, approximately 5,700 immigrants would lose their benefits as a result of welfare reform. DDSS administers the Food Stamp Program in Denver County with supervision from the Colorado Department of Human Services. To assist able-bodied adults without dependents in meeting the Food Stamp Program’s work requirements, DDSS provides employment and training assistance through Denver Employment First. This program helps these adults prepare for jobs by teaching them resume writing, interviewing techniques, and appropriate dress. The program also offers General Educational Developmental (GED) self-study courses to move adults without a high school education toward earning a high school equivalency diploma. The program also operates the county workfare program for able-bodied adults without dependents and maintains a list of approved nonprofit agencies at which participants can meet their work requirements. In addition, DDSS is administering an emergency assistance program for legal immigrants in Denver County who lost federal food stamps. Colorado appropriated $2 million for emergency assistance to legal immigrants from July 1997 to June 1998. Under this program, legal immigrants can receive assistance, including food vouchers, that can be redeemed at designated food pantries. In order to receive this special emergency food assistance, the legal immigrants’ participation must be approved by DDSS. With DDSS’ approval, legal immigrants can continue to receive this emergency assistance on a monthly basis as long as they continue to be in an emergency situation. Nonprofit organizations in Denver County provide direct and indirect food assistance. 
Most of the organizations we visited were affiliated with national groups; others were state or local. In the last several years, these nonprofit organizations and some government agencies have established a network to discuss food assistance problems. Several organizations expected the number of individuals requesting services to increase as a result of welfare reform. Most organizations reported that they were already experiencing an increased demand, with one organization reporting 40 to 50 more applicants per week. Although many of the nonprofit organizations we contacted expect more individuals to request services, none are sure how they will deal with the expected increase. They are also concerned about their ability to meet an increased need for their services because of their limited resources. A few of them also reported that they would try to raise additional money through fund-raising activities and grants. Two officials also voiced concern about their ability to meet the demand for emergency food assistance in an economic downturn. Table VI.1 describes the nonprofit organizations that we contacted:
- Provides bags of food to clients; serves approximately 26,000 individuals per year.
- Provides funding for local food assistance programs.
- Provides advocacy on issues, including food assistance, in Colorado.
- Foodbank serving metropolitan Denver, northern Colorado, and Wyoming (local affiliate of Second Harvest): serves approximately 750 hunger-relief programs, including, for example, a program to pick up surplus prepared foods and a “kid’s cafe” providing food for children in Denver’s inner city.
- Manages a kosher food pantry that provides food for those who meet income requirements; serves approximately 250 people per month.
- Provides a variety of services, including funding for approximately 13 emergency food assistance programs.
- Provides advocacy on food assistance in Colorado.
- A religious organization (local member of Catholic Charities): provides food through a network of emergency assistance centers in the Denver metropolitan area; a food bank, which pools together the resources of 22 food banks to buy food in bulk at lower cost; the SHARE program; and meals at a temporary shelter for the homeless.
The nonprofit organizations we contacted generally required their clients to meet some type of eligibility requirement in order to receive services. The organizations said that they serve many different groups of people besides legal immigrants and able-bodied adults without dependents, including the working poor, single mothers with children, and the elderly. The organizations use various resources to fund their operations, including federal government grants, foundation grants, individual contributions, and volunteer services. For example, one organization received approximately $264,000 in volunteer services and $1.4 million in in-kind food pantry donations during the last year. The greater Hartford area had a population of approximately 835,000 in 1995. The area consists of three jurisdictions—the city of Hartford and the towns of East and West Hartford. In May 1996, the unemployment rate for the Hartford area was 5.9 percent. By comparison, the state’s unemployment rate was 5.5 percent in May 1996, and the poverty rate for 1995-96 averaged 10.7 percent. Nationally, in May 1996, the unemployment rate was 5.4 percent, and the poverty rate for 1995-96 averaged 13.8 percent. 
In August 1996, before the Welfare Reform Act was implemented, over 219,000 individuals were participating in the federal Food Stamp Program statewide, according to Connecticut officials. (Statistics were not available for the greater Hartford area.) Of this total, about 5,800 were able-bodied adults without dependents. Furthermore, as of August 1996, an estimated 9,700 food stamp participants were legal immigrants. By September 1997, after many changes to the Food Stamp Program were implemented, participation had declined to about 202,000. Of this total, about 5,400 were able-bodied adults without dependents, and about 7,100 were legal immigrants. Connecticut’s Department of Social Services (DSS) administers the Food Stamp Program throughout the state. At the time of our visit, DSS officials told us that they did not have and did not plan to develop outreach services to help individuals retain their food stamps. In August 1997, however, the state received approval for a waiver of the work requirement for areas with limited employment opportunities and began to notify able-bodied adults without dependents who lost benefits because of welfare reform that their benefits could be restored. DSS’ goal is to provide access to information and services for employment and training. However, if participants in these programs meet income and work requirements, they may still qualify for food stamps. In addition, the state has developed a referral system to provide individuals with information on available food assistance. Able-bodied adults without dependents can participate in employment and training in a number of ways. For example, they can obtain training through the Connecticut Works System, which offers a “one-stop” approach to employment services and unemployment benefits. In addition, able-bodied adults without dependents can participate in the Self-Initiated Food Stamp Community Service Program/Working for In-Kind Income. In this state program, an able-bodied adult without dependents can meet workfare requirements by participating in a community service activity. The state will provide these adults with information on potential community service opportunities. Individuals accepting these community service positions will be able to maintain their eligibility for food stamps. State officials told us they had not made plans to provide outreach programs/services for legal immigrants losing food stamps because they were uncertain how many legal immigrants would become ineligible for the food stamp program. However, the state provided legal immigrants with information about obtaining U.S. citizenship when they were notified about changes in their eligibility for food stamps. The nonprofit organizations at work in the greater Hartford area provide food assistance directly and indirectly through food banks, food pantries, shelters, and soup kitchens. These organizations are affiliated with a network overseen by the local board of FEMA. The local board provides an opportunity for nonprofit organizations to communicate and coordinate the efforts or services they provide. Several of the organizations noted that it was too soon to clearly determine the effects of welfare reform. Nevertheless, they expected an increased need for food assistance because of the loss of eligibility for food stamps and were concerned about their ability to respond to that increase with little or no additional funding. 
Organizations told us that they plan to (1) seek additional funding and food donations and (2) make adjustments to the amounts and/or types of services they normally provide. Table VII.1 lists the nonprofit organizations we contacted:
- Community Renewal Team of Greater Hartford, Inc.: distributes funding to five agencies to provide food assistance.
- Food bank (local affiliate of Second Harvest): distributes donated food to over 200 private, nonprofit programs that feed the hungry (e.g., food pantries, soup kitchens, shelters).
- Center City Churches (Center for Hope): serves meals to approximately 1,200 to 1,400 individuals and provides referrals to other food assistance programs.
- Conducts research, outreach, training, and advocacy and provides referrals to other food assistance.
- Serves meals or provides bags of food.
- Food bank (local affiliate of Second Harvest): provides donated food to 450 private, nonprofit feeding agencies.
- Religious organization (local affiliate of Jewish Federation): provides kosher lunches for approximately 300 to 350 individuals.
- Distributes funding to the local food bank to service shelters, food pantries, and soup kitchens.
- Provides food vouchers, senior meal programs, hot meals, home-meal delivery for the elderly, seasonal meal programs, and part-time soup kitchens.
- Information referral service: makes referrals to food assistance programs.
- Religious organization (local affiliate of Lutheran Social Services): provides referrals to food assistance for refugees and immigrants.
These nonprofit organizations have minimal or no eligibility requirements for participation, such as showing picture identification and documenting income. Currently, the nonprofit organizations receive funding from federal, state, and local government grants; individual and corporate contributions; and volunteer services. Two municipalities—the town of West Hartford and the town of East Hartford—maintain town food pantries where the needy can obtain either bags of groceries or food vouchers redeemable at local grocery stores. In East Hartford, participants must show a photo identification, which includes a social security number and date of birth; provide verification of income for all family members; and sign a “Client Information Form” that provides proof of dependents. The Detroit Tri-County area—Macomb, Oakland, and Wayne counties—had a population of about 3.9 million people in 1995—about 41 percent of Michigan’s population. In May 1996, the Detroit metropolitan area had an unemployment rate of 4.3 percent. In comparison, the state unemployment rate in May 1996 was 4.6 percent, and the poverty rate for 1995-96 averaged 11.7 percent. Nationally, in May 1996, the unemployment rate was 5.4 percent and the poverty rate for 1995-96 averaged 13.8 percent. In November 1996, as states were beginning to implement the Welfare Reform Act, about 427,600 persons received food stamps in the tri-county area. Of this total, about 29,500 were able-bodied adults without dependents. According to a Michigan state official, the agency does not track the number of legal immigrants receiving food stamps. As of September 1997, after many changes to the Food Stamp Program were implemented, about 381,500 individuals were receiving food stamps. Michigan’s Family Independence Agency (FIA) administers the Food Stamp Program in all counties, including Macomb, Oakland, and Wayne. 
An FIA official told us that FIA was assisting able-bodied adults without dependents with employment and training so that they can become self-sufficient while meeting work requirements that allow them to continue receiving food stamps. According to an FIA official, the state is not planning to create a new food assistance program to assist legal immigrants who lost food stamp benefits. Able-bodied adults without dependents have several opportunities to participate in employment and training and meet the Food Stamp Program’s work requirements. They can participate in a state-approved employment training program, work 20 hours a week, or perform 25 hours of public service at a nonprofit agency. Effective October 1, 1997, the number of community service hours must equal the benefit divided by the minimum wage ($5.15 per hour). In addition, in 1996, Michigan established the Food Stamp Community Service Program, which focuses on able-bodied adults without dependents. In fiscal year 1998, the state will receive $13.4 million from USDA’s FNS to expand work programs for this population. Nonprofit organizations in greater Detroit, Michigan, some of which are affiliated with national groups, provide direct and indirect food assistance through an established network that includes soup kitchens, food pantries, and food banks. According to officials of 10 nonprofit agencies, able-bodied adults without dependents and legal immigrants who lose their food stamps as a result of welfare reform will look for food assistance from these nonprofit organizations. Several of these officials told us that they had already experienced an increased need for their services as a result of welfare reform. They expressed concern about their ability to provide these additional services because of limited funding. However, several organizations we visited have developed strategies to increase the supply of food. These strategies include (1) raising additional money through fund-raising activities, (2) seeking more government and corporate grants, (3) encouraging Michigan to apply for federal food stamp waivers, (4) raising funds for target groups of legal immigrants, and (5) improving the emergency provider infrastructure. Table VIII.1 describes the nonprofit organizations that we contacted:
- Food bank (a Second Harvest affiliate): serves about 300 emergency food providers, including soup kitchens and food pantries.
- Refers clients to 124 soup kitchens and food pantries in Wayne County and provides technical assistance for emergency food providers.
- Retrieves perishable food from restaurants and other food service organizations; each month transports 60,000 pounds of food to tri-county soup kitchens and shelters.
- Provides about 500,000 pounds of food per year to about 1,100 Jewish families in the metropolitan area.
- Establishes about 20 new emergency food assistance providers each year; since 1991, the coalition has received about $1.8 million in grants which it distributed to about 270 emergency food providers.
- Provides 7 days of food for needy families; also serves two meals daily for the homeless in downtown Detroit.
- Commodity Supplemental Food Program, providing groceries for mothers, infants, preschool children, and seniors over the age of 60 meeting certain income guidelines.
Many nonprofit organizations had eligibility requirements for individuals to receive their food assistance services. One required a certain qualifying income level but frequently made exceptions. 
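As a brief aside, Michigan’s community service rule described above is a simple ratio of the monthly benefit to the $5.15 minimum wage. The following minimal sketch, which is not part of GAO’s review, illustrates the calculation using a hypothetical $125 monthly benefit and assumes that partial hours round up to the next whole hour; the report does not specify how rounding is handled.

```python
import math

MINIMUM_WAGE = 5.15  # dollars per hour, as cited in the Michigan rule above


def required_service_hours(monthly_benefit: float) -> int:
    """Community service hours owed for a given monthly food stamp benefit.

    Assumes partial hours round up; the rounding convention is an assumption.
    """
    return math.ceil(monthly_benefit / MINIMUM_WAGE)


# Hypothetical example: a $125 monthly benefit works out to 125 / 5.15, or about
# 24.3 hours, which rounds up to 25 hours of community service.
print(required_service_hours(125.00))  # prints 25
```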
Some providers serve only people within certain geographical boundaries. Others will provide food for anyone who asks. A few groups provide groceries for people on special diets; for example, an Oakland County food pantry provides groceries for those who keep kosher kitchens. Other providers primarily serve specific groups such as Hmong, Vietnamese, and migrant farm workers. Detroit’s emergency food assistance providers depend upon a variety of resources to fund their operations: grants from large corporate foundations; federal, state, and local governments; and community fund-raising activities that donate food and money. Community-based organizations also depend upon volunteers to manage and staff their food pantries and soup kitchens. According to the emergency food providers we interviewed, food is generally available for soup kitchens and food pantries, but additional funding for infrastructure is needed. The supply of food available for emergency food services does not depend only on the number of people needing emergency food services or the amount of food available for donation. The availability of funding for infrastructure—transportation and storage space, including refrigeration, and staff—is key to a successful food assistance operation. For example, many smaller soup kitchens and food pantries lack refrigeration and storage space, which prevents them from obtaining and keeping donated meats, vegetables, fruits, or dairy products. Furthermore, these organizations anticipate an increase in individuals and families needing their services. For example, early in 1997, one food pantry—Yad Ezra—realized that welfare reform would affect a number of Russian Jewish immigrants and that some means had to be found to replace the food stamps that would no longer be available. A Yad Ezra survey indicated that 212 of the 1,006 families currently being assisted would be affected by welfare reform and that many of the families are elderly and sick. Therefore, their ability or desire to learn English and gain citizenship is doubtful. Only 39 percent of the surveyed immigrants are taking citizenship classes. Specifically, to assist the 212 families, Yad Ezra will need $100,000 to augment the food pantry’s current year’s budget of $680,000. Each subsequent year, the organization will need to raise additional money to assist needy legal immigrants. Any additional money could exceed $100,000 each year, since Yad Ezra did not attempt to identify any new families or individuals whom it was not currently serving and who could be affected by welfare reform. Officials from Yad Ezra believe that Yad Ezra’s effort to replace food stamps for its legal immigrants is not likely to be duplicated by other food pantries because, unlike most other organizations, Yad Ezra serves a specific group of legal immigrants and was able to obtain the necessary resources to meet its needs. Houston, Texas (Harris County) had a population of 3.1 million in 1995. In May 1996, the unemployment rate was 5.1 percent in Houston, while the state unemployment rate was 5.4 percent. The poverty rate in 1995-96 averaged 17 percent. Nationally, in May 1996, the unemployment rate was 5.4 percent, and the poverty rate for 1995-96 averaged 13.8 percent. In February 1997, as states were beginning to implement the Welfare Reform Act, over 333,000 individuals in Houston received food stamps. Of this population, about 14,000 were able-bodied adults without dependents, and about 17,000 were legal immigrants. 
As of September 1997, after many changes to the Food Stamp Program were implemented, the number of able-bodied adults without dependents receiving food stamps had decreased to about 4,900, and the number of legal immigrants had decreased to about 2,700. The Texas Department of Human Services (TDHS) administers USDA’s Food Stamp Program. Texas had obtained a waiver of the work requirement for selected counties; however, it decided not to implement the waiver in Houston because the unemployment rate was less than 10 percent. Therefore, all able-bodied adults without dependents in Houston are required to participate in employment and training activities in order to continue receiving food stamps. According to TDHS officials, activities meeting these work requirements include regular employment; self-employment; volunteer work with a business, government entity, or nonprofit organization; and/or participation in the Job Training Partnership Act or the Trade Adjustment Assistance Act Program. In addition, these adults can obtain assistance from the Texas Workforce Commission’s Food Stamp Employment and Training Program. The purpose of this program is to move welfare recipients to work as quickly as possible. However, participation in the job search and job search training components of this program does not satisfy the work requirement. Effective February 1998, Texas is providing targeted food assistance to elderly and disabled legal immigrants who were receiving food stamp benefits as of August 22, 1996, and lost those benefits because of welfare reform. Texas is providing about $18 million for this effort. Benefits will range from $10 to $122 per month per individual. The nonprofit organizations that provide direct and indirect food assistance in Houston—including food banks and food pantries—generally operate independently of each other. At the time of our visit, most of the organizations reported that their ability to provide food assistance for those needing it had not yet been affected by welfare reform. However, most organizations told us that they expected the amount of food assistance they provide to increase within the next 2 to 3 years because of welfare reform. In addition, many organizations’ officials expressed concern that they may have difficulty in providing food assistance in the future. One organization attributed this difficulty to the fact that so many organizations were competing for the same monetary and food donation resources. Most organizations did not have planned approaches for dealing with the expected increase in the need for their services. However, one organization is considering a reduction in the number of items that it distributes in bags of groceries in order to meet the expected increased need for its services. Table IX.1 lists the nonprofit organizations that we contacted:
- Community service agency: provides supplemental food for low-income families in emergency situations.
- Food bank (local affiliate of Second Harvest): distributes food to local charities that care for the needy.
- Community social service agency (local affiliate of United Way): distributes funds to community groups that target hunger.
- Distributes food in low-income neighborhoods through volunteers who operate food pantries and community gardens.
- Distributes rice, instant noodles, and canned food within the Vietnamese community.
- Conducts fund-raising activities for member coalitions that provide food assistance.
- Provides funding that supplements local food assistance programs. 
- Provides food support for 30 agencies and 110 food pantries.
- Associated Catholic Charities of the Diocese of Galveston-Houston (Guadalupe Social Services): provides emergency food assistance and bags of groceries once a month for senior citizens and disabled persons.
Most organizations had income eligibility requirements for their food assistance services, and several limited their assistance to individuals residing in certain areas. In addition, one organization focused its efforts in the Vietnamese community. Funding sources for the nonprofit organizations we visited varied. Most received funding from federal, state, and local government grants and donations from religious organizations and individuals. Major contributors to this report: Robert E. Robertson, Associate Director; Patricia Gleason, Assistant Director; Tracy Kelly Solheim, Project Leader; Carolyn Boyce, Senior Social Science Analyst; Carol H. Shulman, Communications Analyst; Kathy Alexander; Renee McGhee-Lenart; Janice Turner; Sheldon Wood, Jr.; and Patricia A. Yorkman.
Pursuant to a congressional request, GAO reviewed the impact of welfare reform on the Food Stamp Program, focusing on: (1) the actions, if any, that states have taken to assist those individuals who lose eligibility for the Food Stamp Program; and (2) related actions, if any, taken by other organizations in selected localities--local governments and nonprofit organizations--to assist those individuals who lose their eligibility for the Food Stamp Program. GAO noted that: (1) most states are taking a variety of measures to address the changes in the Food Stamp Program as a result of welfare reform; (2) for able-bodied adults without dependents, many states are providing employment and training assistance; (3) this assistance, although primarily intended to move these individuals toward self-sufficiency, may still allow them to qualify for food stamp benefits if they meet both income and work requirements; (4) most states have obtained the authority from the Department of Agriculture, if they choose to exercise it, to continue providing food stamp benefits for individuals in areas with high unemployment or in areas with insufficient jobs; (5) 20 states are providing or plan to provide legal immigrants with information on how to become U.S. citizens; (6) because it takes over 1 year on average to process citizenship applications, many legal immigrants lost their food stamp benefits as of August 22, 1997; (7) the Food and Nutrition Service estimated that 935,000 legal immigrants had lost their federal food stamp benefits; (8) some states have existing programs that provide food assistance for the needy--such as food pantries--that able-bodied adults without dependents and legal immigrants who have lost their food stamps already had access to; (9) some states have developed new programs to specifically meet the needs of individuals who lose their food stamps; (10) 10 states--including 4 states estimated to have about 70 percent of the legal immigrants who receive food stamps in the U.S.--are purchasing or planning to purchase federal food stamps with their own funds--primarily for legal immigrant children and the elderly; (11) in December 1997, the states involved indicated that about 241,000 of these individuals are now receiving food stamp benefits funded by the states; (12) the extent to which any of these actions will meet the food assistance needs of those affected remains unknown; (13) in the five localities GAO visited, government officials are implementing their state's efforts to address changes in the Food Stamp Program and, in some cases, are working with local nonprofit organizations to plan for an expected increase in the need for food assistance; (14) most of the nonprofit organizations GAO visited said that it is too early to assess the impact of welfare reform on their food assistance programs; and (15) however, the organizations fear that their limited resources may be insufficient to meet the needs of the individuals who have lost their food stamps, which included the basic foods that the program provided.
You are an expert at summarizing long articles. Proceed to summarize the following text: In the United States, product safety, including fire safety, is largely promoted through a process of consensus-based standards and voluntary certification programs. ANSI establishes requirements to ensure that standards are formulated through a consensus-based process that is open and transparent and that adequately considers and resolves comments received from manufacturers, the fire safety community, consumers, government agencies, and other stakeholders. Standards are generally developed in the technical committees of organizations that include independent laboratories, such as Underwriters Laboratories; and trade and professional associations, such as the American Society for Testing and Materials. These entities form a decentralized, largely self-regulated network of private, independent, standards-development organizations. For those organizations that choose to follow ANSI procedures, ANSI performs audits and investigations to ensure that standards-development organizations follow approved consensus-based procedures for establishing standards. Standards promulgated by such organizations can become part of a system of American National Standards currently listed by ANSI. Overall, according to NFPA, the U.S. standards community maintains over 94,000 active standards, both American National Standards and others. These 94,000 active standards include private sector voluntary standards as well as regulatory and procurement standards. The process of developing consensus-based standards is designed to balance the needs of consumers, federal and nonfederal regulators, and manufacturers. According to ANSI officials, new standards are commonly adopted or existing ones are frequently revised because manufacturers express a need for such actions on the basis of the development of new products. Representatives of other parties—such as regulators or consumers—may raise concerns about product safety and performance. For marketing and consumer safety purposes, product manufacturers may have their products tested at independent testing laboratories to certify that the products meet applicable product standards. This testing and certification process is called “product conformity testing and certification.” Some local, state and federal agencies require such testing and certification. For example, manufacturers of electrical home appliances have their products tested and certified by Underwriters Laboratories to enable them to attest that the products meet safety standards regarding fire, electrical shock, and casualty hazards. Alternatively, where acceptable, manufacturers can certify on their own that their products were tested and met applicable standards. Standards are also voluntarily accepted and widely used by manufacturers and regulatory agencies to provide guidance and specifications to manufacturers, contractors, and procurement officials. Each year millions of products are sold in the United States and throughout the world that bear the mark of testing organizations. Consumers, manufacturers, and federal agencies follow the very widespread, internationally recognized practice of relying on consensus standards and testing at laboratories to promote public safety. In the case of facilities and residences, the most extensive use of the standards is their adoption into model building codes by reference. 
Model building codes contain standards published by many organizations, including professional engineering societies, building materials trade associations, federal agencies, and testing laboratories. When erecting facilities; renovating offices; and purchasing equipment, materials, and supplies, federal agencies rely on the fire safety standards developed by private standards-development organizations. Furthermore, the federal government has historically encouraged its agencies to use standards developed by these organizations. For example, in its 1983 Circular A-119, OMB encouraged agencies to use these standards. Moreover, the National Technology Transfer and Advancement Act of 1995 requires agencies to use standards developed or adopted by voluntary consensus bodies, except when it is inconsistent with applicable law or otherwise impractical. Essentially, OMB Circular A-119 and the act direct federal agencies to use voluntary consensus standards whenever possible. They also direct federal agencies to consult with and participate, when appropriate, in standards-setting organizations and provide explanations when they do not use voluntary consensus standards in their procurement or regulatory activities. As of June 2001, according to NFPA, about 15 percent of the estimated 94,000 standards effective in the United States had been developed by civilian federal agencies. Furthermore, the Public Buildings Amendments of 1988 require GSA to construct or alter buildings in compliance with the national building codes and other nationally recognized codes to the maximum extent feasible. Federal agencies also engage in a variety of activities related to certifying that products conform to standards. For example, the National Institute of Standards and Technology publishes directories listing more than 200 federal government procurement and regulatory programs in which agencies are actively involved in procuring or requiring others to procure products meeting certification, accreditation, listing, or registration requirements. Furthermore, many federal agencies participate in the development of fire standards and product-testing procedures. For example, GSA participates on technical committees, such as those of NFPA and Underwriters Laboratories. As a result, GSA specifies numerous products and building code regulations that meet standards and testing requirements from standards-development organizations and testing laboratories. In addition, voluntary standards and the testing of products to those standards are widely accepted by other civilian federal agencies, such as the departments of Agriculture, Housing and Urban Development, the Interior, Labor, Transportation, and the Treasury as well as the Environmental Protection Agency. The federal government has no comprehensive, centralized database regarding the incidence of fires in federal facilities or the causes of such fires. According to NFPA, fires in office facilities, including federal civilian facilities, annually cause about 90 injuries and about $130 million in property damages. Although responsible for maintaining a national fire incident database and for serving as the lead agency in coordinating fire data collection and analysis, the U.S. Fire Administration does not collect data on the number of fires in federal office facilities and the causes of those fires, nor about specific types of products involved in the fires. 
For its part, GSA collects a minimal amount of information in the facilities for which it is responsible—about 330 million square feet in over 8,300 buildings—to determine the number and causes of fires that have occurred in the facilities. In addition, like the U.S. Fire Administration, NFPA does not gather specific information about whether a fire occurred on private or government property or whether the fire involved specific products. Thus, these databases do not contain sufficiently detailed data to allow the identification of fire incidents in federal facilities or fires associated with specific product defects. Also, the government does not have a mechanism for providing fire incident data to standards- development organizations when they consider the revision of product standards and testing procedures. As a result of a lack of detailed data collection and reporting systems, the government cannot assess the number and causes of fires in federal facilities and therefore cannot determine if any action is needed to ease the threat of fire. Certain private sector firms take steps to identify the nature of the fire threat in their facilities. For example, to help insurance companies, communities, and others evaluate fire risks, the Insurance Services Office, an affiliate of the insurance industry of the United States, maintains detailed records and performs investigations about individual properties and communities around the country, including such factors as the physical features of buildings, detailed engineering analyses of building construction, occupancy hazards, and internal and external fire protection. In addition, the Marriott Corporation, a worldwide hotel chain, maintains data on fires throughout its facilities. According to a Marriott official, Marriott uses this information to assess the risk of fire in its facilities and to take corrective actions. At the same time, the number and causes of fires in federal workspace are not known. The federal government—an employer of over two million civilian employees—does not have a system for centrally and comprehensively reporting fire incidents in its facilities and the causes of those incidents. For example, according to GSA officials, the agency-- which manages over 300 million square feet of office space--collects information on fires that cause over $100,000 in damage. However, when we requested this information, GSA could not provide it and provided examples of only two fires. According to a GSA official, GSA cancelled a requirement for its regional offices to report smaller fires to a central repository. GSA explained that it found the task of reporting smaller fires to be very labor intensive and time consuming. GSA also found that analysis of the reported information could not determine specific fire trends. Databases that are available and maintained by federal agencies—such as databases of the Department of Labor, Consumer Product Safety Commission, and U.S. Fire Administration—do not provide sufficient detail for determining the number and causes of fires in federal facilities, including the products involved in the fires. For example, according to the Department of Labor (Labor), 7 civilian federal employees died (excluding the 21 who died in forest or brush fires), and 1,818 civilian federal employees were injured while at work as a result of fires or explosions between 1992 and 1999. 
Although Labor gathers information about federal employees’ injuries and fatalities caused by fires, this information does not identify details, such as the cause of the fire. Furthermore, because of a lack of reporting detail, the data do not lend themselves to an analysis of what specific products may have been involved in the fire and whether the product had been certified as meeting appropriate product standards. Within Labor, OSHA’s Office of Federal Agency Programs, the Bureau of Labor Statistics, and the Office of Workers’ Compensation Programs routinely gather information about federal employee injuries and fatalities. OSHA’s Office of Federal Agency Programs, whose mission is to provide guidance to each federal agency on occupational safety and health issues, also collects annual injury statistics from each federal agency. These statistics are in aggregated form, however, and do not provide detail about the nature or source of the injury. The Department of Labor’s Bureau of Labor Statistics has been collecting information on federal employee fatalities since 1992 through its Census of Fatal Occupational Injuries (CFOI). This census contains information regarding work-related fatality data that the federal government and the states have gathered from workers’ compensation reports, death certificates, the news media, and other sources. According to the CFOI, between 1992 and 1999, 7 civilian federal employees were fatally injured due to fire-related incidents while working (excluding the 21 who died in brush or forest fires). Although the fatal injuries census does identify federal employee fatalities due to fires, it does not contain details about the fire, such as the cause of the fire or the types of products or materials that may have been involved in the fire. Also within the Department of Labor, the Office of Workers’ Compensation Programs maintains information about federal employees or families of federal employees who have filed claims due to work-related traumas. The office was able to provide from its database information about the claims of federal employees or their families resulting from fire-related incidents. According to the Office of Workers’ Compensation, between 1992 and 1999, 1,818 civilian federal employees were injured in federal workspace as a result of fire-related incidents while working. However, this information includes data only for those federal employees who actually filed claims. Similar to CFOI data, this database does not contain additional details about the fire, such as the cause of the fire or the types of products or materials that may have been involved in the fire. The Consumer Product Safety Commission maintains a variety of data on product recalls and incidents related to consumer products. However, none of the four databases that it maintains can identify information about federal facilities or federal employees. The U.S. Fire Administration is chartered as the nation’s lead federal agency for coordinating fire data collection and analysis. However, the national fire incident databases maintained by the U.S. Fire Administration do not gather specific information about whether a fire occurred on private or government property or whether the fire involved specific products. The Fire Administration maintains the National Fire Incident Reporting System (NFIRS)—a national database through which local fire departments report annually on the numbers and types of fires that occur within their jurisdictions, including the causes of those fires. 
Reporting, however, is voluntary; according to the U.S. Fire Administration, this results in about one-half of all fires that occur each year being reported. In addition, the U.S. Fire Administration does not collect data on the number of fires in federal office facilities and the causes of those fires, nor about specific types of products involved in a fire. According to its comments on a draft of our report, the Fire Administration does not have the resources or authority to implement a nationwide study of fires in federal workspace. In addition to the federal databases, NFPA also maintains a national fire incident database. According to NFPA, between 1993 and 1997, an average of 6,100 fires occurred per year in federal and nonfederal office space, resulting in an average of 1 death, 91 injuries, and $131.5 million in property damage per year. NFPA’s estimates are based on information that fire departments report to the Fire Administration’s NFIRS system and on information from NFPA’s annual survey. NFPA annually samples the nation’s fire departments about their fire experiences during the year; using this data, NFPA projects overall information about fires and their causes to the nation as a whole. However, neither the U.S. Fire Administration nor NFPA gathers specific information about whether a fire occurred on private or government property or whether the fire involved specific products. In the past, the federal government has collected data regarding fires occurring on federal property. The Federal Fire Council was originally established by Executive Order within GSA in 1936 to act as an advisory agency to protect federal employees from fire. The council was specifically authorized to collect data concerning fire losses on government property. However, the council moved to the Department of Commerce in 1972 and was abolished in 1982. Along with manufacturers, consumer representatives, fire safety officials, and others, the federal government is one of several important stakeholders involved in the standards-development process. However, as previously discussed, the government does not consistently and comprehensively collect information on fire incidents in federal facilities, and hence it cannot systematically provide these data to standards- development organizations for consideration during revisions of standards. Furthermore, some federal agencies may be slow to respond to information about failures of certain products, including those products intended to suppress fires. In at least one case, a fire sprinkler product that failed in both the work place and the testing laboratory, as early as 1990, continued to be used in federal facilities, and it has only recently been replaced at some facilities. This case is discussed below. Omega sprinklers were installed in hundreds of thousands of nonfederal facilities and in about 100 GSA-managed buildings. In 1990, a fire occurred at a hospital in Miami, FL, resulting in four injuries. During this fire, Omega sprinklers failed to activate. Through 1998, at least 16 additional fires occurred, during which Omega sprinklers failed to work, including a May 16, 1995, fire at a Department of Veterans Affairs hospital in Canandaigua, NY. During the New York fire, an Omega sprinkler head located directly over the fire failed to activate. Losses resulting from these and other fires were estimated at over $4.3 million (see table 1). 
Although none of the fires reported in table 1 occurred in Fairfax County, VA, the County fire department became concerned that many of the sprinklers were installed in public and private facilities in the county. Throughout the mid-1990s, by publicizing its concerns about the sprinklers, the County fire department contributed to the widespread dissemination of information about the sprinklers in the media. In addition, tests performed in 1996 at independent testing laboratories—Underwriters Laboratories and Factory Mutual Research Corporation—revealed failure rates of 30 percent to 40 percent. On March 3, 1998, the Consumer Product Safety Commission announced that it had filed an administrative complaint against the manufacturer, resulting in the October 1998 nationwide recall of more than 8 million Omega sprinklers. The agency began investigating Central Sprinkler Company’s Omega sprinklers in 1996 when an agency fire engineer learned about a fire at a Marriott hotel in Romulus, MI, where an Omega sprinkler failed to activate. After identifying that there was a hazard that warranted recalling the product, the Commission staff sought a voluntary recall from Central. Unable to reach such an agreement with Central, the agency’s staff were authorized to file an administrative complaint against the company. Moreover, the Commission attempted to coordinate with other federal agencies, such as the Department of Veterans Affairs and GSA. The Department of Veterans Affairs participated in the recall in accordance with the terms of the Commission’s settlement agreement with the manufacturer. GSA officials stated that they became aware of the problems associated with Omega sprinklers in 1996 after hearing about them from the news media and Fairfax County Fire Department officials. GSA began a survey to identify the 100 GSA-managed buildings that contained the sprinklers. It also pursued an agreement with the manufacturer, resulting in a 1997 negotiated settlement for the replacement of some 27,000 devices in GSA-controlled buildings. Officials from OSHA stated that they were unsure about when they became aware of the problems associated with Omega sprinklers. An agency official explained that OSHA generally does not monitor information regarding problems with specific products, except for Consumer Product Safety Commission recalls. According to OSHA, it checks such recalls only informally and within the limited context of one of its programs, but not as a part of its primary compliance efforts. In addition, according to OSHA officials, when OSHA did find out about the Omega sprinkler problems, it took no action because such problems are outside the agency’s jurisdiction unless the problems involve noncompliance with applicable OSHA requirements. According to an OSHA official, OSHA does issue “Hazard Information Bulletins” that could potentially contain information about failures of specific products. However, these bulletins do not generally duplicate Consumer Product Safety Commission recall information and do not generally concern consumer products. Federal facilities not controlled by GSA—including those of Capitol Hill (the House of Representatives, the Capitol, the Senate, and the Library of Congress) and the Smithsonian Institution—have either recently replaced or are just now replacing the defective Omega sprinklers. 
According to an official of the Architect of the Capitol, although the facility’s management was aware of the problems with the sprinklers, it continued using them because of cost considerations. At the time our review was completed, the Architect of the Capitol had removed and replaced the Omega sprinklers from all of the House of Representatives buildings and Capitol buildings, most of the Senate buildings, and one of the Library of Congress’ buildings. The Architect of the Capitol was also in the process of replacing them in the remainder of the Senate and Library buildings. In addition, according to the Chief Fire Protection Engineer of the Smithsonian, agreement for a free-of-cost replacement of the Omega sprinklers has been reached, although the process of replacing them had not begun at the time we completed our work. At your request, we also reviewed concerns about the extent to which information technology equipment—such as computer printers, monitors, and processing units—could be a source of fires in offices, homes, and other places, including federal workspace. A private testing laboratory in Sweden recently performed experiments that suggested that some types of information technology equipment could be subject to damage from flames that originate from external sources. In response to these concerns, the Information Technology Industry Council convened a panel of stakeholders—including the Consumer Product Safety Commission, Underwriters Laboratories, and others—to study the issue. The panel found that information technology equipment did not pose a widespread fire threat in the United States. According to the representatives of the American Chemistry Council, the threat of information technology equipment fires from external sources is mitigated by the presence of various types of flame retardants in the casings of this equipment. Moreover, representatives of the Information Technology Industry Council stated that the industry has a policy of making its equipment as safe as possible for consumers. They agreed, however, that the issue of the flammability of information technology equipment needed further study. Fires, even relatively small ones, can have tragic and costly consequences. Knowing the numbers and types of fires in workspace, as well as the causes of fires and any products involved, is critical for understanding the extent of the risk of fire and can lead to identification and implementation of steps to reduce this risk. Some private sector organizations—for example, a major hotel chain and some insurance organizations—track the number of fires in different types of facilities and their causes. Such information is used to manage this risk and reduce property damage, injuries, and the loss of life. However, the federal government, which employs over two million people in space that GSA and other agencies manage, collects very limited information on fires and lacks information on the risk of fires in its workspace. Without more complete information on fires, the federal government—a key player in the standards- development process—cannot provide timely information on the causes of fires in federal facilities to standards-development organizations for their use in developing and revising standards, testing procedures, and certification decisions. 
Collecting and analyzing data on the risk of fire in its workspace could enable the government to better protect its employees and enhance its ability to participate in producing standards that would better protect the public at large from fire. We recommend that the Administrator, U.S. Fire Administration, in conjunction with the Consumer Product Safety Commission, GSA, OSHA, and other federal agencies that the Fire Administration identifies as being relevant, examine whether the systematic collection and analysis of data on fires in federal workspace is warranted. If they determine that data collection and analysis are warranted, data that should be considered for collection and analysis include: the number of fires in federal workspace; property damage, injuries, and deaths resulting from such fires; and the causes of these fires, including any products involved. In addition, the agencies should discuss, among other topics deemed relevant, the availability of resources for implementing any data collection system and any needed authority to facilitate federal agencies’ cooperation in this effort. We provided copies of a draft of this report to the heads of the Federal Emergency Management Agency’s Fire Administration and GSA, as well as the Consumer Product Safety Commission and the Department of Labor. Because of its role in testing Omega sprinklers, we also provided a copy of the report to Underwriters Laboratories. Although Underwriters Laboratories had no comments on the draft, the other recipients of the draft provided comments via E-mail. These comments, and our responses to them, are discussed below. In commenting on our draft report, the Director of the Fire Administration’s National Fire Data Center agreed in principle with our recommendation by stating that Fire Administration officials would gladly meet with GSA and others to examine whether specialized data collection is warranted. We welcome the Fire Administration's proposal. In addition, the Fire Administration listed several obstacles to the creation of a complete and accurate fire incident reporting system: (1) its lack of resources, (2) its lack of authority to require other federal agencies to report fires, and (3) its lack of on-site management and control over an existing fire incident reporting system, the National Fire Incident Reporting System (NFIRS). Moreover, the Fire Administration commented that it does not specifically collect data on the number and causes of fires in federal office facilities and that no indication exists that the fire problem in federal facilities differs significantly from the overall national fire experience in similar workplace environments. We agree that data on federal fires are not currently collected, and we would cite this lack of information as a significant reason for exploring the need for a system to report the number and causes of fires in federal space. We further agree that a lack of resources, of authority to compel fire incident reporting, and of management over reporting may pose serious obstacles to improved fire incident reporting; therefore, we urge that the Fire Administration address these factors with other agencies when it meets with them to discuss the need for more specialized reporting on fires in federal work space. GSA senior program officials commented on a draft of our report. They requested that we delete a statement in our draft report that GSA could not provide us with complete information on fires that caused over $100,000 damage in federal facilities it manages. 
GSA said that our statement was not germane. We declined to make this change because the statement is germane to our discussion about a lack of information on fires in the federal workplace. GSA’s inability to provide the information we requested serves to illustrate this very point. In addition, we added information in our report regarding GSA’s explanation that it had cancelled a previous requirement for its regional offices to report smaller fires to a central repository. GSA explained that such reporting was labor intensive and time consuming, and analyses of this information could not yield specific fire trends. We agree with GSA that some reporting requirements may be labor intensive, time consuming, and not helpful. Therefore, in our view, as stated above and as reflected in our recommendation, the Fire Administration should address these factors with GSA and other agencies when it meets with them to discuss the need for more specialized reporting on fires in federal work space. GSA did not comment on the recommendation in the draft of our report. In addition, Department of Labor officials provided technical and clarifying comments, all of which we incorporated into our report. However, they did not comment on the recommendation. The Department of Labor’s Bureau of Labor Statistics Assistant Commissioner, Office of Safety and Health, provided additional data regarding the number of federal employees who died as a result of fires or explosions from 1992 through 1999, clarifying that most of these fatalities occurred outside of federal buildings. The Department’s Occupational Safety and Health Administration’s Acting Director for Policy provided additional information, which we incorporated into our report, about the extent of its involvement in the Omega sprinkler case and the rationale for the actions it took. The Consumer Product Safety Commission stated that its comments were editorial in nature, and we revised our report to incorporate these comments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the cognizant congressional committees; the Administrator, General Services Administration; the Chairman, Consumer Product Safety Commission; the Secretary of Labor; and the Administrator, Federal Emergency Management Agency. We will also make copies available to others on request. If you have any questions about this report, please contact me at (202) 512- 4907. Key contributors to this report were Geraldine Beard, Ernie Hazera, Bonnie Pignatiello Leer, Bert Japikse, and John Rose. Our report (1) provides information on the federal government’s reliance on private voluntary fire standards and testing products against those standards and (2) discusses whether data that are available about fire incidents and their causes in civilian federal facilities are sufficient to protect federal workers from the threat of fire. To examine the government’s reliance on fire safety standards and testing, we reviewed policies and procedures regarding how standards-setting organizations and independent laboratories establish fire safety standards and test products, as well as the roles of federal agencies and other interested parties in these processes. 
We contacted standards-development organizations, including Factory Mutual Research, Underwriters Laboratories, Southwest Research Institute, the American National Standards Institute (ANSI), and the American Society for Testing and Materials. We also obtained information regarding how testing and standards-setting laboratories and organizations consider fire incident data and other information about fire hazards when revising fire safety standards and testing procedures. We obtained and analyzed regulatory and statutory criteria regarding the federal role in fire safety standards and testing. We interviewed federal officials from the General Services Administration (GSA), the National Institute of Standards and Technology, the U.S. Fire Administration, the Consumer Product Safety Commission, and the Department of Labor, as well as officials from standards-development organizations. We also interviewed fire protection officials, including officials from the International Association of Fire Fighters, the International Association of Fire Chiefs, and the Fairfax County, VA, Fire Department to obtain information on setting standards and testing products. To examine whether data are available about incidents and causes of fires in civilian federal facilities, we contacted GSA, the manager of about 40 percent of all civilian federal office space. However, GSA does not routinely collect information about all fires that occur in federal facilities. Therefore, we obtained and analyzed fire protection incident data from the Fire Administration and the National Fire Protection Association (NFPA). The U.S. Fire Administration maintains the National Fire Incident Reporting System, which is the world’s largest national annual database of fire incident information. State participation is voluntary, with 42 states and the District of Columbia providing reports. The data in the National Fire Incident Reporting System comprise roughly one-half of all reported fires that occur annually. NFPA annually surveys a sample (about one-third) of all U.S. fire departments to determine their fire experiences during the year. NFPA uses this annual survey together with the National Fire Incident Reporting System to produce national estimates of the specific characteristics of fires nationwide. Through a review of the databases, we found that there was not sufficient detail to determine which of the fires reported occurred in federal facilities. In addition, the fire departments do not document the name brands of any product that might have been involved in a fire. However, NFPA was able to provide information about fires that have occurred in office space (federal and nonfederal) from 1993 through 1998. Finally, we did not conduct a reliability assessment of NFPA’s database or the National Fire Incident Reporting System. We also attempted to determine the number of civilian federal employees who may have been injured or killed as a result of a fire-related incident while at work. In this regard, we obtained information from the Bureau of Labor Statistics’ Census of Fatal Occupational Injuries (CFOI) regarding civilian federal employee fatalities from 1992 through 1999. The federal government and the states work together to collect work-related fatality data from workers’ compensation reports, death certificates, news stories, and other sources for CFOI. All 50 states participate in CFOI. 
The Bureau of Labor Statistics was able to provide information from CFOI describing the number of civilian federal employees fatally injured due to fire-related incidents while at work. We also obtained information from the Office of Workers’ Compensation Programs from 1992 through April 2001 regarding civilian federal employees or their families who have filed for workmen’s compensation as a result of an injury or fatality due to a fire-related incident while at work. However, the data represent only those incidents for which a civilian federal employee or the family filed a claim. With the limited data available from the fatal injuries census and Office of Workers’ Compensation Programs, we were unable to do an analysis of the number of claims filed due to bombings, such as the April 1995 Murrah Federal Building bombing in Oklahoma City, OK, and the August 1998 bombing of the U.S. Embassy in Dar Es Salaam, Tanzania. In addition, according to CFOI, the fatality data do not include fatalities due to bombings, such as the Oklahoma City bombing and the Dar Es Salaam bombing. When a fatality is reported, CFOI requires that Assaults and Violent Acts, Transportation Accidents, Fires, and Explosions reports take precedence in the reporting process. When two or more of these events occur, whoever inputs the information selects the first event listed. The Bureau of Labor Statistics classified the Oklahoma City bombing deaths as homicides under the Assaults and Violent Acts category. In addition, the Office of Workers’ Compensation Programs was able to provide information on the number of injuries to civilian federal employees that its Dallas District Office reported for 1995 as resulting from explosions. According to the Office of Workers’ Compensation Programs, it is likely that many of these injuries resulted from the Oklahoma City bombing. Furthermore, the databases do not contain any details of fires. We used the fatality data from CFOI, because it is the more comprehensive source of federal employee fatality information. Finally, we did not conduct a reliability assessment of the Bureau of Labor Statistics’ CFOI database or the database of the Office of Workers’ Compensation Programs. We also obtained information about fire incidents related to consumer products by contacting the Consumer Product Safety Commission. The Commission maintains several databases that allow it to conduct trend analyses of incidents involving various types of products, including the National Electronic Injury Surveillance System, a Death Certificate File, the Injury or Potential Injury Database, and the In-Depth Investigation File. In addition, the Commission maintains a library (paper files) of information on products that have been recalled. However, none of these sources contained information that would identify information about federal facilities, federal employees, or product brand names, with the exception of those that have been recalled. To examine the quality and limitations of these data, we reviewed relevant documents and interviewed officials from organizations that compile and report the data, including the National Fire Protection Association, Fire Administration, Consumer Product Safety Commission, Occupational Safety and Health Administration, Bureau of Labor Statistics, Office of Workers’ Compensation Programs, and National Institute of Standards and Technology. 
As requested, we examined details about reporting incidents and concerns involving Omega sprinkler heads and how standards-development organizations, federal agencies, and others responded to reports about the failures of these devices. We contacted officials from, and in some cases obtained documentation from, the Fairfax County (VA) Fire Department. We also contacted various federal regulatory agencies or agencies that used or were indirectly involved in using Omega sprinklers, including GSA, the Consumer Product Safety Commission, Occupational Safety and Health Administration, National Institute of Standards and Technology, Architect of the Capitol, Smithsonian Institution, and Department of Veterans Affairs. We also contacted officials from various laboratories that had tested Omega sprinklers, including Underwriters Laboratories, Factory Mutual, and the Southwest Research Institute. We also interviewed officials from the Marriott Corporation, which, along with Fairfax County, had publicized the problems associated with the sprinklers. As requested, we also reviewed concerns about the possible flammability of information technology equipment. In this regard, we inquired and obtained information about such factors as the types of flame retardants currently used in the casings of information technology equipment and concerns about the environmental and health impacts of these substances, the standards used to mitigate the flammability of information technology equipment, and the tests used to determine the flammability of this equipment. Our sources of information were the American Chemistry Council; the Great Lakes Chemistry Council; the Information Technology Industry Council; the National Association of State Fire Marshals; SP (a private testing laboratory in Sweden); the National Fire Protection Association; Underwriters Laboratories; and federal agencies, including the U.S. Consumer Product Safety Commission and the U.S. Department of Commerce’s National Institute of Standards and Technology. We conducted our work from December 2000 through August 2001 in accordance with generally accepted government auditing standards.
Developing fire protection standards and testing products against them are critical to promoting fire safety. Business offices, including federal facilities, experience thousands of fires, more than $100 million in property losses, and dozens of casualties each year. Knowing the number and types of fires in the workplace, as well as their causes, is critical to understanding and reducing fire risks. Some private-sector groups track the number and causes of fires in different types of buildings. Such information is used to manage risk and reduce property damage, injuries, and deaths. However, the federal government collects little information on the fire risks in its facilities. As a result, the federal government cannot provide standards-development organizations with timely information that could be used to develop or revise fire safety standards, testing procedures, and certification decisions. Collecting and analyzing such data would help the government to better protect its employees and would contribute to the production of better standards to protect the public from fire.
You are an expert at summarizing long articles. Proceed to summarize the following text: The telephone remains an essential communication tool for business, government, and the general public. The public switched telephone network (PSTN), an interconnected network of telephone exchanges over which telephone calls travel from person to person, is the backbone of the communications architecture that enables the transmission of voice and data communications. In general terms, the PSTN is the public communications system that includes the networks of local and long distance telephone carriers, as well as cellular networks and satellite systems. To connect one wireline (also known as landline) telephone to another, the telephone call is routed through various switches at telephone exchanges that are operated by local and long-distance telephone carriers. As a caller dials another party’s number, the transmission from one caller to the other is made through a telephone company’s facility, known as the central office, over copper wires or fiber-optic cables to the called party’s telephone. Over time, the PSTN has evolved from an analog system to one that is almost entirely digital and able to support voice and data transmissions made from wireline and wireless devices. Wireless networks, which include cellular and satellite-based systems, among other systems, are an important and growing element of the communications infrastructure. Cellular and satellite-based systems and networks provide an alternative to wireline networks because they are potentially accessible from any point on the globe without the cost of installing a wire or cable. Rather than relying on wired connections, wireless devices (such as cellular telephones) are essentially sophisticated radio devices that send and receive radio signals. These devices connect to a wireless network—which may also interact with the PSTN, depending on the type of connection—that enables the wireless telephone to connect to another wireless or wireline telephone. Wireless networks operate on a grid that divides large geographical areas (such as cities) into smaller cells that can range from a few city blocks to several miles. Each cell contains or is adjacent to a base station equipped with one or more antennas to receive and send radio signals to wireless devices within its coverage area, which can range from less than a mile to 20 miles from the base station. When a caller turns on a wireless device, the device searches for a signal on an available channel from a nearby base station to confirm that service is available. At that time, the base station assigns a radio frequency (also known as radio channels) to the wireless device from among the group of frequencies that the base station controls. Each base station is wirelessly linked to a mobile switching office, as well as a local wireline telephone network. The mobile phone switching office directs calls to the desired locations, whether to another wireless device or a traditional wireline telephone. If a wireless caller is connecting with another wireless telephone, the call may go through the wireline network to the recipient’s wireless carrier, or it may be routed wholly within the wireless network to the base station that is nearest the called party. On the other hand, when the wireless caller is connecting to a wireline phone, the call travels to the nearest base station and is switched by the caller’s wireless carriers to a wireline telephone network. 
The call then becomes like any other phone call and is directed over the PSTN to the destination number. Because both voice and data transmissions have become common functions in daily life, an effective communications infrastructure that includes voice and data networks is essential to the nation’s ability to maintain communications to enable public health and safety during a natural disaster, such as a hurricane, or a man-made disaster, such as a terrorist attack. Over the years, voice and data networks have evolved separately, with voice networks relying on circuit-switching methods while data networks largely use packet-switching techniques. Thus, a user requiring voice, data, and videoconferencing services may have to use three separate networks—a voice network, a data network, and a videoconferencing network. The telecommunications industry has begun to address the limitations of legacy communications infrastructure (such as the PSTN) to provide integrated voice, data, and video services. Technological advances in these networks have led to a convergence of the previously separate networks used to transmit voice and data communications. These new converged networks—commonly referred to as next-generation networks—are capable of transmitting both voice and data on a single network and eventually are to be the primary means for voice and data transmissions. Converged voice and data networks use technology that is based on packet switching, which involves breaking a message (such as an ongoing videoconference, images, or voice conversation) into packets, or small chunks of data. Using the packet’s destination address, computer systems called routers determine the optimal path for the packets to reach their destination, where they are recombined to form the original message. In doing so, packets can be transmitted over multiple routes rather than via a predetermined circuit, which, in turn, can help to avoid areas that may be congested or damaged, among other things. For example, information sent over the Internet is packet-switched, the transmission of which is defined by Internet protocol (IP). Wireline and wireless carriers have begun transforming their networks to route voice traffic this way, a method called Voice over Internet Protocol (VoIP), rather than by circuit-switched methods. The adoption of VoIP and other technological advances is changing the way in which people communicate and, as a result, these technologies are likely to become central to the future of NS/EP communications. Figure 1 shows a comparison between how information is transmitted via packet switching versus circuit switching. Industry analysts have said that although the transition to converged networks is well underway, they expect the process to take many years. Furthermore, NCS projects that half of the existing circuit-switched network will be transitioned to a packet-based network by 2015, with the remainder reaching full transition by 2025. Despite the evolution in telecommunications technology, congestion in the wireline and wireless telephone networks occurs. Damage or destruction of infrastructure, or extreme demand for service, can result in outages or congestion on the wireline and wireless networks, which can impede or obstruct successful communications. During periods of congestion, the caller may encounter signs that the network is congested, such as (1) a fast busy signal and (2) a prerecorded message alerting the caller that all circuits are busy. 
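To make the packet-switching concept described above more concrete, the following short Python sketch shows a message being broken into sequence-numbered packets, each packet being routed around a congested path, and the receiver reassembling the original message. It is an illustration only, not code drawn from this report or from any carrier's network; the path names, destination address, and chunk size are hypothetical.

```python
# Minimal sketch of packet switching as described above: split a message into
# packets, route each packet around congested links, then reassemble in order.
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int          # position of this chunk in the original message
    dest: str         # destination address that routers use
    payload: str      # small chunk of the message

def packetize(message: str, dest: str, chunk_size: int = 8) -> list[Packet]:
    """Break a message into fixed-size chunks, one packet per chunk."""
    return [Packet(seq=i, dest=dest, payload=message[i:i + chunk_size])
            for i in range(0, len(message), chunk_size)]

def route(packet: Packet, paths: list[str], congested: set[str]) -> str:
    """Pick any path toward the packet's destination that is not congested."""
    usable = [p for p in paths if p not in congested]
    return random.choice(usable) if usable else "dropped"

def reassemble(packets: list[Packet]) -> str:
    """Receiver puts chunks back in order regardless of arrival order."""
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

if __name__ == "__main__":
    paths = ["path-A", "path-B", "path-C"]   # hypothetical network routes
    congested = {"path-B"}                   # e.g., a damaged or busy link
    packets = packetize("voice or data sent as packets", dest="198.51.100.7")
    random.shuffle(packets)                  # packets can arrive out of order
    for p in packets:
        print(f"packet {p.seq} to {p.dest} -> {route(p, paths, congested)}")
    print("reassembled:", reassemble(packets))
```

Because each packet carries its own destination address and sequence number, individual packets can be steered around damaged or congested links and still be recombined in order at the destination, which is the property that distinguishes packet switching from a dedicated circuit.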
Given the importance of telecommunications to coordinating response and recovery efforts, it is essential that NS/EP officials successfully complete their calls even when there is damaged infrastructure or network congestion. For example, nationwide telecommunications congestion and failures during the September 11, 2001, attacks and Hurricane Katrina in 2005 were due, in part, to both damaged infrastructure and high call volume. Additionally, high call volume that has the potential to create network congestion can occur independent of emergencies. For example, Mother’s Day has historically generated the highest volume of telephone calls of any day of the year. This increased call volume can create network congestion and cause call delay or disruption during normal operations; this congestion would also reduce the likelihood NS/EP personnel would be able to successfully place calls in the event of an emergency during this period. A similar issue exists for text messaging, wherein high volumes of text transmissions can create network congestion. For instance, on New Year’s Eve, a spike in the number of text messages transmitted in the minutes immediately preceding and following midnight could overload cellular networks. The effects of this congestion could be severe for emergency responders in the event they needed to coordinate planning for or response to an emergency at that time. As part of the creation of DHS under the Homeland Security Act of 2002, NCS was transferred to DHS from the Department of Defense. Within DHS, NCS is organized as part of the Office of Cyber Security and Communications and has a fiscal year 2009 budget of $141 million. While the Secretary of Homeland Security has overall responsibility for the broader NCS organization, the duties are delegated to the NCS Manager who has primary responsibility for day-to-day activities of the NCS, including coordinating the planning and provisioning of communications services that support NS/EP needs. Central to its functions are the partnerships that NCS has established with federal, state, and local government entities, and with the service providers and equipment vendors that provide wireline and wireless communications services to support NS/EP communications. For example, NCS has long-standing relationships with industry groups such as the National Security Telecommunications Advisory Committee (NSTAC)—a presidentially appointed committee of industry leaders—that help keep it abreast of changes in the commercial telecommunications marketplace. The committee provides industry-based analyses and recommendations to the President and executive branch regarding telecommunications policy and proposals for enhancing national security and emergency preparedness. Since joining DHS when DHS became operational in March 2003, federal policies provided that NCS’s responsibilities include, among other things, serving as the lead coordinating agency for communications issues (defined as emergency support function no. 2, or ESF-2), under the National Response Framework. As part of this responsibility, when significant impact to the communications infrastructure occurs or is expected, NCS is to serve as one of the primary agencies to (1) support the restoration of the communications infrastructure and (2) coordinate the deployment of federal communications support to response efforts. 
As part of its ESF-2 role, NCS conducts and/or supports training and exercises intended to test and improve response and recovery capabilities needed in the event of an emergency or disaster. For example, NCS has supported exercises that model emergency scenarios that include potential and actual impacts to the communications infrastructure. In addition to its ESF-2 responsibilities, NCS serves as the Sector-Specific Agency to lead the federal government’s efforts to protect critical communications infrastructure. In this regard, NCS works with industry that owns and operates the vast majority of communications infrastructure to develop strategies to protect against and mitigate the effects of natural disasters or manmade attacks against critical communications infrastructure. As part of this function, NCS is working with industry to develop a risk assessment methodology for use in assessing the communications sector’s overall exposure including the threats, vulnerabilities, and consequences of an incident such as a natural disaster or man-made attack. Within NCS, the National Coordinating Center for Telecommunications (NCC), which serves as the operational component, is an industry- government collaborative body that coordinates the restoration and provisioning of NS/EP communications services during crises or emergencies. The NCC consists of officials from 24 government agencies and 49 companies including eight industry members that are co-located at the center (such as AT&T, Sprint, and Verizon) as well as nonresident members that comprise the telecommunications sector—wireless companies, cable companies, internet service providers, satellite providers, and communications equipment manufacturers and suppliers, among others. Since January 2000, the center also functions as the Telecommunications Information Sharing and Analysis Center to allow information sharing between representatives of the telecommunications companies. During a disruption to telecommunications services, the NCS, through the NCC, coordinates with both resident and nonresident members with the goal of restoring service as soon as possible. According to NCS, this partnership allows both industry and government to work in close proximity, helping to ensure that NCS successfully executes its mission. For example, during the 2008 hurricane season, the NCC worked with its government and industry partners to identify communications assets and infrastructure in the impacted areas and develop pre- and post- landfall strategies and response activities to help ensure availability of communications. In order to overcome network congestion, NCS has implemented priority calling programs to provide NS/EP personnel within all levels of government, as well as the private and non-profit sectors, with communications services during incidents of national security or emergency that can overwhelm the telecommunications network. The two primary programs NCS provides to deliver priority calling are the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). NCS has undertaken a number of outreach efforts to help increase participation in these priority calling programs and has designed controls to help ensure the use of these programs is only for authorized personnel and purposes. 
NCS has implemented two main programs intended to overcome busy networks during periods of congestion or network failure due to abnormally high usage or infrastructure damage; the GETS program provides wireline priority calling, and WPS provides wireless priority calling for authorized NS/EP officials. According to NCS, it established GETS in conjunction with the nation’s telecommunications industry to meet White House requirements for a nationwide voice and limited data service intended for authorized personnel engaged in NS/EP missions. GETS is designed to provide priority treatment in the wireline portions of the PSTN during an emergency or crisis situation when the PSTN is congested and the probability of completing a call by normal means has been significantly decreased. For example, during the 1995 Oklahoma City Bombing—one of the earliest uses of GETS in an emergency event—a high call volume of three times more than the usual volume resulted in an overload of the telephone network in the Oklahoma City area, according to NCS. During this emergency event, officials from the federal government and the private sector were able to successfully complete about 300 calls using the GETS service. According to a senior official from the Florida Division of Emergency Management, GETS was also used in Florida during Hurricane Katrina. Prior to hitting the Gulf Coast, the hurricane made landfall in South Florida, damaging the communications infrastructure and resulting in network congestion that prevented Florida emergency management officials from completing calls. According to this official, GETS allowed Florida emergency management officials to circumvent the congested lines and successfully complete calls. To activate a GETS call, subscribers follow a three-step process similar to that of using a traditional calling card. First, subscribers must dial the universal access number by using equipment such as a standard desk phone, payphone, secure telephone, cellular phone, VoIP telephone, or facsimile. Next, a tone prompts the subscriber to enter their GETS personal identification number (PIN) found on the calling card distributed to the subscriber. (Figure 2 shows the GETS calling card that is provided to each authorized NS/EP subscriber.) Lastly, the subscriber is prompted to enter a destination telephone number. Once the calling party’s identity is authenticated (via the PIN), the call receives priority treatment that increases the probability of call completion in damaged or congested networks. GETS is designed to achieve a probability that 90 percent of calls made via the PSTN will be successfully completed—that is, establish a connection with the intended called party—during periods of network congestion or outage. The service achieves a high probability of call completion through a combination of features such as re-routing GETS calls around network blockage areas, routing calls to a second or third carrier if the first carrier’s network is congested, and queuing pending GETS calls for up to 30 seconds, among other things. Subscribers can place local, long distance, and international calls; however, it is not possible to use GETS to dial a toll-free destination number. When using GETS, subscribers are billed by the wireline carrier at a rate of $0.07 to $0.10 per minute for calls within the United States and its territories. As of April 2009, the program had grown to more than 227,000 subscribers, according to NCS. 
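The role that alternate carrier routing plays in GETS's high probability of call completion can be illustrated with a simple calculation. The sketch below assumes, purely for illustration, that a single congested carrier completes a call 55 percent of the time and that attempts on different carriers succeed or fail independently; it is not NCS's engineering model, and the 90 percent figure cited above is the program's design objective rather than an output of this sketch.

```python
# Rough illustration of why routing a GETS call to a second or third carrier
# raises the chance of completion during congestion.  The per-carrier success
# rate below is an assumed value used only to show the arithmetic.

def completion_probability(per_carrier_success: float, carriers_tried: int) -> float:
    """Probability that at least one attempted carrier completes the call,
    assuming independent attempts with the same success probability."""
    return 1.0 - (1.0 - per_carrier_success) ** carriers_tried

if __name__ == "__main__":
    assumed_success = 0.55   # assumed chance one congested carrier completes the call
    for carriers in (1, 2, 3):
        p = completion_probability(assumed_success, carriers)
        print(f"carriers tried: {carriers}  ->  completion probability: {p:.2f}")
    # With these assumptions, adding a second and third carrier raises the
    # probability from 0.55 to about 0.80 and 0.91, respectively.
```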
As significant increases in wireless telephone subscribers occurred in the mid-1990s, the concept for a wireless priority capability first emerged, according to NCS; however, it was in the wake of the events of Tuesday, September 11, 2001, that the Executive Office of the President, through the National Security Council, directed NCS to implement a wireless priority capability. According to NCS, in the aftermath of the terrorist attacks, wireless carriers experienced significant difficulties trying to cope with the unprecedented call volume. The reported increase in the number of phone calls in the Washington, D.C., New Jersey, and New York City areas made between 9:00 a.m. and 12:00 p.m. was 2 to 10 times the number on an average Tuesday. The resulting effort became WPS, which is a subscription-based service designed to help increase the probability of call completion for NS/EP personnel that rely on wireless devices—typically, a cell phone—while performing duties related to emergency response and recovery. To that end, WPS provides nationwide wireless priority calling capabilities, from call initiation through to when a connection is established with the called party, to NS/EP personnel during natural or man-made disasters or emergencies that result in network congestion or outages in the nation’s wireless networks. Like the average U.S. consumer, NS/EP personnel have great flexibility in choosing a wireless carrier for wireless communications services. In order to assure that WPS capabilities are accessible by the majority of wireless services that could be used by NS/EP personnel, NCS has taken steps to ensure that the nationwide and regional wireless carriers that provide services to the greatest number of wireless customers upgrade their networks to support WPS functionalities. As a result, authorized WPS subscribers are able to access WPS in nearly all the major wireless markets in the continental United States and its territories. Currently, WPS is supported by all the nationwide wireless carriers (AT&T, Sprint Nextel, T-Mobile, and Verizon Wireless). Additionally, regional carriers (such as Cellcom and Cellular South) that can help to provide WPS coverage in geographically remote or sparsely populated areas are at varying stages of updating their networks to support WPS. To initiate a WPS call, authorized subscribers must dial *272 plus the destination number from their WPS-enabled cell phone. If all radio channels in the caller’s area are busy, the call will be placed in queue for up to 28 seconds for access to the next available local radio channel. WPS subscribers receive additional priority based on their office or position to ensure that communications are first available for senior leadership (see app. V for a description of how this priority is determined). While WPS provides priority access to the next available radio channel, it does not guarantee call completion as a WPS call may encounter further congestion while being routed through the wireline or wireless portions of the PSTN. Therefore, according to NCS, WPS is most effective when used in conjunction with GETS because GETS is also designed to help activate priority calling features in the wireless network in addition to the wireline network. Thus, using a GETS calling card after activating WPS can help to ensure a higher probability of call completion for calls placed from a cellular telephone to another cellular or wireline telephone number. 
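The queuing behavior described above can be sketched as a small priority queue: pending WPS calls wait for the next free radio channel, more senior subscribers are served first, and a call that waits longer than the 28-second limit leaves the queue. The priority values, caller labels, and timings below are hypothetical, and the sketch deliberately ignores details such as the actual NS/EP priority levels described in appendix V and carrier-specific channel assignment.

```python
# Simplified sketch of WPS-style queuing: calls wait up to 28 seconds for the
# next free radio channel, and higher-priority subscribers are served first.
import heapq

QUEUE_LIMIT_SECONDS = 28  # per the WPS description above

def assign_channels(waiting_calls, free_channel_times):
    """waiting_calls: list of (priority, arrival_time, caller); a lower priority
    value means a more senior subscriber.  free_channel_times: times at which a
    radio channel becomes available.  Returns (assignments, dropped)."""
    heap = list(waiting_calls)
    heapq.heapify(heap)                      # ordered by priority, then arrival time
    assignments, dropped = [], []
    for channel_time in sorted(free_channel_times):
        # discard the caller at the front of the queue if its 28-second window expired
        while heap and channel_time - heap[0][1] > QUEUE_LIMIT_SECONDS:
            dropped.append(heapq.heappop(heap)[2])
        if heap:
            assignments.append((heapq.heappop(heap)[2], channel_time))
    dropped.extend(call[2] for call in heap)  # still waiting when channels run out
    return assignments, dropped

if __name__ == "__main__":
    calls = [(1, 0, "national leadership"), (3, 2, "state emergency manager"),
             (5, 1, "utility restoration crew")]
    served, dropped = assign_channels(calls, free_channel_times=[10, 25, 40])
    print("served: ", served)
    print("dropped:", dropped)
```

Even in this simplified form, the sketch shows why WPS improves access to the radio channel without guaranteeing end-to-end completion: a lower-priority caller can still time out when channels remain scarce, which is consistent with the report's point that WPS is most effective when paired with GETS.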
As with GETS, WPS subscribers incur expenses as part of their subscription; however, the WPS fee structure is more expensive. In addition to wireless calling plan fees, WPS subscribers must pay (1) a one- time activation fee of up to $10.00, (2) a monthly service fee of up to $4.50, and (3) a $0.75 per minute fee when WPS is invoked by dialing the WPS code, *272. These fees help wireless carriers to recoup the costs associated with providing NS/EP calling features in their respective wireless networks, according to NCS. As of April 2009, there are approximately 93,000 WPS subscribers, according to NCS. NCS priority calling programs are primarily intended for officials with responsibilities for coordinating the functions critical to the planning, management, and response to national security and emergency situations—particularly during the first 24 to 72 hours following an emergency. According to NCS, participants in its priority programs come from federal, state, local, or tribal government, and private industry or nonprofit organizations. In order to subscribe to GETS and WPS, applicants must prove that their organization is engaged in activities essential to NS/EP functions including (1) national security leadership; (2) national security posture and U.S. population attack warning; (3) public health, safety, and maintenance of law and order; (4) public welfare and maintenance of national economic posture; and (5) disaster recovery. Furthermore, these individuals must demonstrate that they perform a function that is critical to the planning, management, and response to national security and emergency situations. At the federal government level, personnel that qualify to subscribe to the GETS and WPS service range from staff in the Executive Office of the President to members of Congress and officials in federal departments and agencies. Nonfederal representatives such as state governors, mayors, police and fire chiefs, as well as personnel engaged in restoration of services such as telecommunications and electricity, are among those who can qualify to use the priority calling programs. Appendix V provides further details about the types of positions and functions that generally qualify for access to the GETS and WPS programs. According to NCS, the number of personnel in the public and private sectors that perform functions critical to national security and emergency preparedness range from about 2 to 10 million people. In planning for future growth in its programs, NCS estimates that the communications network can successfully support up to 2 million priority subscribers. To that end, NCS has plans underway to achieve up to 2 million GETS subscribers. NCS officials have not yet finalized this goal or a goal for WPS subscribers but indicated that the WPS goal may be about 225,000 subscribers. As of April 2009, NCS has 227,614 active subscribers in the GETS program. For WPS, there were 92,820 active subscribers. As table 1 shows, the federal government accounts for about 46 percent of active GETS subscribers and 72 percent of active WPS subscribers. NCS has undertaken several outreach efforts to help increase awareness of and participation in its priority calling programs across essential NS/EP personnel. These efforts include, for example, attending emergency management conferences, writing articles for emergency management and telecommunications publications, as well as deploying outreach coordinators to promote NCS’s priority calling programs. 
For example, since 1995, NCS has participated in various conferences hosted by the National Emergency Management Association (NEMA) and the International Association of Emergency Managers to facilitate its outreach and marketing efforts. At these conferences, NCS operates display booths, distributes marketing materials, and may conduct presentations to help increase awareness about the benefits of its priority calling programs. NCS officials and/or contract personnel attend approximately 30 conferences annually that target federal, state, local, and industry NS/EP members. NCS officials told us that NCS has enlisted all but 1 of the 50 state emergency operations centers to participate in GETS and/or WPS because of initial contacts made at events hosted by NEMA. Similarly, to expand its outreach to other essential emergency personnel who also rely on wireline and wireless communications services during emergencies, such as those from water, gas, and electric companies, NCS has attended conferences and other events that attract this target audience.

In addition to attending conferences to reach general NS/EP personnel, NCS has implemented targeted outreach efforts to groups such as governors and state homeland security advisors; critical infrastructure facilities, such as nuclear power plant operations centers and national and regional airport traffic control centers; and federal officials who serve as the designated continuity coordinator within their respective agencies. NCS officials report that they have generally made progress in enlisting these groups in NCS's priority calling programs. For example, in 2008 NCS enlisted 56 of 71 federal continuity coordinators in the GETS program. NCS also worked with the Nuclear Regulatory Commission and the Federal Aviation Administration to ensure that GETS cards are available at all nuclear facilities and at all national and regional airports, respectively. In 2005, NCS began deploying regional outreach coordinators to promote NCS's priority calling programs to emergency management officials and other key decision makers (such as governors) who coordinate emergency response and recovery and continuity of government at the state and local levels. NCS credits the addition of the regional outreach coordinators as a key reason for significant growth in enrollment rates across all NS/EP categories since 2005.

Despite the outreach efforts NCS has undertaken to increase participation in its priority calling programs, WPS fees are a barrier to participation in the program, according to NCS. For example, as of October 2008, while the majority of federal continuity coordinators had enrolled in the GETS program, only 44 percent (31 of 71) of federal continuity coordinators were WPS subscribers. Additionally, while 24 of 56 state homeland security advisors subscribed to GETS, only 10 subscribed to WPS, and only 8 governors subscribed to WPS while 43 subscribed to GETS. The subscriber levels for the GETS program are more than twice those of the WPS program, as shown in table 2. For each WPS-activated device, subscribers pay an initial activation fee of $10, a monthly fee of $4.50, as well as a usage fee of $0.75 per minute. In 2006, NCS commissioned a study to examine barriers to WPS participation, among other things. According to NCS, the survey found that program cost was the single largest impediment to participating in WPS.
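To illustrate how the fee structure described above can add up for a budget-constrained organization, the short calculation below applies the published WPS fees (a $10 one-time activation fee, $4.50 per month, and $0.75 per minute when *272 is dialed) to a hypothetical agency; the device count and minutes of priority calling are illustrative assumptions, not figures reported by NCS.

```python
# Rough first-year WPS cost estimate for a hypothetical agency.
# Fee structure as described above; usage figures are illustrative assumptions.
ACTIVATION_FEE = 10.00      # one-time, per WPS-activated device
MONTHLY_FEE = 4.50          # per device, per month
PER_MINUTE_FEE = 0.75       # charged only for calls placed with the *272 prefix

def first_year_wps_cost(devices: int, priority_minutes_per_device: float) -> float:
    """Estimate first-year WPS fees, excluding the underlying wireless calling plan."""
    fixed = devices * (ACTIVATION_FEE + 12 * MONTHLY_FEE)
    usage = devices * priority_minutes_per_device * PER_MINUTE_FEE
    return fixed + usage

# Example: 50 WPS-enabled devices, each averaging 30 minutes of priority calling per year.
print(first_year_wps_cost(devices=50, priority_minutes_per_device=30))  # 4325.0
```

Even at modest usage, the recurring monthly fee accounts for most of the total in this example, which is consistent with the cost concerns raised by the state and local officials discussed below.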
Similarly, our work showed that WPS fees can be a burden, particularly for NS/EP members at the state and local government level, because of limited financial resources. At least one-third of the 37 state and local government entities that we spoke with—including some who subscribe to WPS—stated that WPS fees affected the extent to which they participate in the program. For example, an official from the Oregon Emergency Management Division stated that his organization's participation in WPS is relatively low because overall WPS costs can become substantial when calculated across all subscribers in a particular agency. An official from the Ohio Emergency Management Division stated that his organization does not participate in the program because of budget constraints, even though it considers WPS to be more beneficial than GETS because the wireless component is more widely used among staff performing emergency management functions.

In light of concerns about WPS subscription costs, NCS has been exploring ways to minimize the burden of program fees for its intended subscribers. For example, NCS examined the feasibility of the federal government subsidizing all or part of the WPS fees; however, DHS and OMB determined that this may not be feasible because of questions about the federal government's ability to sustain these costs in the future. Further, NCS has had discussions with the wireless carriers to explore ways to eliminate or defray the costs; however, the wireless carriers maintain that the fees are necessary to operate and maintain WPS capabilities in their networks in order to comply with the NCS requirements. Nevertheless, some carriers have made arrangements with WPS subscribers to provide WPS as part of a bundled telecommunications service package, which, according to NCS, can defray the costs. NCS officials have stated that they plan to continue to explore ways to address the WPS cost issue, as they believe doing so can help increase participation in the WPS program.

Federal internal control standards state that documented policies and procedures limiting access to agency resources and records to authorized individuals are essential to accountability and to safeguarding assets, and NCS has developed and implemented policies and procedures to help ensure that access to its programs is limited to authorized subscribers. NCS has standard operating procedures that document how potential subscribers can gain access to its priority calling programs. For a GETS card and/or WPS service request to be approved, the NCS contractor must be able to confirm that the request comes from an organization that performs any of the five NS/EP functions mentioned earlier in this report. If an organization's NS/EP status is unclear (as may be the case for chemical suppliers, radio and TV stations, or housing shelters), the organization must obtain sponsorship from NCS, from 1 of the 24 NCS member agencies, or through the emergency management agency in the state or locality in which it operates. Once approved, the organization must identify a primary point of contact (POC) and, if available, an alternate POC. Within each organization, the POC is the primary liaison between NCS and individual GETS and WPS subscribers.
The POC is responsible for (1) determining who should have access to the GETS and WPS services within the organization; (2) processing all GETS and WPS service requests; (3) notifying NCS of changes to subscriber account data, such as changes in name, telephone number, or eligibility status; (4) reviewing and certifying monthly subscriber calling data; (5) familiarizing subscribers with GETS and WPS functionalities; and (6) annually verifying subscriber eligibility. As evidenced by these responsibilities, NCS relies on the POCs to manage almost all aspects of subscriber accounts. However, through an annual verification process, NCS seeks to ensure that POCs provide a current account of subscribers who meet the eligibility requirements. According to NCS officials, NCS will make multiple attempts over a 90-day period to ensure that the POC responds to its request to validate subscriber information, and failure to respond can result in cancellation of the subscribers' accounts. NCS officials told us that they designed these verification procedures to help ensure that only eligible subscribers have access to NCS's priority programs. From our review of selected GETS and WPS records, conducted as a limited check on whether current positions meet eligibility criteria, we found that the GETS and/or WPS accounts for former members and delegates of the U.S. House of Representatives and the U.S. Senate in the 109th Congress were terminated in accordance with NCS's procedures. However, when we reviewed accounts for 15 immediate past heads of federal departments and agencies as of August 2008, we found that in 4 of the 15 instances these officials' GETS and/or WPS accounts had not been terminated. We brought this to NCS's attention, and officials told us that these accounts were terminated effective July 2009. Further, NCS plans to institute new processes that are to include more frequent monitoring of GETS and WPS accounts, coinciding with administration changes, to ensure that subscribers' account status is appropriately updated.

In addition to verifying whether a subscriber is authorized to enroll in NCS's programs, telephone carriers, as well as NCS and its contractors, have applied fraud detection mechanisms intended to protect against fraudulent calls in their networks, as well as others that are unique to the GETS and WPS services. For example, carriers have fraud detection mechanisms for general telephone use that also detect fraud involving GETS and WPS services. These mechanisms include detection of a single PIN being used simultaneously from multiple originating phone numbers and of calls of long duration, among other things. NCS and its contractor said that they have also instituted procedures to determine the legitimacy of calls and to take corrective action, which may include disabling the GETS and WPS account in question. According to NCS, it has rarely found actual cases of fraud and abuse. For example, although there were 45 reported cases of potentially fraudulent calls in 2008, through further investigation NCS determined that the calls were legitimate and that the reports typically resulted from calls placed by authorized subscribers conducting test calls or participating in preparedness exercises. Even if fraudulent calls were made using GETS and WPS services, the implications would likely be minimal due to two factors. First, the subscriber levels for GETS and WPS, which currently stand at more than 227,000 and about 93,000, respectively, are well below the capacity of the system.
For example, according to NCS, the GETS system was designed to support up to 2 million subscribers; however, the current subscriber level—227,000 GETS subscribers—is well below the intended capacity. Second, the potential financial implications for the federal government would be nominal, as NCS does not bear the costs for GETS and WPS charges for nonfederal subscribers. State and local governments, as well as private and nonprofit organizations, bear all of the costs related to their usage of the GETS and WPS programs. In general, NCS may cover GETS charges for federal departments and agencies up to an annual budget threshold; however, federal agencies may be responsible for these costs in the event of fraudulent or abusive calling activity. Federal and nonfederal WPS subscribers are responsible for all associated costs.

The delivery of NCS's priority calling services faces challenges related to the inherent vulnerabilities of the communications infrastructure, such as downed phone lines, damaged cell towers, and broken circuits and switches. Therefore, NCS seeks to build redundancy into the communication capabilities and services it provides and has explored satellite technology to overcome such challenges. However, NCS's methods for implementing and evaluating its related satellite pilot were unclear, and NCS subsequently terminated the pilot. In addition, NCS faces the challenge of keeping pace with the rapid evolution in telecommunications technology, and it is working with the telecommunications industry to ensure that NS/EP communications requirements are integrated into next-generation communications networks. However, NCS's planning efforts to update its programs as technology evolves could be strengthened.

In December 2007, NCS launched a satellite pilot program to provide an alternative means of supporting NS/EP communications to help circumvent network congestion or outages in the PSTN. According to NCS, because GETS and WPS leverage PSTN-based infrastructure to enable communications for NS/EP personnel, these programs can be limited in their ability to provide services when damage renders the PSTN infrastructure inoperable, as it did in certain regions affected by Hurricane Katrina. In February 2004, the National Security Telecommunications Advisory Committee (NSTAC) issued a report to the Executive Office of the President recommending that NCS develop a satellite capability to facilitate NS/EP communications. The communications challenges that arose during the 2005 Gulf Coast hurricanes due to flooding and loss of power, among other things, underscored the need for a communications capability that could transcend these infrastructure issues, and NCS observed that satellite networks appeared to be the least disrupted communications service during this event. To that end, 3 years following the 2005 Gulf Coast hurricanes, NCS launched the first of two phases of the satellite pilot program, which was intended to enable unclassified voice connectivity during emergencies by leveraging satellite infrastructure independent of the PSTN. As part of the pilot, according to NCS officials, NCS was to provide participants with a wall-mounted unit consisting of battery backup and surge protection, as well as a satellite phone. According to NCS officials, one objective of the pilot was to evaluate two voice communications capabilities via satellite technologies: push-to-talk communication functions and GETS priority calling using a satellite phone.
Push-to-talk is a radio-like function, similar to that of a walkie-talkie or two-way radio, with which a group of users can communicate back and forth with each other from their individual satellite phones at the push of a button without having to make individual calls. NCS also planned to use the pilot to test the ability to use GETS priority calling features to call a wireline or cellular telephone number from a satellite phone. According to NCS, calls made from a satellite phone to a cellular or wireline telephone can bypass congested or damaged areas of the PSTN, as such calls can be routed via satellite networks to a less congested area of the PSTN, thus increasing the likelihood of call completion. However, because these calls are still expected to travel through the wireline and wireless portions of the PSTN to reach their destination, they could face congestion while trying to connect to the PSTN. NCS officials stated that, to bypass such congestion, GETS priority calling features must be supported on satellite networks, which they currently are not. With priority calling functionality inserted in satellite networks, GETS calls that originate from a satellite phone would have a greater likelihood of being successfully routed through the PSTN in times of network congestion. NCS officials also told us that other objectives for the pilot included determining the extent to which satellite communications meet NS/EP needs and educating NS/EP personnel about the availability of satellite communications for use in emergency situations.

Although the pilot began in December 2007 and was estimated to last 3 years and cost $1.9 million, as of May 2009 NCS could provide little documentation explaining its objectives for the pilot and how it planned to meet those objectives. For example, while NCS officials provided briefing slides to elaborate on the pilot program and describe some high-level program objectives, these slides lacked key program information, such as a methodology for evaluating pilot results to determine whether the intended pilot objectives were met and milestones for pilot implementation. Specifically, although the briefing slides noted the planned number of sites to be included in the pilot, they did not specify when the site selection would be completed, when sites would begin participating in the pilot, or the data that would be collected and analyzed to evaluate pilot performance. According to NCS, the pilot was to include up to 65 participating sites comprising emergency operations centers supporting federal and state governments, and NCS officials stated they had initially identified six sites and conducted an evaluation of additional candidate sites. However, NCS officials could not provide any detailed information about the criteria or rationale used to determine which sites to include in the pilot. For instance, while NCS officials told us they evaluated sites based on two factors (effects of disaster scenarios and population served by the respective location), they did not provide any documentation that outlined these details or demonstrated how these two factors would help NCS determine whether the pilot objectives were met.
In addition, as part of phase two of the satellite pilot, NCS officials said they intended to use lessons learned from phase one of the pilot to migrate the satellite capability to another NCS technology initiative already underway; however, NCS launched the pilot program without the benefit of completing a methodology to evaluate the pilot. Further, NCS could not provide documentation as to how the results of the pilot would be evaluated and used to inform future program decisions, such as a future rollout. Exacerbating the absence of program planning documents, key staff originally involved in the pilot have since left NCS, resulting in the loss of institutional knowledge about the original decisions and planning for the pilot. In April 2009, officials told us that the pilot had been placed on hold as they were reassessing various aspects of the pilot, such as conducting a cost-benefit analysis to determine which satellite provider and equipment to use. After reassessing the pilot, NCS terminated it in May 2009, according to NCS officials. NCS officials acknowledged that the pilot program needed improved planning and metrics documentation and noted that NCS took a number of issues into consideration, including the current availability of push-to-talk capability among existing satellite service providers, in making the decision to end the pilot.

NCS is mandated by presidential directive to support the use of technological advances and evolutionary communications networks for NS/EP communications functions assigned to NCS, including programs it provides to maintain continuity of communications. GETS and WPS are designed to operate on the circuit-based PSTN platform, while packet-based IP networks are increasingly used and expected to eclipse the use of circuits in telecommunications, according to representatives from the telecommunications industry. As a result, NCS and its GETS and WPS subscribers face the risk that these services will not work within these next-generation networks. To avoid disruption or degradation of service, NCS plans to migrate existing GETS and WPS priority calling features from circuit-based networks to packet-based public telephone networks to ensure that the programs will be operable on new technologies available from wireline and wireless carriers. NCS's efforts to integrate new and existing NS/EP services into next-generation networks (NS/EP NGN) consist of two primary components: (1) priority voice communications and (2) priority data communications, which includes priority treatment for the transmission of e-mail, streaming video, text messaging, and Internet access, among other things. NCS has taken steps to assess how the evolution of technology will affect the provision of its priority calling services and to plan for these changes. In addition, because NCS's programs are largely dependent on the telecommunications industry, which owns and operates most of the communications infrastructure on which GETS and WPS operate, NCS has partnered with industry to inform and implement these changes. According to NCS, adding the priority voice communications component of NS/EP NGN is less challenging than adding data services because, while priority calling programs exist (GETS and WPS), priority data programs do not. NCS officials estimate that at least one of the three major carriers (AT&T) will begin supporting priority communications via VoIP by 2010 and the remaining carriers (Sprint and Verizon) by 2014.
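The report does not describe the specific mechanisms carriers will use to carry priority NS/EP traffic over IP networks. As a generic illustration of the kind of packet-level priority treatment IP networks can support, the sketch below marks a UDP socket's traffic with the Expedited Forwarding differentiated services code point (DSCP 46), a marking commonly associated with voice traffic; whether and how carriers honor such markings for NS/EP services is an assumption outside this sketch, and the address and port shown are placeholders.

```python
import socket

DSCP_EF = 46                 # Expedited Forwarding, a DSCP value commonly used for voice
TOS_VALUE = DSCP_EF << 2     # DSCP occupies the upper 6 bits of the former IPv4 TOS byte

# IPv4 UDP socket with the DSCP marking applied (supported on most Unix-like platforms).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"illustrative voice payload", ("192.0.2.10", 5004))  # placeholder address and port

# IPv6 carries the same 6-bit DSCP field in its Traffic Class octet; where the platform
# exposes it, the analogous socket option is IPV6_TCLASS on an AF_INET6 socket.
```

Markings of this kind are only one ingredient of end-to-end priority; as the discussion that follows notes, authentication and standards for prioritized NS/EP data services remained open issues as of May 2009.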
However, less is known about supporting priority data communications and, consequently, this effort is more challenging, according to NCS officials. The challenge of developing priority data services is not a new one; in 2006 we reported that the obstacles to offering such a service include both technical and financial challenges. For example, the commonly used version of the Internet Protocol (known as IPv4) does not guarantee priority delivery and has certain security limitations that may not adequately protect information from being monitored or modified while in transit via the Internet. Though the next version (IPv6) has features that may help prioritize the delivery of data in the future and provide enhanced security, it is not yet widely adopted. Also, in March 2006, the NSTAC reported that while the NS/EP NGN initiative is expected to offer improvements for NS/EP communications, the security challenges are likely to have an operational impact on the transmission of NS/EP communications if not adequately addressed. Specifically, it noted that robust user authentication methods are needed in order to enable NS/EP personnel to share information in a secure manner. While these authentication methods are to be available through IPv6, they are not available through IPv4, which is the more widely used version. In April 2009, NCS officials told us they had not yet finalized which types of authentication methods or which IP version would support the NS/EP NGN, though they planned to request additional information from industry experts about how to address authentication issues. In our 2006 report, we noted that NCS had previously requested information from private companies on the potential for prioritizing services, and found that there was no offering for a priority service, nor was there any consensus on a standard approach to prioritization. Although NCS, in conjunction with international standards bodies, completed the first set of engineering standards for priority VoIP in December 2007, as of May 2009 standards had not yet been established to support prioritized NS/EP NGN data communications. Moreover, NCS could not provide further detail as to how its planning efforts account for the different capabilities of the available technology and the associated challenges.

In addition to not fully detailing how it plans to mitigate existing challenges, NCS also could not provide details about key program elements, such as the estimated total costs and a timeline for implementation of the NS/EP NGN initiative. Officials said the information was not yet finalized. Our previous work on acquisition and technology investment management has shown that undertaking such efforts is strengthened by first ensuring that (1) an acquisition approach, such as the one for NS/EP NGN, is based on available technologies that support the intended capability; (2) cost estimates are realistic; and (3) risks have been identified and analyzed, and corresponding mitigation plans have been developed. NCS officials told us they planned to develop program plans that included this information, but as of May 2009 these documents were in the early stages of development, and officials stated they were finalizing cost and schedule estimates for the initiative, which may be greater than previously projected.
In addition, for the last 2 years, Congress has raised questions about the absence of detailed program information, such as the costs of planned investments, for some of NCS's programs, and NCS has faced difficulties in justifying its budget requests. For example, during the appropriations process for fiscal years 2008 and 2009, the House and Senate Committees on Appropriations raised questions about the intended investments in NS/EP NGN. Because of the lack of explanation about the significant increase in funds requested for fiscal year 2008 compared to the previous year, the House and Senate Committees on Appropriations stated that NCS had not adequately justified funding for the NS/EP NGN effort. Consequently, Congress appropriated $21 million—about 60 percent less than requested—to DHS for NS/EP NGN. In addition, the House of Representatives Committee on Appropriations directed DHS to brief it on the planned expenditures for NS/EP NGN in fiscal year 2008. Again, for the fiscal year 2009 budget request for NS/EP NGN, the House of Representatives Committee on Appropriations raised questions about the lack of a thorough explanation of (1) planned investments, (2) how the initiative aligns with DHS's homeland security goals, and (3) the total costs to complete the initiative. As a result, Congress withheld half of the fiscal year 2009 funding for NS/EP NGN until NCS completes an expenditure plan, to be approved by the House and Senate Committees on Appropriations, that identifies the strategic context, specific goals and milestones, and planned investments. Although NCS had planned to submit the expenditure plan to the Committees on Appropriations in January 2009, it had not done so, and as of May 2009 the plan was still being reviewed internally.

Because of technological and planning challenges, NCS officials told us, NCS began taking steps in 2008 to restructure its acquisition approach to focus first on voice, with data to follow much later. However, as noted by Congress in its response to NCS's fiscal year 2009 budget request, little is known about this restructuring, including key program information such as what capabilities will be delivered, total costs, and milestones. Moreover, despite requirements from Congress to articulate its strategy for the NS/EP NGN initiative, as of May 2009 NCS had not yet clearly defined program objectives and total costs, among other things. While NCS officials told us that they expect increased costs and schedule delays, they have not provided any further details or plans to mitigate these challenges, and it is unclear when important technological and program details of the restructuring will be finalized. In February 2009, NCS hired a new manager whose responsibilities include NS/EP NGN; this official stated the need to plan for these issues and to develop corresponding program plans that outline the NS/EP NGN acquisition approach, including costs, milestones, and risk mitigation plans. GAO and commercial best practices show that incorporating cost information and strategies to mitigate program and technical challenges is essential to successfully meeting program objectives and minimizing the risk of cost overruns, schedule delays, and less than expected performance.
As NCS moves forward with the NS/EP NGN effort, clearly defining and documenting its technical approach to achieve program objectives within the constraints imposed by known challenges—such as the limitations of available technologies and NCS's dependence on the telecommunications industry—could help provide reasonable assurance that an executable approach is in place to meet current and future NS/EP communications needs. Furthermore, such planning could provide a sound basis for determining realistic cost and schedule estimates and provide key stakeholders such as Congress with information they need to make funding decisions over time.

NCS has been developing its strategic plan since 2007, and although officials have stated that a strategic plan could help inform their efforts, it has not been finalized. In addition, while NCS has generally linked the performance of its programs to broader agency and department goals, the performance of two of NCS's core responsibilities is not measured. Finally, focusing program evaluation efforts on outcomes, gauging progress, incorporating past performance, and improving clarity can strengthen the usefulness of NCS's performance measures.

NCS has undertaken strategic planning for its programs and documented some key elements of strategic planning—such as a statement of the agency's mission, strategic goals, and objectives—across a range of documents and sources. For example, the mission statement is documented in program documents such as NCS's Annual Reports, and NCS officials told us they have identified 21 strategic objectives that align with NCS's three strategic goals (information on the three strategic goals and some of the related objectives is shown in table 3). However, this information has not been incorporated into a strategic plan. Furthermore, NCS officials stated that these goals and objectives are being revised, but they did not provide a date by which the revisions would be finalized. Additionally, NCS's congressional budget justification documents for fiscal years 2007 through 2009 contain planned milestones and spending for various program initiatives.

In June 2008, we reported that efforts were under way to draft a strategic plan for the NCS, and recommended that DHS establish milestones for completing the development and implementation of the strategic plan. DHS agreed with our recommendation and stated that it was taking steps toward finalizing the strategic plan. However, as of April 2009, the plan, which had been in draft since mid-2007, had not yet been finalized, and NCS officials could not provide a date for when this would occur. A draft strategic plan for fiscal years 2007 to 2013 did not include some of the key elements associated with effective strategic plans. For example, while the plan included NCS's mission, strategic goals, and high-level objectives, it did not include a discussion of the resources needed to achieve these goals and objectives. Although NCS intends to enhance its priority communications offerings to keep pace with emerging technology (such as priority data in an IP environment), it has not yet finalized the total costs of doing so. In addition, the draft plan did not identify external factors that could affect achievement of strategic goals (such as management or technological challenges). Moreover, the plan did not articulate how current and planned initiatives such as the NS/EP NGN and the satellite pilot program fit into the broader agency goals.
Our past work has discussed the importance of strategic planning as the starting point for results-oriented management. Strategic plans are to articulate the mission of an organization or program and lay out its long-term goals and objectives for implementing that mission, including the resources needed to reach these goals. Leading management practices state that federal strategic plans include six key elements: (1) a comprehensive mission statement, (2) strategic goals and objectives, (3) strategies and the various resources needed to achieve the goals and objectives, (4) a description of the relationship between the strategic goals and objectives and performance goals, (5) an identification of key external factors that could significantly affect the achievement of strategic goals, and (6) a description of how program evaluations were used to develop or revise the goals and a schedule for future evaluations. As we have previously reported, strategic plans are strengthened when they include a discussion of management challenges facing the program that may threaten its ability to meet long-term strategic goals.

While NCS has completed some key aspects of strategic planning, critical elements such as the key external factors that could affect achievement of its mission—for example, challenges affecting the NS/EP NGN initiative—have not yet been documented, and NCS has not committed to incorporating these elements in its strategic plan. A strategic plan that captures these key elements in a centralized way would help inform stakeholders, such as departmental leadership, Congress, and the administration, about NCS's priorities and plans and assist stakeholders in making efficient and effective program, resource, and policy decisions. In addition, because NCS has experienced frequent turnover in leadership, such a plan would be beneficial for new agency management during transition periods. For example, since January 2007, there have been two directors and one acting director, as well as three different staff members serving as Chief of the Technology and Programs Branch—a position that oversees the day-to-day operations regarding NS/EP NGN, among other initiatives.

NCS has five performance measures, which relate to three aspects of GETS and WPS—the number of subscribers, priority call completion rates in emergencies, and the cost to support GETS and WPS subscribers. While NCS has not documented how its performance measures link to NCS's and DHS's strategic goals and objectives, we used various documents, such as DHS's fiscal year 2008 to 2013 strategic plan, to determine that NCS's five performance measures link to agency and department strategic goals and objectives (see figure 3, which illustrates the connection between DHS's mission and NCS's performance measures). For example, NCS's performance measure to track the call completion rate of priority calls is linked to its strategic goal of ensuring availability of communications as well as to DHS's strategic objective to ensure continuity of government communications. Consistent with our past work on performance management, linking performance measures with strategic goals and objectives in this way provides managers and staff with a roadmap that shows how their day-to-day activities contribute to achieving broader DHS and NCS goals.
While NCS’s performance measures generally link to overall goals and objectives, NCS’s performance measures focus exclusively on its priority calling programs, and NCS does not have measures to assess the performance of its other two primary responsibilities—serving as the ESF- 2 coordinator and the lead federal agency for critical infrastructure protection for the communications sector. Although NCS officials acknowledged that they do not have such measures and noted that they could be helpful, these officials did not commit to developing such measures. While we have previously reported that agencies do not need to develop performance measures that cover all of their activities, OMB requires that performance measures reflect a program’s mission and priorities. Furthermore, we have also reported that an agency’s performance measurement efforts are strengthened when they sufficiently cover its core activities. NCS’s critical infrastructure protection and ESF- 2 responsibilities are key components of the agency’s mission to help ensure that NS/EP communications are available during disasters or emergencies, and are articulated in NCS’s strategic goals (see table 3). For example, NCS, in conjunction with the telecommunication industry is responsible for conducting risk assessments of the nation’s critical communication infrastructure; according to Executive Order 13,231, as amended, communications infrastructure is critical not only to emergency preparedness, but all aspects of U.S. national security and economy. Without the benefit of performance measures that cover these functions, NCS may be limited in its ability to assess its overall effectiveness in meeting all three of its strategic goals. Moreover, developing performance measures for these mission-critical functions would help strengthen and inform future program and budget decisions, improve critical program activities, and as we have previously reported, help verify that NCS’s resources are being used responsibly. Of its five performance measures, NCS has identified two as outcome measures, two as output measures, and one as an efficiency measure (see table 4 for more information on each of these measures). While OMB guidance defines output measures (such as the number of products or services delivered) as a description of the level of activity provided over a period of time, it asserts program performance is most effectively measured by focusing on how those outputs support the achievement of desired outcomes—the intended results of carrying out a program or activity. NCS’s two output measures—the number of GETS subscribers and the number of WPS subscribers—could be strengthened to focus on outcomes, more effectively gauge progress toward achieving results, and set more reliable targets. In addition, one of NCS’s outcome measures, the call completion rate, does not clearly illustrate the measures’ intended purpose. OMB guidance emphasizes the use of outcome measures as a more meaningful indicator of performance and encourages agencies to translate existing measures that focus on outputs into outcome measures, or at least demonstrate that measured outputs would logically lead to intended outcomes. Currently, neither of NCS’s output measures fully demonstrates how it supports NCS in the achievement of the intended outcomes of the GETS and WPS programs, which, as articulated in one of NCS’s strategic goal, is to ensure the availability of communications capabilities for all NS/EP officials. 
For example, NCS told us that the long-term goal for the GETS program may be to reach 2 million subscribers; however, NCS has not demonstrated how reaching 2 million subscribers achieves the result of ensuring the availability of communications capabilities for NS/EP officials who could benefit from the use of the GETS service. According to NCS officials, NCS based this number on an internal study that identified 2 million subscribers as the capacity level that the PSTN can support. However, NCS could not provide a rationale as to how 2 million subscribers appropriately quantifies the population of NS/EP personnel critical to NCS achieving its desired results. Therefore, it is unclear whether achieving 2 million GETS subscribers means that all the NS/EP personnel who have the greatest need for access to priority calling capabilities are enlisted in the program, thereby enabling them to make calls that can help to coordinate planning for national security incidents and emergencies and facilitate continuity of government under these conditions—a key function of the GETS program. In addition, NCS officials have told us that the agency has an unofficial long-term goal of 225,000 subscribers for the WPS program. Although NCS officials noted that this number has not been finalized, the measure also does not portray how well, or whether, WPS is achieving its desired program outcome. Furthermore, NCS has not been able to provide information regarding how it developed this WPS subscriber goal or describe how it will do so in the future.

Our past work, along with federal guidance, has discussed the importance of using a series of output and outcome goals and measures to depict the complexity of the results that agencies seek to achieve. We recognize that it can be difficult to develop outcome goals and corresponding measures. Nonetheless, by further articulating how NCS's measures support the intended outcome articulated in its strategic goal—ensuring availability of communications for NS/EP functions—NCS and its stakeholders could more effectively gauge the extent to which subscriber levels in GETS and WPS reflect whether communications capabilities are available to all critical NS/EP personnel as intended. NCS's progress can be better measured through annual performance targets that track subscriber levels to demonstrate how overall subscriber goals for GETS and WPS lead to program outcomes. This would help to better illustrate NCS's annual progress toward achieving its desired results. Furthermore, although both of NCS's output measures reflect the number of subscribers in each program for a given year, the measures do not reflect whether NCS's annual achievements demonstrate significant or marginal progress toward reaching 2 million subscribers, and NCS has not defined a time by which it hopes to achieve this goal. In its GETS and WPS performance measures, NCS states annual results as an output of the number of subscribers in a particular year—for example, 208,600 GETS subscribers in fiscal year 2008. These output measures do not capture percentage increases in the number of subscribers from year to year to help measure performance changes in achieving any long-term goal for subscribers. According to OMB guidance, performance over time is to be expressed as a tangible, measurable objective, against which actual achievement can be compared, such as a quantitative standard, value, or rate.
For example, for NCS’s performance measure related to the percent of federal continuity coordinators with access to priority calling programs—NCS tracks change over time by showing a rate of annual progress toward enlisting these particular officials in the GETS and WPS programs. In doing so, NCS can provide insight as to the extent to which this group can successfully place calls to help facilitate continuity of government at the federal level—particularly in the event of network congestion during emergencies. Although NCS has reported ongoing or planned targeted outreach efforts to similar groups that play a leadership role in coordinating emergency response and continuity of government such as governors or mayors, they have not developed similar performance measures to track their annual progress in enlisting and maintaining these subscribers. NCS has not finalized its overall goal for the number of GETS and WPS subscribers or set a timeline for when it plans to achieve its unofficial goals for the number of GETS and WPS subscribers. Based on GETS enrollment levels over the last 3 fiscal years, at current rates NCS may not achieve its unofficial subscriber goals until somewhere between 2015 and 2047. OMB guidance states that performance goals are to be comprised not only of performance measures and targets, but also include time frames for achieving these goals. In addition, OMB guidance states that targets are to consider past performance, adjusted annually as conditions change, such as funding levels and legislative constraints. However, NCS did not consider past performance when setting annual performance targets for several of its performance measures. As a result, the targets are not ambitious or based on reliable baselines. For example, NCS did not modify its targets for the number of GETS subscribers for fiscal years 2007 and 2009 based on actual results achieved in the previous fiscal year. According to OMB performance guidance, baselines are the starting point from which gains are measured and targets set; and performance targets are to be ambitious. Our past work has also emphasized the importance of baselines and multiyear goals particularly when results are expected to take several years to achieve. As detailed in table 4, for fiscal year 2006, NCS reported a target of 118,000 GETS subscribers and achieved 158,669, which also surpassed its 2007 goal. However, NCS did not update its fiscal year 2007 goal of 155,000 when it was achieved in 2006. Similarly, in fiscal year 2008, NCS set a target of 185,000 subscribers and achieved 208,600 subscribers, which surpassed the fiscal year 2009 goal. However, as of April 2009, the goal remained at 204,000 subscribers even though NCS exceeded this level in the previous fiscal year. Similarly, the target level for another measure—the average cost to maintain a priority telecommunications service subscriber—has not been modified to reflect the actual results of the prior year. NCS began using this measure in fiscal year 2007 and has exceeded its target reductions in cost for the 2 years that the measure has been in place. For fiscal years 2008 and 2009, the average cost targets were $15.63 and $14.22, respectively; however, NCS reported that the average cost to maintain a priority service subscriber in 2008 was $13.70, surpassing targeted reductions for both 2008 and 2009. As with the target for the subscriber measures, the average cost target was not modified to build upon actual results of the prior fiscal year. 
Furthermore, the baseline upon which each annual average cost goal is determined is the number of GETS and WPS subscribers. While officials cite reductions in operating costs as one reason for exceeding the target, they also stated that the achievement was more a function of the fact that NCS exceeded the projected number of GETS subscribers. As a result, because the annual GETS subscriber performance measure is not composed of ambitious targets from year to year, the baseline it provides for determining the average cost target is unreliable. Without considering changes in this baseline information—in this case, the number of subscribers—valid comparisons to measure improvement over time cannot be made. Considering past performance in setting targets could help NCS develop a true sense of continued improvement in enlisting priority service subscribers and reducing the costs of serving those subscribers.

Finally, while NCS has implemented an outcome-oriented measure to assess the effectiveness of its priority calling programs during periods of congestion, the information the measure intends to convey—the priority service call completion rate—is not consistent with the methodology used to calculate the results. Specifically, the measure is intended to capture and measure combined call completion rates for GETS and WPS. However, wireless carriers collect the relevant information that NCS reports via this measure, and under current processes for capturing attempted WPS calls, wireless carriers are unable to identify all attempted WPS calls that are not completed. Our previous work holds that performance measures should be clearly stated in order to ensure that the name and definition of the measure are consistent with the methodology used to calculate it. Furthermore, OMB guidance states that agencies are required to discuss the completeness and reliability of their performance data, and any limitations on the reliability of the data. Because the call completion measure does not provide clear information about program performance and its limitations, NCS risks overstating the completion rate for WPS, and the use of this measure may affect the validity of managers' and stakeholders' assessments of WPS performance in comparison to the intended result. NCS officials agreed that opportunities exist to strengthen this measure to ensure that it accurately reflects the activity being measured, and stated they are taking steps to work with carriers that support WPS services to develop a solution that would allow them to track the full range of WPS calls. However, in the meantime, NCS has not committed to revising the measure to accurately reflect the activity being monitored.

The events of September 11, 2001, and the 2005 hurricane season dramatically demonstrated how catastrophic man-made and natural disasters can disrupt communication capabilities and highlighted the need for essential NS/EP officials to be able to communicate during and in the aftermath of such events. NCS continues to recognize the need to keep pace with technological changes and to look for ways to better meet NS/EP personnel's current and future communications needs, as evidenced by the development of its NGN initiative. Information such as costs, available technology, and future capabilities for these types of initiatives is unknown and, as such, requires thoughtful planning to most effectively allocate current and future resources.
These efforts to ensure that the communication capabilities it provides to NS/EP personnel will be operable on and leverage next-generation networks could benefit from better planning. By clearly defining its acquisition approach for the initiative and developing mitigation plans to address known risks and technical challenges, NCS can help minimize cost overruns and schedule delays and, more importantly, help ensure that it is developing services that meet the emerging communication needs of the NS/EP community.

Strategic plans are an essential element in results-oriented program management and provide agencies and stakeholders a common set of operational principles with which to guide actions and decisions. Although DHS stated that it was taking steps to finalize its strategic plan in response to our June 2008 recommendation, it has not yet finalized the plan, which has been in draft since mid-2007, or committed to incorporating key elements of a strategic plan. We continue to believe that our prior recommendation has merit and that NCS could benefit from completing a strategic plan. A strategic plan that identifies strategic goals and objectives, the resources needed to achieve those goals and objectives, and the relationship between planned initiatives and strategic goals could serve as the foundation to help NCS align its daily activities, operations, program development, and resource allocation to support its mission and achieve its goals. As NCS undertakes a variety of new initiatives and attempts to strengthen existing programs, finalizing its strategic plan will also help strengthen NCS's ability to efficiently and effectively allocate resources, inform key stakeholders, and provide agency and congressional decision makers the ability to assess NCS's programs and initiatives.

As part of strategic planning, it is important that related performance measures are linked to and support NCS strategic goals, as well as DHS's strategic goal of ensuring continuity of communications. In the absence of performance measures for the key functions NCS performs as the lead for the federal government's efforts to protect critical communications and as the coordinator for ESF-2, NCS cannot reasonably measure or demonstrate how these core program activities are contributing to achieving all three of its strategic goals and DHS's overall mission of providing continuity of communications. For a performance measure to be used effectively, it is essential that the measure's definition and its intended use are consistent with the methodology used to calculate it. While NCS acknowledges that its primary performance measure for its priority calling programs—the call completion rate—does not capture all attempted WPS calls and is exploring ways to capture the full spectrum of uncompleted calls, by not revising the measure in the meantime to accurately portray what is being measured, NCS continues to inaccurately measure performance and provide potentially misleading information to decision makers. Similarly, by not adjusting the performance targets that are intended to measure the number of subscribers and average costs to build upon and reflect previous years' results, NCS cannot make valid comparisons to measure improvement over time and cannot ensure that its performance goals are reasonable and appropriate.
Beyond adjusting targets for the number of subscribers, opportunities exist to make these measures more outcome-oriented to reflect progress in reaching NCS's ultimate goals for the number of subscribers to its GETS and WPS programs. However, without clearly defining or demonstrating how its ultimate subscriber goals achieve the result of ensuring the availability of communications capabilities for NS/EP personnel who need these services, it will remain difficult to measure progress. To its credit, NCS has identified federal continuity coordinators as critical NS/EP personnel needing access to its programs and has developed an outcome measure to track progress in enlisting and maintaining this group of subscribers. However, without similar measures for other groups that play a significant role in coordinating emergency response and continuity of government, NCS will not be in a position to evaluate its efforts to reach out to, target, and ultimately provide priority calling programs to these groups.

To help ensure that NCS management has sufficient information needed to assess and improve NCS's programs and new initiatives and to effectively support budget decisions, we recommend that the Secretary of DHS direct the Manager of the NCS to take the following three actions:

Develop program plans for the NS/EP NGN initiative that outline an acquisition approach based on available technologies, realistic cost estimates, and that include mitigation plans to address identified challenges and risks.

Follow best practices for strategic planning in finalizing the NCS strategic plan, including identifying the resources needed to achieve its strategic goals and objectives and providing a description of the relationship between planned initiatives such as the NS/EP NGN and strategic goals.

Strengthen NCS's performance measurement efforts by (1) developing measures to cover all core program activities, (2) exploring opportunities to develop more outcome-oriented measures, (3) ensuring performance measure baselines are reliable and based upon past performance, and (4) improving the clarity of its call completion measure.

We provided DHS a draft of this report for review and comment. DHS provided written comments on August 7, 2009, which are summarized below and presented in their entirety in appendix VI. DHS also provided technical comments, which we incorporated where appropriate. DHS disagreed with the recommendation in our draft report that it develop an evaluation plan for its satellite program that includes milestones for continued implementation and a methodology for assessing the results of the pilot before moving forward with the program. Specifically, DHS noted that the pilot program, which was on hold at the time of our review, was now complete. However, at the conclusion of our field work, our understanding from the NCS Director was that the pilot was on hold and that NCS was reassessing various aspects of the pilot, such as conducting a cost-benefit analysis to determine which satellite provider and equipment to use. In light of this discrepancy, we subsequently obtained clarification on the status of the pilot. Our discussion with DHS revealed that the pilot program was terminated rather than completed.
In providing clarification, DHS stated that it agreed with our assessment that the pilot program needed improved planning and metrics documentation and that NCS took a number of issues into consideration including the current availability of push-to-talk capability among existing satellite service providers to determine whether the pilot should be continued. Given these considerations, as well as the issues that we identified such as lack of program objectives, documentation and metrics, NCS terminated the pilot. According to NCS, about $900,000 had already been spent or obligated to support various activities for the pilot program. According to NCS officials, the remaining $1 million for the pilot will be reprogrammed and any funds that had already been obligated but not yet spent will be deobligated and also reprogrammed for other priority communications services. Thus, based on the termination of the pilot, we withdrew our recommendation and have modified our report to reflect the current status of the pilot. DHS concurred with our recommendation that it develop program plans for the NS/EP NGN initiative that outline an acquisition approach based on available technologies, realistic cost estimates, and that include mitigation plans to address identified challenges and risks. Although it concurred with our recommendation, DHS also reported that NCS currently follows a structured approach in the design and implementation of program plans and that it assesses industry trends to help determine program enhancements and mitigation plans. Developing program plans for the NS/EP NGN initiative as we recommended can help NCS minimize cost overruns and schedule delays and help ensure that it is developing services that meet the needs of the NS/EP community. DHS concurred with our recommendation that NCS follow best practices for strategic planning in finalizing the NCS strategic plan including identifying the resources needed to achieve its strategic goals and objectives and providing a description of the relationship between planned initiatives, such as the NS/EP NGN, and strategic goals. DHS stated that all NCS activities are directly linked to its mission and associated performance measures. Finalizing its strategic plan as we have recommended will help provide decision makers with information to help them assess NCS’s programs and initiatives. With regard to our recommendation that NCS strengthen its performance measurement efforts by (1) developing measures to cover all core program activities, (2) exploring opportunities to develop more outcome-oriented measures, (3) ensuring performance measure baselines are reliable and based upon past performance, and (4) improving the clarity of its call completion measure, DHS concurred. Specifically, DHS reported that NCS will continue to develop performance measures. Taking action to strengthen its performance measures as we recommended should help NCS improve its ability to evaluate its efforts to reach out, target, and provide priority calling programs. DHS also commented on the report’s discussion of subscriber database accuracy, stating that it disagreed with what it viewed as our assertion that NCS should be able to easily determine whether certain individuals serving in public positions were still entitled to be GETS subscribers, as well as our expectation that NCS terminate access for individuals regardless of whether the subscriber’s organization has notified NCS to do so. 
DHS also highlighted the steps that NCS takes to help ensure agency points of contact keep NCS’s subscriber database updated. We modified the report to better recognize the role agency points of contact play in updating NCS’s database. DHS also noted that the report suggested that NCS’s outreach efforts are limited to a select number of activities and noted that NCS also meets with other governmental bodies. We have modified our report to clarify that the outreach activities discussed are examples and are not intended to be inclusive of all of NCS’s efforts. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777, or jenkinswo@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. William O. Jenkins, Jr. Director, Homeland Security and Justice Issues The National Communications System (NCS) was established by a memorandum signed by President Kennedy in 1963, in the wake of the communications challenges that arose during the Cuban Missile Crisis when, according to NCS, delays in sending and receiving communications between the United States and foreign governments involved in the crisis threatened to further complicate the crisis. The original memorandum, which has been amended and superseded over time, called for establishing a national communications system by linking together, and improving, the communications assets of various federal agencies. Such a system is to provide the necessary communications for the federal government under all conditions ranging from normal conditions to domestic emergencies and international crises. Today, Executive Order 12,472 is the primary federal guidance in force that dictates the composition and functions of the NCS. Executive Order 12,472 defined the NCS as those telecommunications assets owned or leased by the federal departments, agencies, or entities that comprise the NCS that can meet the national security and emergency preparedness (NS/EP) needs of the federal government, together with a management structure that could ensure that a national telecommunications infrastructure is developed that is responsive to NS/EP needs, among other things. Executive Order 12,472, which was amended by Executive Order 13,286 on February 28, 2003, provided that NCS’s mission is to assist the President, the National Security Council, the Homeland Security Council, and the Directors of the Office of Science and Technology Policy and the Office of Management and Budget in, among other responsibilities, “the coordination of the planning for and provision of NS/EP communications for the Federal government under all circumstances, including crisis or emergency, attack, recovery, and reconstitution.” The NCS organization structure largely consists of federal entities. However, the telecommunications industry serves in an advisory capacity to the federal government on matters regarding NS/EP communications. A description of the roles and responsibilities of the entities that comprise the NCS organization follows.
See figure 4 for an illustration of the current NCS management structure. Executive Office of the President (EOP). Within the EOP, the National Security Council (NSC), the Homeland Security Council (HSC), the Office of Science and Technology Policy (OSTP), and the Office of Management and Budget (OMB) have varying responsibilities for setting the policy direction for NS/EP communications and providing oversight of the NCS. For example, in consultation with the Executive Agent and a group of federal telecommunications officers (known as the NCS Committee of Principals), the EOP helps to determine NS/EP telecommunications requirements. NCS Executive Agent. Pursuant to the Homeland Security Act of 2002, the functions and responsibilities of the NCS Executive Agent were transferred to the Secretary of Homeland Security. Among other things, the Executive Agent is responsible for ensuring that the NCS conducts unified planning and operations, in order to coordinate the development and maintenance of an effective and responsive capability for meeting the domestic and international NS/EP telecommunications needs of the federal government as well as ensuring coordination with emergency management activities of the Department of Homeland Security (DHS). Additionally, the Executive Agent designates the NCS Manager and oversees related activities, including the delivery of priority communications programs (such as Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS)). Office of the Manager, NCS. The Office of the Manager, NCS (OMNCS), falls under the Office of Cyber Security and Communications, which is part of the National Protection and Programs Directorate within DHS. The responsibilities of the NCS Manager include, among other things, preparing for consideration by the NCS Committee of Principals and the Executive Agent: recommendations on an evolutionary telecommunications architecture to meet current and future NS/EP needs; and plans and procedures for the management, allocation, and use, including the establishment of priorities or preferences, of federally owned or leased telecommunications assets under all conditions of crisis or emergency. Additionally, the NCS Manager is responsible for implementing and administering any approved plans or programs as assigned, including any system of priorities and preferences for the provision of communications service, in consultation with the NCS Committee of Principals and the Federal Communications Commission (FCC), to the extent practicable or otherwise required by law or regulation. Further, the NCS Manager is to conduct technical studies or analyses for the purpose of identifying improved approaches that may assist in fulfilling NS/EP telecommunications objectives, among other things. Additionally, in consultation with the NCS Committee of Principals and other appropriate entities of the federal government, the NCS Manager is to ensure that, where feasible, existing and evolutionary industry, national, and international standards are used as the basis for federal telecommunications standards. The OMNCS also includes the National Coordinating Center—a joint industry-government entity—which assists in coordinating the initiation and restoration of NS/EP communications services and is involved in critical infrastructure protection of telecommunications assets. NCS Committee of Principals.
According to NCS, this collaborative body, chaired by the NCS Manager, comprises the key telecommunications officers of those agencies designated by the President that own or lease telecommunications assets of significance to national security or emergency preparedness, and other executive entities that bear policy, regulatory, or enforcement responsibilities of importance to NS/EP telecommunications capabilities. Currently, the NCS Committee of Principals includes representatives from 24 federal departments and agencies—known as the NCS Member Agencies. In accordance with Executive Order 12,472, the NCS Committee of Principals, among other things, provides comments and recommendations to the National Security Council, the Director of OSTP, the OMB Director, the NCS Executive Agent, or NCS Manager regarding ongoing or prospective activities of the NCS. According to NCS, the NCS Committee of Principals, in accordance with its bylaws, has established subgroups such as the NCS Council of Representatives to help support the work activities of the NCS. Further, the NCS Committee of Principals established other groups such as the Priority Services Working Group to analyze the potential impact of future technologies on priority services programs and examine the outreach efforts for the GETS and WPS programs, among other things. The National Security Telecommunications Advisory Committee (NSTAC). The NSTAC was established in 1982 by Executive Order 12,382 to serve as an advisory committee to the President on matters related to NS/EP communications and may comprise no more than 30 industry leaders appointed by the President. The NSTAC members are usually chief executive officers from telecommunications companies, network service providers, information technology firms, and finance and aerospace companies. As we previously reported, over the course of its longstanding relationship with the NSTAC, the NCS has worked closely with NSTAC member companies during emergency response and recovery activities following a terrorist attack or natural disaster. For example, after the September 11, 2001, terrorist attacks, NSTAC member companies immediately coordinated with NCS to assist with communication restoration efforts despite the fact that some of their network infrastructure had been among the most severely damaged. As we have previously reported, the NCS and NSTAC share information on a variety of issues including federal policies related to NS/EP communications and changes in the telecommunications marketplace. The NSTAC has also issued multiple reports addressing a wide range of policy and technical issues regarding communications, information systems, information assurance, critical infrastructure protection, and other NS/EP communications concerns. For example, in 2006, NSTAC issued a report that identified challenges related to NS/EP communications and provided recommendations to the President intended to help ensure that next generation network initiatives meet NS/EP users’ needs, among other things. As provided under Executive Order 12,382, the NSTAC has established subgroups, such as the Industry Executive Committee, to help it carry out its functions. These subgroups may be composed, in whole or in part, of individuals who are not members of the NSTAC.
To analyze the extent to which the National Communications System (NCS) provides priority communications programs, we reviewed relevant legislation, regulations and other documentation that outline NCS responsibilities in ensuring the continuity of communication including the Homeland Security Act of 2002, Executive Orders 12,472 and 13,231, and NCS Directive 3-10. We also reviewed budget requests, annual reports, the Performance Assessment Rating Tool (PART) reports submitted to the Office of Management and Budget (OMB), and other documentation related to NCS activities. We also obtained and reviewed relevant agency documents such as internal briefings, program planning documents, and standard operating procedures that describe how Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS) operate and the capabilities that each program delivers. We obtained information on the mechanisms NCS utilizes to collect, track and analyze the performance of GETS and WPS. In addition, we obtained and analyzed data on the performance of GETS and WPS during select emergency or national special security events such as the 1995 Oklahoma City Bombing, the September 11, 2001, attacks, Hurricane Katrina in 2005, and the 2009 Presidential Inauguration, among others. We also interviewed NCS officials to obtain information on the agency’s role in ensuring continuity of communications, the types of priority communications capabilities it provides to the national security and emergency preparedness (NS/EP) community—specifically through the GETS, WPS, and Telecommunications Service Priority (TSP) programs—as well as the types of challenges, if any, the agency may face in providing these services. We interviewed officials from the Federal Communications Commission (FCC) to obtain information on the agency’s role in providing emergency communications, including how it works with NCS in providing priority communications capabilities. Furthermore, we interviewed telecommunications industry representatives from AT&T, Qwest Communications, and Verizon that are among the U.S. telephone carriers that provide NS/EP communications services. Although their views cannot be generalized to all telecommunications companies that provide NS/EP communications, the information we obtained helped to enhance our understanding of their role in providing emergency communications and their views on the impact the next generation network (NGN) technology transition may have on NCS’s priority communication programs. We also interviewed NS/EP officials from a non-probability sample of 15 states and 13 localities to obtain their perspectives and views on the NCS and its priority communication programs. Specifically, we obtained information from these officials regarding (1) their awareness of the NCS and the GETS, WPS, and TSP programs; (2) the extent they had utilized these programs in responding to an emergency situation and/or in their training and exercise activities; and (3) their perspectives on the benefits of these priority calling programs and potential barriers to participation. In selecting these states and localities, we considered a variety of factors including (1) the frequency and types of declared disasters by the Federal Emergency Management Agency (FEMA), (2) geographic dispersion, and (3) topographical factors that could affect the functionality of communications. 
The selected states and localities represent a range of natural disasters, terrains, climates, and population densities and also include areas that have recently experienced high-profile natural disasters or man-made attacks. While the perspectives of the officials we interviewed cannot be generalized to reflect the views of NS/EP emergency management officials in all states and localities, we believe the perspectives of the officials in these locations provided us with an overview and useful information on the NCS and the priority communications programs it provides. To determine how NCS enlists subscribers and controls access to its priority programs, we collected and analyzed documentation and interviewed NCS officials to (1) identify subscriber eligibility criteria, (2) determine NCS’s outreach efforts to enlist new subscribers for its priority calling programs, and (3) identify its internal controls for granting and controlling access to these programs. With regard to NCS’s outreach efforts, we obtained and reviewed documentation such as brochures, newsletters, and conference schedules on NCS outreach efforts, including its use of regional outreach coordinators and its awareness booth deployments at various emergency management conferences. We also attended several NCS user-focused meetings and obtained documentation that detailed NCS efforts to attract new subscribers and provide support to current subscribers. To determine what internal controls NCS utilizes to grant and control access to its priority calling programs, we obtained the NCS standard operating procedures for the GETS and WPS programs, which outlined the procedures and processes to participate in the programs, including the eligibility criteria, the approval process, and the re-validation process. We also compared these standard operating procedures with criteria in Standards for Internal Control in the Federal Government. To determine whether NCS adhered to its procedures for terminating access for subscribers who no longer meet the programs’ eligibility criteria, we reviewed a nonprobability sample of records for 76 former federal and 9 former state government officials, including former members of the U.S. Senate as well as members and delegates of the U.S. House of Representatives for the 109th Congress; immediate past heads of federal departments and agencies as of August 2008; and immediate past governors of U.S. states and territories as of August 2008, which is when we obtained the subscriber data. We selected these groups because they served in public positions that would allow NCS to easily determine that their positions had ended and, in turn, work with the subscriber’s organization to update account status, as appropriate. Although the results of our work cannot be generalized to evaluate the effectiveness of controls used for all NCS program subscribers, the information obtained provided us with useful information about the extent to which subscriber records for these groups were terminated following a change in the subscriber’s eligibility status. Because the subscriber database, in its entirety, is classified, we have limited our reporting of the results of our analysis to only nonclassified information; however, this does not affect our findings.
To assess the reliability of these data, we reviewed the data for obvious problems with completeness or accuracy, interviewed knowledgeable agency officials and contract support staff about the data quality control processes, and reviewed relevant documentation, such as the database dictionary that describes the data fields in the subscriber database. When we found discrepancies (such as duplicate records), we brought them to the attention of NCS officials and its contract support staff to better understand the nature of the discrepancies and resulting impact on our work. We performed electronic testing on the data and found the data to be sufficiently reliable for the purposes of this report. To determine what challenges can affect NCS’s delivery of its priority communications programs, we interviewed relevant NCS officials who have responsibilities for these programs. We also obtained information and reviewed documentation from the agency regarding its efforts to implement the Satellite Priority Service pilot program, as well as its efforts to leverage NGN technology in its priority communication programs. We compared this information with our previous work on pilot program planning and technology acquisition. To assess NCS’s overall planning and evaluation efforts, we interviewed NCS officials and reviewed relevant documentation regarding the agency’s strategic planning efforts and the mechanisms it uses to evaluate its services. Specifically, we reviewed and analyzed NCS’s draft strategic plan to determine the extent to which the plan outlined the agency’s short- and long-term strategic goals and objectives, the time frames associated with those goals and objectives, their current status, and the internal and external factors that may affect the agency’s ability to achieve them. We also obtained and reviewed the OMB Performance Assessment Rating Tool, NCS’s Congressional Budget Justifications, and other documents that outlined the performance measures used to assess the extent to which NCS is achieving its goals and objectives, as well as planned milestones and spending for its priority calling programs. To assess the effectiveness of NCS’s planning efforts, we compared its efforts with federal best practices contained in our past reports, which discussed the importance of strategic planning. We also utilized guidance from OMB Circular A-11 and related federal legislation, such as the Government Performance and Results Act of 1993, which identifies the six key elements of a strategic plan. In addition, we interviewed NCS officials about the agency’s strategic planning efforts and the mechanisms it uses to monitor and evaluate its services. While NCS is not required to explicitly follow these guidelines, the guidelines do provide a framework for effectively developing a strategic plan and the basis for program accountability. We conducted this performance audit from June 2007 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
The Telecommunications Service Priority (TSP) program provides priority provisioning and restoration of telecommunications services that support emergency operations facilities for certain federal, state, and local governments and other entities. Such services include equipment used to transmit voice and data communication by wire, cable, and satellite, among other things. During and following an emergency event, wireless and wireline carriers may receive numerous requests for new telecommunications service as well as for the restoration of existing services. Under this program, telecommunications carriers and their partners (collectively referred to as service vendors) are required to restore national security and emergency preparedness (NS/EP) telecommunications services that suffer outage, or are reported as unusable or otherwise in need of restoration, before non-NS/EP services. As with Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS), certain government agencies and other groups are identified as having specific NS/EP responsibilities that qualify them for priority provisioning and restoration of services. However, unlike GETS and WPS, for which new subscriptions can be requested and approved during emergency response and recovery activities, authorization to receive TSP priority services must be in place before it is needed. Although the federal government does not charge a fee, telecommunications service providers (such as wireless carriers and cable and satellite providers) may charge an initial startup fee of up to $100 per circuit and a monthly fee of up to $10 per circuit. The National Communications System (NCS) reported that, as of fiscal year 2008, over 1,000 organizations had registered more than 191,000 circuits under the TSP program. Telecommunications personnel have traditionally faced difficulties in accessing disaster areas in order to make TSP repairs to communications assets. According to telecommunications representatives that are part of the National Coordinating Center for Telecommunications (NCC) within NCS, access for repair crews to disaster areas has been an issue dating back to Hurricane Hugo in 1989 and continuing through the aftermath of Hurricane Katrina. For example, an independent panel formed to examine the telecommunications challenges during Hurricane Katrina reported that inconsistent and unclear requirements for repair crews and their subcontractors to gain access to the affected area impeded their efforts to make necessary repairs, including those that they are required to complete under the TSP program. The panel reported that there were no mechanisms in place to issue credentials to those who needed them prior to Hurricane Katrina making landfall. Consequently, personnel from telecommunications companies were unable to gain access to repair some communications assets in the disaster area because they lacked the necessary credentials to access these areas. For example, during Hurricane Katrina, Louisiana authorities, among others, provided credentials to telecommunications repair crews to permit them access to certain affected areas; however, telecommunications personnel reported that, within disaster areas, credentials that permitted access through one checkpoint would not be honored at another. In addition, these personnel reported that in some cases the checkpoints required different documentation and credentialing before granting access to repair personnel.
As a result, repair personnel had to carry multiple credentials and letters from various federal, state, and local officials authorizing their access to the disaster area. Furthermore, telecommunications personnel were unclear about which government agency had the authority to issue the necessary credentials. Similarly, repair crews reported that other factors delayed or interrupted the delivery of TSP services, such as the enforcement of curfews and other security procedures intended to maintain law and order. Although the full scope of these credentialing issues is outside NCS’s jurisdiction, under the communications annex of the revised 2008 National Response Framework, NCS is to coordinate with other emergency support function 2 (ESF-2) support agencies, among others, to ensure that telecommunications repair personnel have access to restore communications infrastructure in the incident area. To help facilitate this, NCS has taken steps to work with federal, state, and local government agencies as well as the private sector to identify solutions. For instance, NCS has coordinated with emergency management officials in Georgia and Louisiana to develop standard operating procedures to ensure access for critical infrastructure workers during emergencies or disasters. NCS officials also told us that they have begun to catalog the access procedures for various states and localities that could be provided to telecommunications personnel in order to facilitate access to damaged infrastructure in the aftermath of an emergency or disaster. In addition, other federal agencies, such as the Federal Emergency Management Agency (FEMA), have also taken steps to address this issue. For example, in November 2008, FEMA released for comment credentialing guidelines for essential personnel who need access to disaster areas in order to facilitate response, recovery, and restoration efforts. The guidelines are intended to provide a uniform approach at the state and local levels for giving telecommunications repair personnel, among others, the access and credentials needed to enter a disaster area in order to expedite the restoration of communication capabilities. Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS) are designed to achieve a probability that 90 percent of calls made using these services will successfully connect. The ability to communicate is critical to coordinating emergency response and recovery efforts during the first 72 hours following an emergency; however, the availability of communications can be disrupted by increased call volume or outages that occur in wireline and wireless networks. According to NCS, telephone calls made without the use of GETS or WPS during nonemergency periods generally result in a 99 percent likelihood of successful completion—that is, (1) the called party answers the call, (2) the called number rings but is not answered, or (3) the called number responds with a busy signal. However, during a disaster or emergency event, NCS officials stated that the public switched telephone network (PSTN) can experience up to 10 times the normal call volume. In contrast, without GETS or WPS, approximately 9 out of every 10 calls would not be completed during a period when the PSTN is highly congested. NCS’s priority calling programs have been used to facilitate communications across a spectrum of emergencies and other major events, from the 1995 Oklahoma City bombing through the 2009 Presidential Inauguration.
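The completion figures cited above lend themselves to a back-of-envelope illustration. The sketch below is illustrative only: the baseline call volume is a hypothetical figure chosen for the example, and the probabilities are simply the approximate values reported to us (about 99 percent completion in nonemergency periods, roughly 1 of 10 ordinary calls completing under heavy congestion, and the 90 percent design goal for GETS and WPS). It is not NCS’s or any carrier’s performance model.

```python
# Back-of-envelope illustration of the completion figures discussed above.
# The baseline call volume is hypothetical; the probabilities are approximate
# values from the report, not an NCS or carrier performance model.

NORMAL_CALL_VOLUME = 1_000        # hypothetical baseline number of call attempts
CONGESTION_MULTIPLIER = 10        # PSTN may see up to 10 times normal volume

P_NORMAL = 0.99                   # ~99% of ordinary calls complete in nonemergency periods
P_CONGESTED = 0.10                # ~9 of 10 ordinary calls fail when the PSTN is highly congested
P_PRIORITY_GOAL = 0.90            # GETS/WPS design goal: 90% completion probability

attempts = NORMAL_CALL_VOLUME * CONGESTION_MULTIPLIER

print(f"Call attempts during congestion:           {attempts}")
print(f"Expected completions without GETS/WPS:     {attempts * P_CONGESTED:.0f}")
print(f"Expected completions at the GETS/WPS goal: {attempts * P_PRIORITY_GOAL:.0f}")
print(f"Completions in a normal period, for scale: {NORMAL_CALL_VOLUME * P_NORMAL:.0f} of {NORMAL_CALL_VOLUME}")
```

Under these assumed figures, priority treatment is the difference between roughly 1,000 and 9,000 completed calls out of 10,000 attempts made during congestion.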
GETS and WPS usage has varied greatly during disasters or emergencies as the programs have evolved, and the programs have generally achieved call completion rates ranging from 68 percent to 99 percent. For example, during the 1995 Oklahoma City bombing, 291 of the 429 GETS calls attempted (calls that might not otherwise have been completed because of network overload) reached the intended destination number, a call completion rate of about 68 percent. In contrast, during Hurricane Katrina in 2005, the number of GETS calls attempted was 28,556, of which 27,058 (or 95 percent) were successfully completed (see table 5). Additionally, GETS and WPS capabilities were also used during the 2003 power outage that affected New York City and other areas. During this event, there were fewer GETS and WPS calls made in comparison to other events; however, the call completion rates for the duration of the event were 92 percent and 82 percent, respectively. The National Communications System (NCS) uses five broad categories to determine who may be eligible to participate in its priority calling programs such as the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). Eligible subscribers may include personnel from federal, state, local, or tribal governments, as well as private industry and nonprofit organizations (see table 6 below for further detail on each of these categories). In addition, these categories are used to prioritize WPS calls in order to further ensure that communications are first available for senior executive leaders and policy makers at the federal, state, and local government level. The Federal Communications Commission (FCC), in response to NCS’s request, established these priority levels, which are used to determine which WPS calls are to receive the first available channel, with level five receiving the lowest priority (though all levels receive priority over non-WPS callers). In the event of an emergency and network congestion, the mobile switching center queues the call according to the subscriber’s priority level and call initiation time. For example, authorized staff from the Executive Office of the President would receive priority over national security and emergency preparedness (NS/EP) officials who have responsibility for public health and law enforcement if they placed calls at the same time. NCS has not determined whether a similar approach is required for the GETS program; however, if a similar approach is determined to be needed, NCS believes it can apply the WPS approach to the GETS program. Table 6 also shows the priority level for each user category. In addition to the contact named above, Kirk Kiester, Assistant Director, and Candice Wright, Analyst-in-Charge, managed this review. Mark Abraham, Flavio Martinez, and Daniel Paepke made significant contributions to the work. David Alexander and Arthur James assisted with design, methodology, and data analysis. Sally Williamson provided assistance in report preparation. Pille Anvelt provided assistance with the report’s graphics.
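The WPS queuing discipline described above, first by FCC-established priority level and then by call initiation time, can be sketched in a few lines of code. The sketch below is a simplified, hypothetical illustration of the ordering rule only; the callers and times are invented, and it does not represent how a mobile switching center is actually implemented.

```python
import heapq
from dataclasses import dataclass, field

# Simplified illustration of the WPS ordering rule described above: calls are
# served by subscriber priority level (1 = highest, 5 = lowest WPS priority),
# with ties broken by call initiation time. Hypothetical data only; this is
# not a mobile switching center implementation.

@dataclass(order=True)
class QueuedCall:
    priority_level: int        # 1 (e.g., executive leadership) through 5
    initiation_time: float     # seconds into the congestion event
    caller: str = field(compare=False)

queue = []
heapq.heappush(queue, QueuedCall(3, 10.0, "public health official"))
heapq.heappush(queue, QueuedCall(1, 12.0, "Executive Office of the President staff"))
heapq.heappush(queue, QueuedCall(3, 8.0, "law enforcement official"))

# The level-1 call gets the first available channel even though it was placed
# last; the two level-3 calls are then served in the order they were initiated.
while queue:
    call = heapq.heappop(queue)
    print(call.priority_level, call.initiation_time, call.caller)
```

Non-WPS calls would sit behind all five levels; the example simply shows the level-1 call being served first even though it was initiated after the two level-3 calls, consistent with the priority treatment described above.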
Government functions and effective disaster response and management rely on the ability of national security and emergency preparedness (NS/EP) personnel to communicate. The Department of Homeland Security's (DHS) National Communications System (NCS) is responsible for ensuring continuity of NS/EP communications when network congestion or damage occurs. As requested, GAO assessed the (1) priority communication programs NCS provides, how it enlists subscribers, and to what extent NCS controls access to these programs; (2) challenges that can affect delivery of these programs; and (3) extent to which NCS plans for and evaluates its services. GAO reviewed NCS program documents, such as annual reports and access control procedures, and data on program subscribers. GAO also interviewed officials from NCS and select state and local government entities. GAO compared NCS performance measures to federal best practices. NCS has two programs to provide NS/EP personnel with priority calling service when telephone networks are congested or damaged--the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). NCS has undertaken several efforts, such as outreach at industry conferences, to increase participation in and control access to these programs. According to NCS, though outreach efforts have helped to increase overall enrollment, it is working to further address possible cost barriers to participation in WPS, such as discussing options with wireless carriers to help defray costs. In addition, NCS has implemented policies and procedures to ensure that access to its priority programs is limited to authorized users. GAO's review of select GETS and WPS subscriber data revealed that, for the 85 records examined, NCS generally followed its policies and procedures to limit GETS and WPS access to authorized subscribers. NCS is taking steps to address inherent challenges in the communications environment--such as network congestion. For example, NCS initiated a satellite pilot program to allow NS/EP officials to circumvent severely damaged or congested traditional telephone networks. However, methods for implementation and evaluation of the pilot were unclear, and NCS subsequently terminated the pilot. NCS is also working to provide priority voice and data NS/EP communications as part of the evolving telecommunications networks, but it has not finalized an acquisition approach based on available technologies, costs, or plans to mitigate technological and other challenges to deliver such capabilities. The lack of this information has led to congressional restrictions on NCS's funding. As NCS attempts to ensure that GETS and WPS services can operate in these evolving networks, an acquisition approach that includes this information will provide NCS officials and Congress with essential information to most effectively allocate resources and guide decision making. Although DHS agreed with GAO's June 2008 recommendation to complete the NCS strategic plan, NCS has not finalized its strategic plan, which has been under development since 2007. Furthermore, existing performance measures do not cover all of its core responsibilities, as suggested by best practices, and certain performance measures could be strengthened.
For example, NCS does not have a measure to gauge its performance in two of its key federal roles--critical infrastructure protection for communications under DHS's National Infrastructure Protection Plan as well as coordination of communications issues under the National Response Framework. Furthermore, NCS does not use prior years' enrollment levels to help determine increases, if any, to be made to future years' goals for user enrollment. Fully and accurately measuring performance is critical to ensuring the agency and key stakeholders--such as Congress--base program and resource decisions on actual performance.
You are an expert at summarizing long articles. Proceed to summarize the following text: OSD has not issued a policy, nor has DOD developed doctrine, to address exposures of U.S. troops to low levels of chemical warfare agents on the battlefield. DOD officials explained that low-level exposures were not addressed because there was no validated threat and no consensus on what constituted low-level exposures or whether they produced adverse performance or health effects in humans. Nevertheless, some entities within DOD are preparing chemical defense strategies and developing technologies that are expected to address low-level exposures. OSD has not issued a policy on force protection regarding low-level chemical warfare agent exposures, and DOD has not developed doctrine that addresses low-level exposures to chemical warfare agents, either in isolation or in combination with other contaminants that would likely be found on the battlefield. DOD officials have characterized the primary intent of existing NBC doctrine for battlefield management as enabling mission accomplishment by ensuring force preservation rather than force protection. The operational concept that underlies NBC doctrine and drives chemical warfare defense research, development, and acquisition has been to “fight through” the chemical and biological threat and accomplish the mission, with the assumption that overwhelming conventional capabilities will enable U.S. forces to prevail on the battlefield. Thus, the focus on massive battlefield chemical weapon use has framed the concepts of the role of chemical and biological defense in warfare. In a battlefield scenario, the NBC defense goal is to ensure that chemical exposures to the troops result in less than 1 percent lethalities and less than 15 percent casualties, enabling the affected unit to remain operationally effective. Nevertheless, DOD doctrine differentiates between possible high-level chemical warfare threats in foreign battlefield scenarios and low-level chemical exposures in domestic chemical weapon storage and destruction facilities. In a domestic chemical storage scenario, facilities and procedures are required to ensure that unprotected workers would receive no more than an 8-hour occupational exposure limit and that the adjacent civilian population would receive no more than a 72-hour general population limit, both of which are not expected to result in any adverse health effects. According to DOD, its doctrine does not address low-level exposures on the battlefield because there is no (1) validated threat, (2) definition of low-level exposures, or (3) consensus on the effects of such exposures. Moreover, if low-level exposures were to be addressed, DOD officials said that the cost implications could be significant. For example, increased costs could result from the need for more sensitive chemical detectors, more thorough decontamination systems, or more individual and collective protection systems. However, no studies have been done to evaluate the potential cost implications of expanding policy and doctrine to address low-level exposure concerns for force protection. OSD officials said that any future low-level requirements would need to compete for funds with an existing list of unfunded chemical and biological defense needs.
In October 1997, the Presidential Advisory Committee on Gulf War Veterans’ Illnesses noted that existing DOD doctrine addresses only exposure to debilitating or lethal doses of nerve or mustard chemical warfare agents on the battlefield. The Committee subsequently recommended that DOD develop doctrine that addresses possible low-level subclinical exposure to chemical warfare agents. Specifically, the Committee recommended that DOD’s doctrine establish requirements for preventing, monitoring, recording, reporting, and assessing possible low-level chemical warfare agent exposure incidents. In his February 1998 testimony before the House Committee on Veterans’ Affairs, the Special Assistant to the Deputy Secretary of Defense for Gulf War Illnesses stated that DOD does not believe there is a need for doctrine concerning low-level chemical exposures but that DOD would consider taking action if research indicates a need for such doctrine. DOD officials said that there is no validated low-level threat and that the probability of encountering low-level contaminated conditions on the battlefield is minimal. If low-level chemical exposures were to occur, the officials stated that the exposures would likely be inadvertent and momentary—resulting from residual contamination after the use of high-dose chemical munitions. DOD experts on the storage and release of chemical warfare agents have asserted that only in a laboratory could agent dosages exist at a low concentration more than momentarily. Nevertheless, DOD has studied how the intentional use of low doses of chemical warfare agents could be used to achieve terrorist and military objectives. DOD raised concerns over the intentional use of low-level chemical warfare agents in its 1997 study, Assessment of the Impact of Chemical and Biological Weapons on Joint Operations in 2010, which analyzed the impact of state-sponsored terrorist attacks using chemical warfare agents. The study’s threat scenario, which was not validated by any intelligence agency, entailed chemical warfare agents being spread thinly, avoiding lethal levels as much as possible, for the purpose of stopping U.S. military operations and complicating detection and cleanup. The study found that massive battlefield use of chemical and biological weapons is no longer the most likely threat and that U.S. forces must be able to counter and cope with limited, localized chemical and biological attacks, including attacks delivered by asymmetrical means. This study exposed serious vulnerabilities to the U.S. power projection capabilities that could be exploited by the asymmetrical employment of chemical and biological weapons both in the United States and in foreign theaters of operation. The study also found that the U.S. intelligence capability to determine small-scale development and intent to use chemical or biological weapons, particularly for limited use, is inadequate. Shortfalls include insufficient ability to collect and assess indications and warnings of planned low-level chemical and biological attacks. The report concluded that OSD should significantly increase its level of attention to vulnerabilities posed by an enemy using asymmetrical and limited applications of chemical and biological weapons. The absence of an OSD policy or DOD doctrine on low-level exposures is partly attributable to the lack of a consensus within DOD on the meaning of low level. 
DOD officials responsible for medical chemical defense, nonmedical chemical defense, NBC doctrine, and NBC intelligence provided varying definitions of low-level exposure, including the Oxford Dictionary definition, no observable effects, sublethal, and 0.2 LD50. Despite the differing responses, each one can be depicted as a location along the lower end of a chemical warfare agent exposure and effects continuum. (App. IV describes physiological effects from increasing levels of chemical warfare agent exposures.) Figure 1 shows that one end of the continuum is extremely high exposures that result in death, and the other end is no or minimal exposures that result in no performance or health effects. Between these extremes is a range of exposures and resulting effects. In addition to a lack of consensus on the definition or meaning of low-level exposures, there is a lack of consensus within DOD and the research community on the extent and significance of low-level exposure effects. These differences result from several factors. First, the chemical warfare agent dose-response curves can be quite steep, leading some DOD officials and researchers to question the concern over a very narrow range of sublethal dose levels. Second, the extrapolation of findings from studies on the effects of chemical warfare agent exposures in animals to humans can be imprecise and unpredictable. Third, the impacts of different methods of chemical warfare agent exposure, such as topical, injection, and inhalation, may result in varied manifestations and timings of effects, even with comparable concentrations and subject conditions. For example, many of the effects attributable to chemical warfare agent exposure are subjective and either do not occur or cannot be measured in many animal species. Fourth, information on the combined effects of low-level exposures is largely lacking. Nearly all research on low-level effects addresses single agents in isolation; defining low levels of an agent when present in combination with other battlefield contaminants has not been addressed. In addition, most research has involved single, acute exposures with observations made over several hours or days. Few studies have examined the possible long-term effects of continuous or repeated low-level exposures. Last, research is not yet conclusive as to what level of exposure is militarily or operationally significant. The impact of a specific symptom resulting from chemical warfare agent exposure may vary by the military task to be performed. For example, miosis (constriction of the eye’s pupil) may have a greater adverse impact on a pilot or a medical practitioner than a logistician. Nonetheless, the dose and effects data are only some of the many factors considered in risk analyses conducted by military commanders. DOD officials told us that trade-offs among competing factors are more often than not based on professional judgment of persons with extensive knowledge based on military and technical education, training, and experience rather than an algorithm with numerical input and output. Despite the lack of an OSD policy on low-level exposures, some elements within DOD have begun to address issues involving such exposures. In describing DOD’s NBC defense strategy for the future, the Chairman of the Joint Service Materiel Group noted that the presence of low levels of chemical warfare agents will be one of the factors to consider before sending U.S. troops to a contingency.
Specifically, the future strategy will no longer be primarily shaped by the occurrence of mild physiological effects, such as miosis, but rather the possible long-term health effects to U.S. forces. Lessons learned from the Gulf War are reflected in DOD’s NBC defense strategy, which focuses on the asymmetrical threat. Gulf War Syndrome and low-level threats are identified as two of the concerns to be addressed in the future NBC defense strategy. The Group Chairman added that traditionally the de facto low-level definition has been determined by DOD’s technical capability to detect the presence of an agent. However, the Chairman stated that the low-level concept in future chemical defense strategies will need to be defined by the medical community and consider the long-term health effects of battlefield environments. The Joint Service Integration Group—an arm of the Joint NBC Defense Board that is responsible for requirements, priorities, training, and doctrine—is working with the services to create a joint NBC defense concept to guide the development of a coherent NBC defense program. One of the central tenets of the proposed concept is to provide effective force protection against exposure threats at the lower end of the continuum, such as those from terrorism and industrial hazards. Also, the proposed concept envisions a single process for force protection to provide a seamless transition from peacetime to wartime. Even though the levels and types of threat can differ, a single overall process can meet all joint force protection needs. Thus, the NBC joint concept will address threats against DOD installations and forces for both peacetime and military conflicts. In addition, the joint concept will provide a conceptual framework for defense modernization through 2010, but the specific programs and system requirements necessary for the implementation of the concept will not be articulated. The services are concurrently identifying NBC defense joint future operational capabilities to implement the joint concept. Several of these capabilities relate to low-level exposure, such as (1) improving detection limits and capabilities for identifying standard chemical warfare agents by 50 percent, (2) lowering detection sensitivity limits and detection response times for identifying standard chemical warfare agents by 50 percent, and (3) lowering detection response time for standard biological agents by at least 50 percent. Even in the absence of adopted joint force operational capabilities, DOD is incorporating low-level capabilities in the design of new chemical defense equipment. For example, the Joint Chemical Agent Detector, currently under development, is expected to provide an initial indication that a chemical warfare attack has occurred and detect low-level concentrations of selected chemical warfare agents. The detector will replace currently fielded systems that have a limited ability to provide warning of low-dose hazards from chemical warfare agents. The operational requirements for the detector specify that it will be able to detect low-level concentrations of five nerve agents and two blister agents. However, the low-level requirement necessitates trade-offs between the breadth of agents that the detector can identify and its ability to monitor low-level concentrations for a select few agents. Thus, the next-generation chemical warfare agent detector is expected to have a capability to detect lower chemical warfare agent concentrations in more locations. 
In the absence of policy—or additional research on low-level effects—it cannot be known whether the current, less capable detectors would have the appropriate capabilities to meet the requirements of a low-level exposure doctrine. Research on animals and humans conducted by DOD and others has identified some adverse psychological, physiological, behavioral, and performance effects of low-level exposure to some chemical warfare agents. Nonetheless, researchers do not agree on the risk posed by low-level exposures and the potential military implications of their presence on the battlefield, whether in isolation or in combination with other battlefield contaminants. DOD has no research program to address the remaining uncertainties regarding the performance and health effects of low-level exposures to chemical warfare agents; however, two new research initiatives are currently under consideration. The majority of the chemical warfare agent research has been on organophosphate nerve agents and related pesticides. At low doses, nerve agents produce a wide range of effects on the central nervous system, beginning with anxiety and emotional instability. Psychological effects in humans from nerve agent VX on skin have been noted earlier than physical effects (e.g., nausea and vomiting) or have appeared in the absence of physical effects. The psychological effects were characterized by difficulty in sustaining attention and slowing of intellectual and motor processes. Doses considerably below the LD50 can degrade performance and alter behavior. These performance and behavioral effects have clear military implications because affected service personnel exposed to chemical warfare agents might not only lose the motivation to fight but also lose the ability to defend themselves and carry out the complex tasks frequently required in the modern armed forces. Moreover, the detrimental effects of exposure to single doses of nerve agents may be prolonged. Concern about low-level chemical warfare agent effects predates Operation Desert Storm. In the 1980s, the Air Force conducted research on the bioeffects of single and repeated exposures to low levels of the nerve agent soman due to concerns about the effects of low-level chemical agent exposures on vulnerable personnel—such as bomb loaders, pilots, and medical personnel—who may be required to work in low-level contaminated environments. The Air Force found that the nerve agent degraded performance on specific behavior tasks in the absence of obvious physical deficits in primates. Thus, even for extremely toxic compounds, such as organophosphate nerve agents, which have a steep dose-response curve, task performance deficits could be detected at low levels of exposure that did not cause any overt signs of physical toxicity. This research was unique because low-level exposures were thought at that time to be unlikely or unrealistic on the battlefield. Table 1 shows examples of research conducted or funded by DOD on the behavioral and performance effects of organophosphate nerve agents. The research examples reveal that sublethal exposures of an agent can have a variety of effects (depending on the species, exposure parameters, time, and combination of exposures) and produce measurable, adverse effects on physiology and behavior (both motor and cognitive performance).
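The steep dose-response relationship noted above can be made concrete with a purely illustrative calculation. The log-logistic form and the slope value in the sketch below are modeling conventions chosen only for illustration; they are not parameters drawn from DOD data or from the studies summarized in table 1.

```python
# Purely illustrative sketch of why a steep dose-response curve narrows the
# "low-level" band. The log-logistic form and slope are illustrative modeling
# conventions, not parameters from DOD data or the studies cited in table 1.

def p_effect(dose_fraction_of_ld50: float, slope: float = 8.0) -> float:
    """Probability of a given effect at a dose expressed as a fraction of the
    median effective dose, under an assumed log-logistic dose-response model."""
    return 1.0 / (1.0 + (1.0 / dose_fraction_of_ld50) ** slope)

for fraction in (0.2, 0.5, 0.8, 1.0, 1.25):
    print(f"dose = {fraction:>4} x LD50 -> modeled P(effect) ~ {p_effect(fraction):.3f}")
```

Under these assumed parameters the modeled response rises from well under 1 percent at half the median dose to roughly 86 percent at 1.25 times it, which is one way to see why some officials question how much room exists for a distinct low-level range, even as the Air Force work described above found performance deficits at doses that produced no overt physical toxicity.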
In our prior report on Gulf War illnesses, we summarized research on the long-term health effects of chemical warfare agents, which were suspected of contributing to the health problems of Gulf War veterans. The report cited research suggesting that low-level exposure to some chemical warfare agents or chemically related compounds, such as certain pesticides, is associated with delayed or long-term health effects. Regarding delayed health effects of organophosphates, we noted evidence from animal experiments, studies of accidental human exposures, and epidemiological studies of humans that low-level exposures to certain organophosphorus compounds, including sarin nerve agents to which some U.S. troops may have been exposed, can cause delayed, chronic neurotoxic effects. We noted that, as early as the 1950s, studies demonstrated that repeated oral and subcutaneous exposures to neurotoxic organophosphates produced delayed neurotoxic effects in rats and mice. In addition, German personnel who were exposed to nerve agents during World War II displayed signs and symptoms of neurological problems even 5 to 10 years after their last exposure. Long-term abnormal neurological and psychiatric symptoms, as well as disturbed brain wave patterns, have also been seen in workers exposed to sarin in manufacturing plants. The same abnormal brain wave disturbances were produced experimentally in nonhuman primates by exposing them to low doses of sarin. Delayed, chronic neurotoxic effects have also been seen in animal experiments after the administration of organophosphates. In other experiments, animals given a low dosage of the nerve agent sarin for 10 days showed no signs of immediate illness but developed delayed chronic neurotoxicity after 2 weeks. Nonetheless, some DOD representatives in the research community have expressed considerable doubt that low-level exposures to chemical warfare agents or organophosphates pose performance and long-term health risks—particularly in regard to the likelihood that low-level exposures are linked to Gulf War illnesses. These doubts stem from the lack of a realistic scenario, the lack of adverse long-term health effects observed in studies of controlled and accidental human exposure or animal studies, and results that are viewed as incompatible with the principles of biology and pharmacology. For example, the body may be able to detoxify low doses of soman while the agent is in the blood and before it can affect the central nervous system; therefore, for each nerve agent there may be a threshold of exposure below which no effects will result. Researchers we interviewed did agree that the work that has been done to date is lacking in several aspects, including (1) the effects of exposure to low levels of chemical warfare agents in combination with other agents or contaminants likely found on future battlefields; (2) extrapolation of animal models to humans; (3) the breadth of agents tested, types of exposure routes, and length of exposure; and (4) the military or operational implications of identified or projected low-level exposure effects. According to one DOD scientist, “Research can improve our understanding of the relationships among the many factors, such as effects, time of onset of effects, duration of effects, concentration, duration of exposure, dosage, and dose.
Improved estimates of effects in humans resulting from exposure to chemical warfare agents are a requirement that has existed since World War I.” Consistent with that assessment, the Army’s Medical Research and Materiel Command is proposing a science and technology objective to establish a research program on the chronic effects of chemical warfare agent exposure. Because previous research efforts have emphasized the acute effects of high (battlefield-level) exposures, there is little information on the repeated or chronic effects of low-dose exposures. The Command’s research effort is in response to this lack of information and joint service requirements for knowledge of the effects on personnel in sustained operations in areas that may be chemically contaminated, thus creating the possibility of a continuous low-level exposure. Additionally, the Joint Service Integration Group has tasked a panel of experts to determine an accepted definition for low-level chemical warfare agent exposure. The panel has proposed a series of research efforts to the Joint NBC Defense Board to analyze the relationships among dose, concentration, time, and effects for the purpose of determining safe exposure levels for sustained combat operations. DOD has funded two National Academy of Sciences studies to support the development of a long-term strategy for protecting U.S. military personnel deployed to unfamiliar environments. These studies will provide guidance for managing health and exposure issues, including infectious agents; vaccines; drug interactions; stress; and environmental and battlefield-related hazards, such as chemical and biological agents. One study is assessing approaches and technologies that have been or may be used by DOD in developing and evaluating equipment and clothing for physical protection and decontamination. The assessment is to address the efficacy of current policies, doctrine, and training as they relate to potential exposures to chemical warfare agents during deployments. The second study is assessing technology and methods for detection and tracking of exposures to a subset of harmful agents. This study will assess tools and methods to detect, monitor, and document exposures to deployed personnel. These studies do not address issues of risk management; those will be the focus of a third study. Although DOD and congressional interest concerning the effects of low-level chemical exposure increased after events in the 1991 Gulf War, relatively limited funding has actually been expended or programmed in DOD’s RDT&E programs in recent years to address issues associated with low-level chemical exposure on U.S. military personnel. However, DOD has developed proposals to fund two low-level research efforts, which are under consideration for implementation. For fiscal years 1996 through 2003, DOD has been appropriated in excess of $2.5 billion for chemical and biological defense RDT&E programs. (See app. V for general DOD chemical and biological program funding allocations and trends for fiscal years 1990 through 2003). Fiscal year 1996 was the first time that RDT&E funding for all of DOD’s chemical and biological defense programs was consolidated into six defensewide program element funding lines. These program elements are (1) basic research, (2) applied research, (3) advanced technology development, (4) demonstration and validation, (5) engineering and manufacturing development, and (6) management support. 
Table 2 shows total actual and projected research funding by RDT&E program element for fiscal years 1996 through 2003. Three low-level research efforts—totaling about $10 million—were included in DOD’s fiscal year 1997 and 1998 chemical and biological defense RDT&E programs. These research efforts represented about 1.5 percent of the approximately $646 million in combined obligational authority authorized for chemical and biological defense RDT&E for these 2 fiscal years. Funding for the largest of the three—an $8-million effort in the fiscal year 1998 program that dealt with chemical sensor enhancements—was provided by the Conference Committee on DOD Appropriations. Another fiscal year 1998 effort—costing almost $1.4 million—involved the development of sensitive biomarkers of low-dose exposure to chemical agents. The remaining effort, included in the fiscal year 1997 program, developed in vitro and in vivo model systems to evaluate the possible effects of low-dose or chronic exposures to chemical warfare agents. This project cost approximately $676,000. DOD officials told us that these projects were not part of a structured program to determine the performance and health effects of low-level exposures. However, two elements within DOD have proposed multiyear research programs on low-level issues. DOD has requested funding for the U.S. Army Medical Research and Materiel Command’s science and technology objective on the chronic effects of chemical warfare agent exposure. If approved, this research program is projected to receive an average of about $2.8 million annually in research funds for fiscal years 1999 through 2003. The purpose of this undertaking would be to investigate the effects of low-dose and chronic exposure to chemical agents to (1) gain a better understanding of the medical effects of such exposure, (2) provide tools for a medical assessment of personnel, and (3) develop protocols for subsequent protection and treatment. Figure 2 reflects DOD’s programmed RDT&E funding for fiscal years 1999 through 2003 and shows the proposed science and technology objective in relation to other research program efforts. Another research program involving low-level chemical exposures will be proposed in the near future to the NBC Defense Board for approval. A panel of experts, tasked by DOD to study the issue of defining low-level and chronic chemical exposure, has proposed a series of research efforts to be undertaken over the next several years to address the definitional dilemma surrounding this issue. Funding levels for this effort have not been established. DOD’s current NBC policy and doctrine do not address exposures of U.S. troops to low levels of chemical warfare agents on the battlefield. NBC defense doctrine is focused on ensuring mission accomplishment through the prevention of acute lethal and incapacitating effects of chemical weapons and is not designed to maximize force protection from exposure to clinical and subclinical doses. Moreover, DOD has no chemical defense research plan to evaluate the potential performance effects of low-level exposures or the implications they may have for force protection. 
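As a rough check of the low-level funding figures cited earlier in this section, the three low-level research efforts sum to roughly $10 million against the approximately $646 million in combined chemical and biological defense RDT&E obligational authority for fiscal years 1997 and 1998. The short sketch below is purely illustrative (the variable names are ours, not DOD's); it simply reproduces that arithmetic.

```python
# Illustrative back-of-the-envelope check of the funding shares cited above.
# All figures are in millions of dollars and come from the report text.
low_level_efforts = {
    "chemical sensor enhancements (FY 1998)": 8.0,
    "biomarkers of low-dose exposure (FY 1998)": 1.4,
    "in vitro/in vivo low-dose models (FY 1997)": 0.676,
}
combined_rdte_fy97_98 = 646.0  # combined obligational authority, fiscal years 1997-1998

total_low_level = sum(low_level_efforts.values())
share = total_low_level / combined_rdte_fy97_98

print(f"Low-level efforts total: ${total_low_level:.2f} million")  # about $10.08 million
print(f"Share of RDT&E authority: {share:.1%}")  # about 1.6 percent, consistent with "about 1.5 percent"
```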
Even though research funded by DOD and others has demonstrated adverse effects in animal studies, the literature does not adequately address the breadth of potential agents; the combinations of agents either in isolation or in combination with battlefield contaminants; the chronic effects; animal-human extrapolation models; or the operational implications of the measured adverse impacts. We recommend that the Secretary of Defense develop an integrated strategy for comprehensively addressing force protection issues resulting from low-level chemical warfare agent exposures. The strategy should address, at a minimum, the desirability of an OSD policy on the protection of troops from low-level chemical warfare agent exposures; the appropriateness of addressing low-level chemical warfare agent exposures in doctrine; the need for enhanced low-level chemical warfare agent detection, identification, and protection capabilities; the research needed to fully understand the risks posed by exposures to low levels of chemical warfare agents, in isolation and in combination with other contaminants that would be likely found on the battlefield; and the respective risks, costs, and benefits of addressing low-level chemical warfare agent exposures within DOD’s chemical and biological defense program. In oral comments on a draft of this report, DOD concurred with our recommendation that the Secretary of Defense develop a “low-level” strategy but disagreed with the implied priority order. DOD stated that it is also concerned with force protection and the possible impact that low-level chemical agent exposures might have on a service member’s health and emphasized that a valid data-based risk assessment must serve as the foundation for any change in policy or doctrine. In addition, DOD provided us with updated plans and proposals to develop an overall requirements and program strategy for low-level chemical agent monitoring. DOD agreed that the absence of an OSD policy or a DOD doctrine on low-level exposures is partially attributable to the absence of a consensus within DOD on the meaning of low level. However, DOD expressed concern that we did not assert a working definition of low level as it might apply to a force projection or battlefield scenario. DOD disagreed with our selection of examples of low-level research illustrated in table 1, stating that the studies were more appropriately categorized as “low dose” rather than low level. Finally, DOD believed that we misinterpreted the report, Assessment of the Impact of Chemical and Biological Weapons on Joint Operation in 2010, by failing to understand that the asymmetrical application of chemical agents does not equate to “low level” for the purpose of producing casualties, but rather for the purpose of disrupting operations by the mere detectable presence of these agents at levels that may have no medical effects. In our recommendation, we listed a number of elements that should be addressed in developing such a strategy, but we purposely did not articulate a priority order beginning with research. Rather, we advocate that DOD develop a strategy to analyze policy, doctrine, and requirements based on existing information and to reassess policy, doctrine, and requirements as the results of a low-level research program are reported. We did not define low level in our report because the definition requires an interpretation of both exposure effects data and military risk and performance data—analyses best performed by DOD. 
Furthermore, because a consensus on the meaning or definition of low level is lacking, we find no basis for DOD's characterization of the research examples in table 1 of the report as "low dose," rather than "low level." Regarding the 2010 Study, we disagree with DOD's statement that there may not be medical effects from low-level chemical agent exposures. Rather, our work shows that low-level exposure can have medical effects that not only result in casualties but also disrupt operations. The plan of action and low-level toxicological and technical base efforts provided by DOD did not fully address the strategy that the report discusses. The strategy will require a plan of action incorporating medical and tactical analyses, as well as the nonmedical research and development projects described by DOD. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to other congressional committees and the Secretary of Defense. We will also make copies available to others on request. If you have any questions concerning this report, please call me at (202) 512-3092. Major contributors to this report were Sushil Sharma, Jeffrey Harris, Foy Wicker, and Betty Ward-Zukerman. The scope of our study was limited to chemical defense and low-level exposures that may cause adverse effects on performance. To determine the extent to which low-level exposures are addressed in doctrine, we reviewed Department of Defense (DOD) documents and interviewed agency officials. We asked questions designed to elicit the treatment of low-level issues within the nuclear, biological, and chemical (NBC) doctrinal architecture (i.e., Joint Publication 3-11; field manuals; training circulars; and tactics, techniques, and procedures). After determining that low-level issues were not addressed in the war-fighting doctrine, we asked representatives of the doctrinal, intelligence, and research communities why low-level issues were not addressed and under what circumstances they would be addressed. To identify research on the performance effects of low-level exposure to chemical warfare agents, we reviewed relevant government and academic research (published and unpublished) and interviewed researchers within and outside of DOD. To identify relevant literature, we interviewed DOD officials currently responsible for prioritizing chemical and biological defense research needs. We also interviewed DOD researchers at the Army's primary center of medical chemical defense research and development (the Army Medical Research Institute of Chemical Defense) and nonmedical chemical research and development (the Edgewood Research, Development, and Engineering Center at the Aberdeen Proving Ground). We interviewed staff at the laboratory used by the Air Force to study low-level exposure effects in animals before the Army was designated as executive agent for chemical defense and the Air Force's effort ceased. We sought historic programmatic information from the Naval Medical Research and Development Command, which funded portions of the Air Force's low-level animal studies. We monitored ongoing DOD-funded Gulf War illnesses research that addresses potential long-term health effects from low-dose or chronic chemical exposures. Last, we discussed current research with leading academics in the field.
We reviewed the compilation of relevant low-level research literature to characterize coverage (variety and combinations of agents or contaminants), methodologies employed, and effects observed. These observations were discussed and validated in our interviews with researchers in chemical defense, both within and outside of DOD. In addition, we employed a research consultant from academia to review the literature to substantiate both the comprehensiveness of our compilation and the validity of our conclusions. To determine what portion of the chemical defense budget specifically addresses low-level exposures, we reviewed DOD documents and interviewed DOD program officials. We examined DOD planning and budget documents, including the NBC defense annual reports to Congress and joint service chemical and biological defense program backup books for budget estimates. In addition, we analyzed chemical defense-related data for fiscal years 1991 through 1999 contained in DOD's Future Years Defense Program—the most comprehensive and continuous source of current and historical defense resource data—to identify annual appropriation trends and ascertain the level of funds programmed and obligated for research, development, test, and evaluation (RDT&E), as well as procurement, and the destruction of chemical munitions. We interviewed DOD officials to verify our observations about low-level efforts and to obtain information about potential programs currently being developed to expand DOD's efforts to understand the effects of chronic and low-level exposure to chemical warfare agents on military personnel. We contacted the following organizations: Armed Forces Radiobiological Research Institute, Bethesda, Maryland; Defense Intelligence Agency, Washington, D.C.; DOD Inspector General, Washington, D.C.; Department of Energy, Washington, D.C.; Edgewood Research, Development, and Engineering Center, Aberdeen Proving Ground, Maryland; Israel Institute for Biological Research, Ness-Ziona, Israel; Joint Program Office, Biological Defense, Falls Church, Virginia; National Ground Intelligence Center, Charlottesville, Virginia; National Research Council, Washington, D.C.; Office of the Secretary of Defense, Washington, D.C.; Oregon Health Sciences University, Portland, Oregon; University of Texas Health Science Center at San Antonio, San Antonio, Texas; University of Texas Southwestern Medical Center, Dallas, Texas; Air Force Armstrong Laboratory, Brooks Air Force Base, Texas; Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio; Army Chemical School, Fort McClellan, Alabama; Army Medical Research and Materiel Command, Frederick, Maryland; Army Medical Research Institute of Chemical Defense, Aberdeen Proving Ground, Maryland; Navy Bureau of Medicine and Surgery, Washington, D.C.; and Walter Reed Army Institute of Research, Washington, D.C. We performed our review from September 1997 to May 1998 in accordance with generally accepted government auditing standards. The institutional structure and responsibilities for NBC defense research, requirements, and doctrine derive from provisions in the National Defense Authorization Act for Fiscal Year 1994. The act directed the Secretary of Defense to assign responsibility for overall coordination and integration of the chemical and biological program to a single office within the Office of the Secretary of Defense.
The legislation also directed the Secretary of Defense to designate the Army as DOD's executive agent to coordinate chemical and biological RDT&E across the services. The Joint NBC Defense Board, which is subordinate to the Under Secretary for Acquisition and Technology, provides oversight and management of the NBC defense program within DOD. The NBC Board approves joint NBC requirements; the joint NBC modernization plan; the consolidated NBC defense program objective memorandum; the joint NBC research, development, and acquisition plan; joint training and doctrine initiatives; and the joint NBC logistics plan. The Joint Service Integration Group and the Joint Service Materiel Group serve as subordinates to the NBC Board and execute several of its functions. Both groups are staffed with representatives from each of the services. The Joint Service Integration Group is responsible for joint NBC requirements, priorities, training, doctrine, and the joint modernization plan. The Joint Service Materiel Group is responsible for joint research, development, and acquisition; logistics; technical oversight; and sustainment. These two groups and the NBC Board are assisted by the Armed Forces Biomedical Research Evaluation Management Committee, which provides oversight of chemical and biological medical defense programs. The Committee is co-chaired by the Assistant Secretary of Defense for Health Affairs and the Director, Defense Research and Engineering. Figure III.1 illustrates the relationships among the various organizations responsible for NBC defense. Severe exposure to nerve agents can produce loss of consciousness, convulsions, flaccid paralysis (lack of muscle tone and an inability to move), and apnea (transient cessation of respiration). This appendix provides general information on the funding trends for DOD's Chemical and Biological Defense Program for fiscal years 1990-97 and 1998-2003. Funding is shown in four categories: disposal, which includes the costs associated with the chemical stockpile disposal program; RDT&E; procurement; and operations and maintenance, including the costs for military personnel. After the end of the Cold War, DOD funding for chemical and biological programs increased from about $566 million in fiscal year 1990 to almost $1.5 billion in fiscal year 1997. These funds include all military services and the chemical munitions destruction program. Adjusted for inflation, the total program funding has more than doubled (see fig. V.1) over that period and is programmed to continue growing—peaking in fiscal year 2002 with a total obligational authority in excess of $2.3 billion (see fig. V.2). The following glossary terms are used in this report: anticholinesterase agent (agent that inhibits the enzyme acetylcholinesterase); apnea (transient cessation of respiration); clinical signs (symptoms as observed by a physician); cognition (process based on perception, memory, and judgment); dose effects (effects resulting from a specific unit of exposure); dyspnea (difficult or labored respiration); effluent (waste material discharged into the environment); flaccid paralysis (lack of muscle tone and an inability to move); Gy (gray, a unit of radiation); kg (kilogram); LD50 (median lethal dose); mg (milligram); miosis (constriction of the pupil of the eye); neurotoxins (toxins that exert direct effects on nervous system function); organophosphates (family of chemical compounds that inhibit cholinesterase and can be formulated as pesticides and nerve agents); prophylaxis (measures designed to preserve health and prevent the spread of disease); rhinorrhea (nasal secretions); subclinical effects (manifestations of an exposure that are so slight as to be unnoticeable or not demonstrable); µg (microgram); and vesicant (agent that produces vesicles or blisters).
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) approach for addressing U.S. troop exposures to low levels of chemical warfare agents during the Gulf War, focusing on: (1) the extent to which the DOD doctrine addresses exposures to low levels of chemical warfare agents; (2) the extent to which research addresses the performance and health effects of exposures to low levels of chemical warfare agents, either in isolation or combination with other agents and contaminants that would be likely found on the battlefield; and (3) the portion of resources in DOD's chemical and biological defense research, development, test, and evaluation (RDT&E) program explicitly directed at low-level chemical warfare agent exposures. GAO noted that: (1) DOD does not have an integrated strategy to address low-level exposures to chemical warfare agents; (2) it has not stated a policy or developed a doctrine on the protection of troops from low-level chemical exposures on the battlefield; (3) past research indicates that low-level exposures to some chemical warfare agents may result in adverse short-term performance and long-term health effects; (4) DOD has no chemical defense research program to determine the effects of low-level exposures; (5) less than 2 percent of the RDT&E funds in DOD's chemical and biological defense program have been allocated to low-level issues in the last 2 fiscal years; (6) DOD's nuclear, biological, and chemical (NBC) doctrine is focused on mission accomplishment by maximizing the effectiveness of troops in a lethal NBC environment; (7) it does not address protection of the force from low-level chemical warfare agent exposures on the battlefield; (8) according to officials, DOD does not have a doctrine that addresses low-level exposures because there is no: (a) validated low-level threat; (b) consensus on the definition or meaning of low-level exposures; or (c) consensus on the effects of low-level exposures; (9) past research by DOD and others indicates that single and repeated low-level exposures to some chemical warfare agents can result in adverse psychological, physiological, behavioral, and performance effects that may have military implications; (10) the research, however, does not fully address the effects of low-level exposures to a wide variety of agents, either in isolation or combination with other agents and battlefield contaminants; chronic effects; reliability and validity of animal-human extrapolation models; the operational implications of the measured adverse impacts; and delayed performance and health effects; (11) during the last 2 fiscal years, DOD has allocated nearly $10 million, or approximately 1.5 percent of its chemical and biological defense RDT&E budget of $646 million, to fund research and development projects on low-level chemical warfare agent exposure issues; (12) however, these projects were not part of a structured DOD research program focused on low-level effects; and (13) DOD does not have a chemical and biological defense research program designed to evaluate the potential effects of low-level chemical warfare agent exposures, but funding is under consideration for two multiyear research programs addressing low-level effects.
You are an expert at summarizing long articles. Proceed to summarize the following text: The proliferation of weapons of mass destruction is one of the most serious dangers confronting the United States today and will likely continue to be so for the foreseeable future. Responsibility for thwarting this proliferation is shouldered by numerous federal agencies and by many individual departments within these agencies. Each of these departments brings a specific perspective, strength, and knowledge base to bear on an aspect of the large and complex proliferation problem. NNSA and its Nonproliferation and Verification Research and Development Program (R&D program) are key players in the United States' nonproliferation efforts. NNSA derives its important role from its unique understanding and expertise related to nuclear weapons and nuclear power, based in large measure on the world-class research, design, and engineering capabilities to be found in the multidisciplinary DOE national laboratories that conduct basic and applied research in many areas—from high-energy physics to advanced computing. As of May 31, 2002, the Nonproliferation and Verification R&D Program's 220 projects were in various developmental stages: from research conducted to develop an idea and assess the feasibility of producing a prototype, to field demonstrating a prototype prior to its transfer to an end user. Some examples of successful research projects conducted by NNSA's Nonproliferation and Verification R&D Program include the following: The Nuclear Explosion Monitoring research area developed ground-based technology for detecting in real time short-lived radioactive gases released during nuclear explosions, as well as satellite-based detectors that are sensitive to x-ray, gamma ray, and neutron emissions. The Proliferation Detection research area developed detection equipment that was fitted into an aircraft and flown over the World Trade Center site to monitor air samples for hazardous chemicals. The Chemical and Biological National Security research area developed a decontamination formulation that was used to assist the cleanup of congressional office buildings contaminated with anthrax, as well as equipment to detect the presence of chemical agents in the Washington, D.C., Metro subway system. Nearly 75 percent of the $1.2 billion that NNSA's R&D program was appropriated over the past 5 years was distributed to Los Alamos, Sandia, and Lawrence Livermore National Laboratories. According to program officials, these laboratories received the majority of the funding because most of the needed expertise for the program's projects is resident at these laboratories. The remaining funding was distributed to other DOE laboratories and facilities. NNSA's R&D program received a total appropriation of $322 million in fiscal year 2002, with the most funding spent on R&D of Proliferation Detection projects. From fiscal year 1998 through fiscal year 2002, $1.2 billion was appropriated to NNSA's R&D program. There was little annual variation in the program's funding between fiscal year 1998 and fiscal year 2001, averaging about $218 million per year. (See fig. 1.) However, the program received a significant increase in fiscal year 2002, and was appropriated about $323 million—including $78 million the program received in the $40 billion emergency supplemental appropriations act passed in the wake of the September 11, 2001, terrorist attacks.
Of the $1.2 billion appropriated to NNSA’s R&D program from fiscal year 1998 through fiscal year 2002, nearly 75 percent was distributed for R&D efforts at three of DOE’s nuclear weapons laboratories—Sandia and Los Alamos National Laboratories in New Mexico ($352.4 million and $313.6 million, respectively) and Lawrence Livermore National Laboratory in California ($228.2 million). (See table 1.) Fourteen percent was distributed to other national laboratories, including, among others, Pacific Northwest National Laboratory in Washington ($85.0 million) and the Oak Ridge National Laboratory and Y-12 Plant in Tennessee ($35.1 million). Six percent was distributed to universities, industry (including small businesses), and other governmental agencies. For example, nearly $240,000 was obligated to the U.S. Army for chemical and biological agent detection research. Finally, about 5 percent or $58.8 million has been spent from fiscal year 2000 through fiscal year 2002 to build the NISC at Los Alamos National Laboratory. This center (that NNSA estimates will cost a total of $63 million before construction is complete in fiscal year 2003) will provide consolidated office and laboratory space for nonproliferation R&D activities that are currently housed in 47 different structures—many of which, according to NNSA, are old and substandard—across the 43-square mile Los Alamos National Laboratory. In fiscal year 2002, R&D activities in the Proliferation Detection research area received 37 percent of the $323 million appropriated to NNSA’s R&D program. The Chemical and Biological National Security research area received 26 percent and the Nuclear Explosion Monitoring research area received 23 percent. (See fig. 2.) The Proliferation Detection research area received about $119 million in fiscal year 2002. The largest single amount ($11.2 million) was obligated to Lawrence Livermore National Laboratory for R&D of remote spectroscopy technology. While many of the specific applications and characteristics of this technology are classified, the systems developed are used by several defense and intelligence agencies in a variety of arms control and treaty verification activities. The technology developed is particularly useful in identifying chemical releases associated with proliferation activities. For example, these systems can be used to detect chemical signatures of agents released on a battlefield. One of these systems was also used at the World Trade Center site after the September 11, 2001, terrorist attacks to monitor for hazardous chemicals that might affect construction workers. Chemical and Biological National Security R&D efforts received $81.1 million in fiscal year 2002. Of this amount, $39.1 million was spent on demonstration programs of integrated chemical and biological detection systems. Examples of these systems include the chemical agent detection system installed in one station of the Washington, D.C., Metro subway system and a biological agent detection system that was deployed at the 2002 Winter Olympic Games in Salt Lake City, Utah. R&D of Nuclear Explosion Monitoring technologies received $75.6 million in fiscal year 2002. Of this amount, $54.5 million was spent primarily at Los Alamos and Sandia National Laboratories to provide satellite sensors for monitoring nuclear explosions in the earth’s atmosphere and in space. These sensors are installed on U.S. Air Force Global Positioning System satellites and on Defense Support Program early warning satellites. 
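As a rough check of the funding distribution figures cited above, the sketch below is purely illustrative (the variable names are ours, not NNSA's); it totals the three weapons laboratories' shares of the roughly $1.2 billion appropriated for fiscal years 1998 through 2002 and the fiscal year 2002 research-area shares of the roughly $323 million appropriation.

```python
# Illustrative check of the funding distribution figures cited above (dollars in millions).
five_year_total = 1200.0  # about $1.2 billion appropriated, fiscal years 1998-2002

weapons_labs = {"Sandia": 352.4, "Los Alamos": 313.6, "Lawrence Livermore": 228.2}
labs_total = sum(weapons_labs.values())
print(f"Three weapons labs: ${labs_total:.1f} million, "
      f"or {labs_total / five_year_total:.0%} of the 5-year total")  # about 75 percent

fy2002_total = 323.0  # about $323 million appropriated in fiscal year 2002
fy2002_areas = {
    "Proliferation Detection": 119.0,
    "Chemical and Biological National Security": 81.1,
    "Nuclear Explosion Monitoring": 75.6,
}
for area, amount in fy2002_areas.items():
    # Prints roughly 37, 25, and 23 percent, consistent with the shares reported above.
    print(f"{area}: {amount / fy2002_total:.0%} of the fiscal year 2002 appropriation")
```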
The remaining funds in this research area were spent developing and installing ground-based sensors for monitoring nuclear explosions in the atmosphere, underground, and underwater and for developing computer software used by the operator of the U.S. system for monitoring nuclear explosions—the Air Force Technical Applications Center—to analyze data obtained from these sensors. In contrast to the Nuclear Explosion Monitoring research area, the Proliferation Detection and the Chemical and Biological National Security research areas lack a process to identify users’ needs and do not have sufficient information to oversee project progress. For these latter two research areas, input from specific users is often not sought prior to funding research projects because the research in these two areas is, in many cases, considered to be long-term and the feasibility of the resulting technology is usually unknown. In addition, although required to have project life-cycle plans and quarterly reports that contain detailed information on a project’s time frames, milestones, users, and deliverables, we found that many of these plans and reports for the two research areas lacked these data. Furthermore, NNSA’s R&D program management information system is not designed to capture whether projects are on time or within budget, eliminating an important tool that program managers could use to monitor their projects. In the Nuclear Explosion Monitoring research area, specific R&D projects originate in a classified presidential directive that sets broad guidelines for a U.S. system for monitoring nuclear explosions. These broad guidelines are then refined through an interagency process that includes agencies of the Department of Defense and the intelligence community to leverage resources and prevent duplication. Specific requirements for technologies are then transmitted to the Nuclear Explosion Monitoring research area and specific statements of work and memorandums of understanding are signed between the research area and users of the technology—primarily the Air Force Technical Applications Center—that specify each party’s responsibilities. The Air Force Technical Applications Center has the operational responsibility for ground-based and satellite-based sensor systems that provide technical data for verification of nuclear test ban treaties and nuclear explosion monitoring. The Nuclear Explosion Monitoring research area in NNSA’s R&D program is the principal developer of technology for the Air Force Technical Applications Center. As such, the two parties enjoy a close relationship. This relationship has been facilitated by the fact that some of the test ban treaties the Center is responsible for monitoring—such as the 1974 Threshold Test Ban Treaty between the then Soviet Union and the United States that prohibited underground nuclear explosions above a yield of 150 kilotons—contain detailed monitoring and verification procedures. In addition, operational requirements documents for the U.S. system for monitoring nuclear explosions also contain detailed technical guidelines for researchers conducting R&D for NNSA’s program to follow. In the Proliferation Detection and the Chemical and Biological National Security research areas, the process for identifying users’ needs and developing R&D projects differs from Nuclear Explosion Monitoring. 
Instead of beginning with formal, detailed requirements, projects in these research areas often are of a more exploratory nature, requiring several years of work before usable technologies are mature and ready for real-world application. User input is often not sought prior to funding such research because, according to program managers and national laboratory officials we spoke with, users are often focused on their immediate operational needs and are unable to define requirements for technology whose feasibility is still unknown. In February 2000 and again in March 2002, advisory committees to NNSA reported that the diverse environment of users—such as the federal government, the intelligence community, law enforcement, and others—makes the task of transferring the knowledge and technology developed by the NNSA R&D program especially challenging. To maximize the prospects for successful transfer, the advisory committees recommended that communications with potential users should be opened as early as possible and proceed through all phases of the work (research, development, and demonstration). According to the advisory committees, it is important that in the earliest phases of concept formulation, prospective users be made aware of the potential technological and scientific advances. In addition, uncertainties need to be communicated as well to minimize surprises. The February 2000 advisory committee report recognized the need for exploratory projects designed to see whether a technical idea with a plausible application to a nuclear, chemical, or biological nonproliferation mission is feasible. In these cases, seeking input from a user of the technology might not be necessary until technical feasibility has been proven. However, the advisory committee also reported that, in general, users should be involved at the earliest stages of the R&D process and guidelines should be established to define when exceptions to this are allowed. In addition, involving users at such an early stage may achieve unexpected benefits. For example, the March 2002 advisory committee report notes that "brainstorming with potential end-users can sometimes lead to innovative ideas for new technologies." In response to the February 2000 advisory committee report, NNSA's R&D program reported that it recognized the importance of involving potential end users of the technology at the earliest date and that it would continue to emphasize that relationship. Part of the Proliferation Detection research area—the former Deterring Proliferation research area—has begun within the past year to establish a process of regular project reviews with user participation. Under this process, program managers and potential users conduct regular reviews of each project before key decisions are made, such as whether to proceed from exploratory research into product development. The reviews examine how well the project is linked to user needs, the strength of the researchers' scientific or technical approach, and the researchers' ability to carry out the project effectively and efficiently. Users are also involved in broader planning initiatives in this area. For example, program managers consulted with officials from the Department of Defense, Department of State, Coast Guard, Customs Service, and agencies of the intelligence community, among others, when preparing a "strategic outlook" for the research area as well as science and technology "roadmaps" that are intended to guide future R&D activities in this research area.
However, this system has not yet been adopted in the remainder of the Proliferation Detection research area—the projects conducting R&D of long-range detector technologies, for example—or in the Chemical and Biological National Security research area. Program officials told us that they are looking at ways of adopting the system across the entire program. To determine whether strategic and annual performance goals for effective and efficient use of resources are being met, standards for internal control in the federal government require that program managers have access to relevant, reliable, and timely operational and financial data. In 1999, the National Research Council examined ways to improve project management at DOE. Specifically, the Research Council reported that DOE’s project documentation was not up to the standards of the private sector and other government agencies. The Research Council recommended that DOE should mandate a reporting system that provides the data necessary for each level of management to track and communicate the cost, schedule, and scope of a project. To monitor the progress of NNSA R&D projects by headquarters program managers, participating laboratories are required to submit, on an annual basis, project life-cycle plans. These plans are supposed to contain detailed statements of work that describe the project’s contributions to overall program goals, scientific and technical merit, and the specific tasks to be accomplished. In addition, laboratories are required to submit quarterly reports that indicate all projects’ progress to date, issues and problems encountered, milestones and schedules, and cost data. However, in the Proliferation Detection and the Chemical and Biological National Security research areas, these plans and reports are often missing these data, and the program management information system is not designed to track whether projects are on time or budget, eliminating an important tool that could be used to track projects, improve communications across the program, and provide transparency to other agencies and the Congress. Project life-cycle plans for the 10 projects funded in the Nuclear Explosion Monitoring research area in fiscal year 2002 all contain information on the project’s objectives and users of the technology. They also contain annual statements of work that detail time frames, milestones, and specific deliverables. Quarterly reports for projects in this research area detail project expenditures, progress in meeting milestones, and deliverables completed. Thus, program managers at headquarters have information to monitor projects in this research area and the primary user of these technologies—the Air Force Technical Applications Center—reports that time frames and milestones are routinely met. Detailed information to monitor project progress is more limited in the Proliferation Detection research area. Of the 124 projects funded in fiscal year 2002, over half of the projects’ life-cycle plans are missing information on potential users of the technology, time frames and milestones, and/or detailed statements of work that specify deliverables to be produced. For example, a project at Lawrence Livermore National Laboratory to detect nuclear materials in transit received $1.2 million in fiscal year 2002, but the project life-cycle plan for this project contained no information on users of the technology, the schedule of the project, or how the funds were to be expended. 
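To make the preceding internal-control discussion concrete, the sketch below illustrates the kind of per-project record that would capture the data elements the report says life-cycle plans and quarterly reports are required to contain (objectives, users, milestones, deliverables, and costs) and would let managers flag schedule or cost problems. It is a hypothetical illustration only; the field names are ours and do not describe NNSA's actual management information system.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Milestone:
    description: str
    due: date
    completed: date | None = None  # remains None until the milestone is reported complete

@dataclass
class ProjectRecord:
    """Hypothetical per-project record; field names are illustrative, not NNSA's."""
    name: str
    laboratory: str
    objectives: str
    intended_users: list[str]
    budget_millions: float
    spent_millions: float = 0.0
    milestones: list[Milestone] = field(default_factory=list)
    deliverables_completed: list[str] = field(default_factory=list)

    def behind_schedule(self, as_of: date) -> bool:
        # Behind schedule if any milestone is past due and not yet completed.
        return any(m.completed is None and m.due < as_of for m in self.milestones)

    def over_budget(self) -> bool:
        return self.spent_millions > self.budget_millions
```

With records like these rolled up by research area, a program office could report on a continuing basis which projects are on schedule and within budget, rather than relying on periodically updated plans and quarterly reports.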
In addition, many of the life-cycle plans make no distinction between users that potentially would receive the technology and users that are actually involved in the R&D process. Moreover, some projects’ life-cycle plans have not been recently updated to show the actual completion of project deliverables. For example, Sandia National Laboratory has received nearly $120 million since fiscal year 1993 to develop and demonstrate space-based imaging technology for nonproliferation treaty monitoring and other national security and civilian applications. However, its project life-cycle plan has not been updated with the dates deliverables were received or milestones that were accomplished since 1999. Project monitoring is even more difficult in the Chemical and Biological National Security research area. Rather than funding projects individually, as is done in the other research areas, annual funding for projects in this area is consolidated into a single allotment for each national laboratory conducting research. As a result, projects’ life-cycle plans and quarterly reports are consolidated into a single report encompassing all chemical and biological R&D activities at a specific laboratory. Obtaining project specific expenditure, time frame and milestone, and deliverable data from this consolidated report is difficult. As a result, officials from this research area were unable to provide us with even a list of their ongoing projects. According to the program manager for the Chemical and Biological National Security research area, this problem will be addressed when individual project reporting is implemented in fiscal year 2003. NNSA’s R&D program maintains a program-management information system to track the distribution of funding from NNSA headquarters to individual projects at the national laboratories. However, because project funding for chemical and biological R&D is consolidated into allotments for entire laboratories, financial information for individual projects in the Chemical and Biological National Security research area is not readily available. According to the program manager of the Chemical and Biological National Security research area, individual project financial information will be added to the project management information system in fiscal year 2003. Moreover, the system is not designed to capture on an individual-project, research-area, or programwide basis, whether individual projects are on time or within budget. While in some cases this information is available in projects’ life-cycle plans and quarterly reports, these documents are only updated periodically, and program managers lack a system that can provide, on a continuous basis, data on project expenditures and schedules. Instead, program managers rely on other means, such as personal interaction with project leaders at the national laboratories and other types of project records, to obtain this information. Officials from federal, state, and local agencies that use technology developed by NNSA’s R&D program, in general, found the technology useful and said that they had an effective relationship with the program. However, some questioned whether the program is achieving the right mix of long- and short-term research. DOE national laboratory officials told us that this conflict between short- and long-term priorities has created a gap in which the most important immediate needs of users may be going unaddressed in favor of an advanced technology that can only be delivered over the long-term. 
Of the 13 agencies we contacted, all have found the technology received from NNSA’s R&D program useful and told us that they enjoyed an effective working relationship with the program. For instance, the Navy Special Reconnaissance Program works with NNSA’s R&D program in the research and development of sophisticated imagery technology that is used on Navy aircraft deployed throughout the world. A Navy official said that this imagery technology is routinely used to collect critical intelligence for policy makers and that the Navy has a very effective relationship with NNSA. He told us that the Navy regards scientists in this program as the foremost experts on these complex systems and that similar efforts conducted by the private sector do not compare in terms of capability and quality. Similarly, Utah Department of Health officials said the biological detection equipment demonstrated by the R&D program at the 2002 Winter Olympics constituted an important tool in its counterterrorism efforts at the event. These officials told us that they especially appreciated that they were always treated as an important client by NNSA’s R&D program. For example, unlike many private vendors that approached the department with chemical and biological detection technology, NNSA’s R&D program was willing to share important validation data with the department to verify that the technology would perform as intended. Likewise, an official with the Washington Metropolitan Area Transit Authority indicated that he had been impressed by the collaborative work involving the R&D program and other federal agencies and considered this collaboration a model relationship between federal and local agencies. Other federal agencies that told us NNSA’s technologies are useful included the Department of State, Defense Intelligence Agency, Central Intelligence Agency, Air Force Technical Applications Center, Department of Transportation, and Federal Transit Administration. Some of these agencies also told us that they have been approached by the R&D program with technologies that they neither requested nor found particularly useful for their missions. Such comments were made by officials with the Department of State, Navy Special Reconnaissance Program, Office of the Assistant to the Secretary of Defense for Counter Proliferation Programs, Defense Intelligence Agency, and Washington Metropolitan Area Transit Authority. However, officials from these agencies also noted that, although the technologies were not requested or found useful for their missions, being approached by the program was useful. This is because the R&D program’s presentations helped them understand the capabilities of the program in the event that these technologies were needed in the future. Long-term R&D to develop capabilities to detect, prevent, and respond to terrorism using weapons of mass destruction is essential. However, some users questioned whether the program was achieving the right mix between long- and short-term research. Some said that, faced with the continuing threat of terrorists using weapons of mass destruction, NNSA’s R&D program needs to concentrate on communicating with and addressing the immediate needs of the user and first responder communities. 
For example, according to an official with the Washington Metropolitan Area Transit Authority, NNSA’s R&D program—along with other federal agencies conducting similar research—is not currently offering the Transit Authority assistance with its immediate need for post-attack chemical and biological decontamination technology tailored to a metropolitan subway system. An official with the Air Force Technical Applications Center stated that the focus of the R&D program needs to be on users’ immediate needs rather than long-term advanced research. This official added that the longer a project continues, the more likely that personnel changes or programmatic inefficiencies would limit opportunities for the eventual completion of the project and the successful transfer of technologies to users. Officials from NNSA’s R&D program disagreed, telling us that the program is better able to address short-term requirements only because it has been conducting advanced research on the concepts underlying technologies required by the users. Often, this type of advanced research is long-term in nature. Two officials with the Sandia and Los Alamos National Laboratories told us that this conflict between short- and long-term priorities has created a gap in which the most important immediate needs of users or highest risks may be going unaddressed in favor of an advanced technology that can only be delivered over the long-term. According to these officials, there is a disconnect between what the users and the laboratories believe is the laboratories’ mission. The laboratories believe that, by focusing on the long-term, the R&D program is able to anticipate users’ long-term needs and look beyond users’ immediate requirements. Users feel that they have urgent short-term needs that cannot wait for long-term development. According to a national laboratory official, the philosophy of the laboratories must change. This official indicated that research emphasis must be placed on those areas where the greatest risks exist, such as from chemical or biological attack. He strongly cautioned that, although long- term research is important, it is imperative that the usefulness of this research be clearly established in advance and as quickly as possible, given counterterrorism technology’s crucial importance in the current war against terrorism. To better set priorities and define its role in the post-September 11th counterterrorism R&D efforts, the director of NNSA’s R&D program said that he would welcome additional guidance from the Office of Homeland Security and is working to better “advertise” the program’s projects and capabilities to the Office of Homeland Security. We found that such advertisement has met with limited success. For instance, the President’s fiscal year 2003 homeland security budget did not discuss NNSA’s role in the research and development of detection technology for chemical and biological agents, although other federal efforts such as those conducted by the Department of Defense and the National Institutes of Health were specifically addressed. In addition, the fiscal year 2003 homeland security budget stated that DOE was not involved in bioterrorism research and development even though NNSA’s R&D program is requesting $35 million for bioterrorism research in its fiscal year 2003 budget. 
In our September 2001 report, we noted that federal R&D programs to combat terrorism are coordinated in a variety of ways, but this coordination is limited by a number of factors, raising the potential for duplication of efforts among different federal agencies. This limited coordination also raises the possibility that immediate needs may not be adequately addressed. For example, officials with the Utah Department of Health told us the federal community has only been responsive in providing technology to detect attacks and has not offered assistance in responding to an attack that would include tracking secondary exposure, population quarantine, decontamination, and cleanup. Therefore, we recommended in the September 2001 report that a national counterterrorism R&D strategy be developed with the participation of federal agencies and state and local authorities to reduce duplication and leverage resources. This strategy is especially important as the President and the Congress work toward the organization of a new Department of Homeland Security that, as currently envisioned, will assume leadership of federal counterterrorism R&D activities. As proposed, the Chemical and Biological National Security research area and the nuclear smuggling and homeland security activities of the Proliferation Detection research area would be transferred from NNSA to the proposed Department of Homeland Security. NNSA’s Nonproliferation and Verification R&D Program has developed numerous successful technologies that aid the defense and intelligence communities and is an important player in the current U.S. effort to combat terrorism. While users are generally pleased with the technology the program has provided them, the program’s management information system for monitoring its projects—especially for the Proliferation Detection and the Chemical and Biological National Security research areas—does not provide adequate information to monitor project progress. Standards for internal control in the federal government require that important information such as progress in meeting milestones, costs, user feedback, and deliverables needs to be collected and made available more systematically to program managers and to external stakeholders such as the Congress. Improved project life-cycle plans, quarterly reports, and information systems that track project data could be useful for program managers to monitor the projects in their research areas and to better communicate project progress to users and to other agencies conducting R&D. It is important for the program to seek a balance between addressing the immediate R&D needs of users and looking beyond the horizon at advanced technologies for the future. Some users are concerned that the program’s focus is on long-term research. As a result, some feel that the most important immediate risks may be ignored in favor of long-term research activities being conducted at the national laboratories. While we agree that maintaining basic research capabilities is critical, the urgency of the current war on terrorism requires that NNSA’s R&D program clarify its role in relation to other agencies conducting R&D, systematically involve potential technology users in the R&D process, and seek a balance between short- and long-term activities. The ability of the program to successfully transfer new technologies to users could be strengthened by giving potential users opportunities to participate at every stage of the research and development process. 
Communicating with technology users and receiving clear guidance from the Office of Homeland Security—or the Department of Homeland Security, if established—on what the highest priorities are and how NNSA and the DOE national laboratories can play a role in addressing those priorities could assist program managers in their efforts to prioritize and plan future R&D work. To improve the Nonproliferation and Verification R&D Program’s management of its R&D efforts, we recommend that the Administrator of NNSA take the following actions: Ensure that all of the Nonproliferation and Verification R&D Program’s projects’ life-cycle plans and quarterly reports contain complete data on project objectives, progress in meeting milestones, user feedback, funding, and deliverables and upgrade the program’s project management information system to track all of this information to enhance program management by providing timely data to program managers and assist communications with users and other agencies conducting R&D. Work with the Office of Homeland Security (or the Department of Homeland Security, if established) to clarify the Nonproliferation and Verification R&D Program’s role in relation to other agencies conducting counterterrorism R&D and to achieve an appropriate balance between short-term and long-term research. In addition, to improve the program’s ability to successfully transfer new technologies to users, the program should, in cooperation with the Office of Homeland Security, allow users opportunities to provide input through all phases of R&D projects. We provided NNSA with a draft copy of this report for its review and comment. NNSA’s written comments are presented in appendix II. NNSA agreed with the draft report’s findings and recommendations. Specifically, NNSA said that it will apply the technical capabilities of NNSA and the national laboratories to work with agencies using technologies developed by the Nonproliferation and Verification R&D Program to focus on users’ short-term operational mission requirements while maintaining the program’s ability to meet users’ long-term needs. In addition, NNSA said that it is in the process of updating the program’s management information system and that its efforts to implement a corporate planning, programming, budgeting, and evaluation system will help address some of the program’s project management issues. We conducted our work from October 2001 through July 2002 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is presented in appendix I. We are sending copies of this report to the Administrator, NNSA; the Secretary of Energy; the Secretary of Defense; the Secretary of State; the Director of Central Intelligence; the Director, Office of Homeland Security; the Director, Office of Management and Budget; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
This report examines the (1) funding the program received over the past 5 years and the program’s distribution of this funding to the national laboratories and, for fiscal year 2002, throughout its 3 research areas; (2) extent to which the program identifies users’ needs and monitors project progress; and (3) views of federal, state, and local agencies of the usefulness of program-developed technology, particularly in light of heightened homeland security concerns following September 11, 2001. To determine the amount of funding received by the National Nuclear Security Administration’s (NNSA) research and development (R&D) program from fiscal year 1998 through fiscal year 2002 and the program’s distribution of that funding to the national laboratories in the field, we examined each of the research area’s financial plans, quarterly project reports, and project life-cycle plans. In addition, we queried the R&D program’s project management information system for detailed information on each project in the R&D program. We also examined the R&D program’s funding projections for fiscal year 2003 and analyzed NNSA’s Future-Years Nuclear Security Program report to the Congress, dated March 2002, which shows funding estimates for fiscal years 2003 through 2007. We further reviewed the Office of Homeland Security’s fiscal year 2003 budget report that describes the level of funding various federal agencies, including NNSA, will be requesting to combat domestic terrorism. To determine the extent to which the R&D program identifies users’ needs and monitors project progress, we analyzed data from several different sources, including reports and memorandums generated by the R&D program office, independent reviews done on the R&D program by NNSA advisory committees, and procedures used in selecting specific R&D program projects for funding. With regard to R&D program office reports and memorandums, we reviewed, among other things, the NNSA Strategic Plan, dated February 2002, and strategic plans prepared by the Nuclear Explosion Monitoring area, dated January 2002; Chemical and Biological National Security research area, dated spring of 2000; and Deterring Proliferation area, dated December 2001. The Proliferation Detection research area had not yet prepared a strategic plan at the time of our review. In addition, we reviewed various memorandums outlining NNSA’s efforts to develop an integrated programming, planning, budgeting, and evaluation process. With regard to independent reviews done on the R&D program, we analyzed several specific studies. These analyses included the Institute for Defense Analysis’ study entitled The Organization and Management of the Nuclear Weapons Program, dated March 1997; the Department of Energy’s (DOE) Nonproliferation and National Security Advisory Committee’s review entitled DOE Research and Technology against the Threat of Weapons of Mass Destruction, dated February 2000; and the NNSA advisory committee’s report entitled Science & Technology in the NNSA Nonproliferation and Counterterrorism Programs, dated March 2002. 
To obtain the views of federal, state, and local agencies about the usefulness of the R&D program’s technology, we interviewed officials at the Department of Transportation, Office of Intelligence and Security; Department of State, Office of Technology and Assessments; Navy Special Reconnaissance Program; Defense Intelligence Agency; Central Intelligence Agency; United States Army Medical Research Institute of Infectious Diseases; Defense Threat Reduction Agency, Chemical and Biological Defense Directorate; Office of the Assistant to the Secretary of Defense for Counter Proliferation and Chemical and Biological Defense; Air Force Technical Applications Center; Federal Transit Administration; Utah Department of Health; Association of Public Health Laboratories, Infectious Disease Programs; and Washington Metropolitan Area Transit Authority, Counter-Terrorism Development. We also reviewed how the R&D program works in conjunction with other federal R&D programs by analyzing NNSA’s reports and statements, reports generated by other federal executive entities, and interviewing individuals who serve on interagency coordinating bodies. With respect to NNSA’s reports and statements, we analyzed NNSA’s Report to the Congress on the Organization and Operations of the National Nuclear Security Administration, dated February 25, 2002, and the statement by the Assistant Deputy Administrator for Nonproliferation Research and Engineering, NNSA, before the Senate Committee on Armed Services, Subcommittee on Emerging Threats and Capabilities, dated April 10, 2002. With regard to reports generated by other federal executive entities, we reviewed the Office of Management and Budget’s Fiscal Year 2001 Annual Report to Congress on Combating Terrorism and the Counterproliferation Program Review Committee’s report entitled Activities and Programs for Countering Proliferation and Nuclear, Biological, and Chemical Terrorism, dated October 2001. We also interviewed officials who serve on interagency coordinating bodies, including officials both within and outside NNSA. For instance, we discussed interagency coordination with the NNSA program managers for the Nuclear Explosion Monitoring and Proliferation Detection areas. We also discussed interagency coordination with officials at the Office of the Assistant to the Secretary of Defense for Counter Proliferation and Chemical and Biological Defense; Defense Threat Reduction Agency; and Department of State. We conducted our work from October 2001 through July 2002 in accordance with generally accepted government auditing standards.
The mission of the National Nuclear Security Administration's (NNSA) Nonproliferation and Verification Research and Development (R&D) Program is to conduct needs-driven research, development, testing, and evaluation of new technologies that are intended to strengthen the United States' ability to prevent and respond to nuclear, chemical, and biological attacks. In fiscal years 1998 through 2002, the Nonproliferation and Verification R&D program received an average of $218 million per year--a total of $1.2 billion. Nearly 75 percent of that total was distributed for R&D at three NNSA national laboratories. Two of the three research areas of the Nonproliferation and Verification R&D Program lack a formal process to identify users' needs, and the tools used to monitor project progress are inadequate. In terms of users, NNSA's role is to develop technologies for, and transfer them to, users in the federal government, the intelligence community, law enforcement, and others. The program requires that projects' life-cycle plans and quarterly reports contain detailed information on project time frames, milestones, users of technologies, and deliverables. Officials from federal, state, and local agencies that use the technology developed by NNSA's R&D program have found the technology useful, but some question whether the program is achieving the right mix of long-term and short-term research, especially after the terrorist attacks of September 11, 2001.
You are an expert at summarizing long articles. Proceed to summarize the following text: The U.S. export control system is about managing risk; exports to some countries involve less risk than to other countries and exports of some items involve less risk than others. Under United States law, the President has the authority to control and require licenses for the export of items that may pose a national security or foreign policy concern. The President also has the authority to remove or revise those controls as U.S. concerns and interests change. In 1995, as a continuation of changes begun in the 1980s, the executive branch reviewed export controls on computer exports to determine how changes in computer technology and its military applications should affect U.S. export control regulations. In announcing its January 1996 change to HPC controls, the executive branch stated that one goal of the revised export controls was to permit the government to tailor control levels and licensing conditions to the national security or proliferation risk posed at a specific destination. According to the Commerce Department, the key to effective export controls is setting control levels above the level of foreign availability of materials of concern. The Export Administration Act (EAA) of 1979 describes foreign availability as goods or technology available without restriction to controlled destinations from sources outside the United States in sufficient quantity and comparable quality to those produced in the United States so as to render the controls ineffective in achieving their purposes. Foreign availability is also sometimes associated with the indigenous capability of foreign sources to produce their own HPCs, but this meaning does not meet all the EAA criteria. The 1996 revision of HPC export control policy removed license requirements for most HPC exports with performance levels up to 2,000 MTOPS—an increase from the previous level of 1,500 MTOPS. For purposes of export controls, countries were organized into four “computer tiers,” with each tier after tier 1 representing a successively higher level of concern to U.S. security interests. The policy placed no license requirements on tier 1 countries, primarily Western European countries and Japan. Exports of HPCs above 10,000 MTOPS to tier 2 countries in Asia, Africa, Latin America, and Central and Eastern Europe would continue to require licenses. A dual-control system was established for tier 3 countries, such as Russia and China. For these countries, HPCs up to 7,000 MTOPS could be exported to civilian end users without a license, while exports at and above 2,000 MTOPS to end users of concern for military or proliferation of weapons of mass destruction reasons required a license. Exports of HPCs above 7,000 MTOPS to civilian end users also required a license. HPC exports to terrorist countries in tier 4 were essentially prohibited. The executive branch has determined that HPCs are important for designing or improving advanced nuclear explosives and advanced conventional weapons capabilities. It has identified high performance computing as having applications in such national defense areas as nuclear weapons programs, cryptology, conventional weapons, and military operations. According to DOD, high performance computing is an enabling technology for modern tactical and strategic warfare and is also important in the development, deployment, and use of weapons of mass destruction. 
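As a rough illustration only, and not part of the original GAO analysis, the tiered control thresholds described above can be summarized in a short Python sketch. The function name, structure, and simplifications are assumptions made for illustration; actual licensing decisions involved case-by-case review beyond these numeric thresholds.

```python
# Hypothetical sketch (illustration only) of the 1996 tier-based HPC export control
# thresholds summarized above, expressed in MTOPS. The function name and structure
# are assumptions; real licensing involved additional case-by-case review.

def license_outcome(tier: int, mtops: float, end_user_of_concern: bool = False) -> str:
    """Return a simplified licensing outcome under the 1996 policy as described above."""
    if tier == 1:      # e.g., Western Europe and Japan: no license requirements
        return "no license required"
    if tier == 2:      # Asia, Africa, Latin America, Central and Eastern Europe
        return "license required" if mtops > 10_000 else "no license required"
    if tier == 3:      # e.g., Russia and China: dual-control system
        if end_user_of_concern:    # military or weapons-proliferation end users
            return "license required" if mtops >= 2_000 else "no license required"
        return "license required" if mtops > 7_000 else "no license required"
    if tier == 4:      # terrorist countries: exports essentially prohibited
        return "prohibited"
    raise ValueError("unknown tier")

# Example: a 5,000-MTOPS machine to a tier 3 civilian end user needed no license,
# while the same machine to a military end user required one.
print(license_outcome(3, 5_000))                             # no license required
print(license_outcome(3, 5_000, end_user_of_concern=True))   # license required
```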
High performance computing has also played a major role in the ability of the United States to maintain and increase the technological superiority of its warfighting support systems. HPCs have particular benefits for military operations, such as battle management and target engagement, and they are also important in meeting joint warfighting objectives like joint theater missile defense, information superiority, and electronic warfare. However, the executive branch has not, with the exception of nuclear weapons, identified how, or at what performance levels, countries of concern may use HPCs to advance their own military capabilities. The House Committee on National Security in December 1997 directed DOE and DOD to assess the national security risks of exporting HPCs with performance levels between 2,000 and 7,000 MTOPS to tier 3 countries. In June 1998, DOE concluded its study on how countries like China, India, and Pakistan can use HPCs to improve their nuclear programs. According to the study, the impact of HPC acquisition depends on the complexity of the weapon being developed and, even more importantly, on the availability of high-quality, relevant test data. The study concluded that “the acquisition and application of HPCs to nuclear weapons development would have the greatest potential impact on the Chinese nuclear program—particularly in the event of a ban on all nuclear weapons testing.” Also, India and Pakistan may now be able to make better use of HPCs in the 1,000 to 4,000 MTOPS range for their nuclear weapons programs because of the testing data they acquired in May 1998 from underground detonations of nuclear devices, according to the DOE report. The potential contribution to the Russian nuclear program is less significant because of its robust nuclear testing experience, but HPCs can make a contribution to Russia’s confidence in the reliability of its nuclear stockpile. An emerging nuclear state is likely to be able to produce only rudimentary nuclear weapons of comparatively simple designs for which personal computers are adequate. We were told that DOD’s study on national security impacts has not been completed. We attempted to identify national security concerns over other countries’ use of HPCs for conventional weapons development. However, officials from DOD and other relevant executive branch agencies did not have information on how specific countries would use HPCs for missile, chemical, biological, and conventional weapons development. Based on EAA’s description of foreign availability, we found that subsidiaries of U.S. companies dominate overseas sales of HPCs. According to U.S. HPC exporters, there were no instances where U.S. companies had lost sales to foreign HPC vendors in tier 3 countries. The U.S. companies primarily compete against one another, with limited competition from foreign suppliers in Japan and Germany. We also obtained information on the capability of certain tier 3 countries to build their own HPCs and found it to be limited. Tier 3 countries are not as capable as the major HPC-supplier countries of producing machines in comparable quantity and of comparable quality and power. The only global competitors for general computer technology are three Japanese companies, two of which compete primarily for sales of high-end computers—systems sold in small volumes and performing at advanced levels. 
Two of the companies reported no exports to tier 3 countries, while the third reported some exports on a regional, rather than country, basis. One German company sells HPCs primarily in Europe but has reported a small number of sales of its HPCs over 2,000 MTOPS to tier 3 countries. One British company said it is capable of producing HPCs above 2,000 MTOPS, but company officials said it has never sold a system outside the European Union. Our findings in this regard were similar to those in a 1995 Commerce Department study of the HPC global market, which showed that American dominance prevailed at that time, as well. The study observed that American HPC manufacturers controlled the market worldwide, followed by Japanese companies. It also found that European companies controlled about 30 percent of the European market and were not competitive outside Europe. Other HPC suppliers also have restrictions on their exports. Since 1984, the United States and Japan have been parties to a bilateral arrangement, referred to as the “Supercomputer Regime,” to coordinate their export controls on HPCs. Also, both Japan and Germany, like the United States, are signatories to the Wassenaar Arrangement and have regulations that generally appear to afford levels of protection similar to U.S. regulations for their own and for U.S.-licensed HPCs. For example, both countries place export controls on sales of computers over 2,000 MTOPS to specified destinations, according to German and Japanese officials. However, foreign government officials said that they do not enforce U.S. reexport controls on unlicensed U.S. HPCs. A study of German export controls noted that regulatory provisions specify that Germany has no special provisions on the reexport of U.S.-origin goods. According to German government officials, the exporter is responsible for knowing the reexport requirements of the HPC’s country of origin. We could not ascertain whether improper reexports of HPCs occurred from tier 1 countries. Only one German company reported several sales to tier 3 countries of HPCs over 2,000 MTOPS, and U.S. HPC subsidiaries reported no loss of sales due to foreign competition. Officials of U.S. HPC subsidiaries explained that they primarily compete for sales in local markets with other U.S. HPC subsidiaries. None of these officials identified lost HPC sales to other foreign vendors in those markets. Further, none claimed to be losing sales to foreign vendors because of delays in delivery resulting from the subsidiary’s compliance with U.S. export control regulations. Because some U.S. government and HPC industry officials consider indigenous capability to build HPCs a form of foreign availability, we examined such capabilities for tier 3 countries. Based on studies and views of specialists, we found that the capabilities of China, India, and Russia to build their own HPCs still lag well behind those of the United States, Japan, and European countries. Although details are not well-known about HPC developments in each of these tier 3 countries, most officials said and studies show that each country still produces machines in small quantities and of lower quality and power than U.S., Japanese, and European computers. For example: China has produced at least two different types of HPCs, the Galaxy and Dawning series, both based on U.S. technology and each believed to have an initial performance level of about 2,500 MTOPS. Although China has announced its latest Galaxy’s capability at 13,000 MTOPS, U.S. 
government officials have not confirmed this report. India has produced a series of computers called Param, which are based on U.S. microprocessors and are believed by U.S. DOE officials to be capable of performing at about 2,000 MTOPS. These officials were denied access to test the computers’ performance. Over the past 3 decades Russia has endeavored to develop commercially viable HPCs using both indigenously developed and U.S. microprocessors, but has suffered economic problems and lacks customers. According to one DOE official, Russia has never built a computer running better than 2,000 MTOPS, and various observers believe Russia to be 3 to 10 years behind the West in developing computers. Commerce and DOD each provided one set of general written comments for both this report and our report entitled, Export Controls: Information On The Decision to Revise High Performance Computer Controls (GAO/NSIAD-98-196, Sept. 16, 1998). Some of those general comments do not relate to this report. Therefore, we respond to them in the other report. General comments relevant to this report are addressed below. Additional specific comments provided by Commerce on this report are addressed in appendix II. In its written comments, Commerce said that the report’s scope should be expanded to better reflect the rationale that led to the decision to change computer export control policy “from a relic of the Cold War to one more in tune with today’s technology and international security environment.” This report responds to the scope of work required by Public Law 105-85 (Nov. 18, 1997), that we evaluate the current foreign availability of HPCs and their national security implications. Therefore, this report does not focus on the 1995 decisions by the Department of Commerce. Our companion report, referred to above, assesses the basis for the executive branch’s revision of HPC export controls. Commerce commented that our analysis of foreign availability as an element of the controllability of HPCs was too narrow, stating that foreign availability is not an adequate measure of the problem. Commerce stated that this “Cold War concept” makes little sense today, given the permeability and increased globalization of markets. We agree that rapid technological advancements in the computer industry have made the controllability of HPC exports a more difficult problem. However, we disagree that foreign availability is an outdated Cold War concept that has no relevance in today’s environment. While threats to U.S. security may have changed, they have not been eliminated. Commerce itself recognized this in its March 1998 annual report to the Congress, which stated that “the key to effective export controls is setting control levels above foreign availability.” Moreover, the concept of foreign availability, as opposed to Commerce’s notion of “worldwide” availability, is still described in EAA and Export Administration Regulations as a factor to be considered in export control policy. Commerce also commented that the need to control the export of HPCs because of their importance for national security applications is limited. It stated that many national security applications can be performed satisfactorily on uncontrollable low-level technology, and that computers are not a “choke point” for military production. Commerce said that having access to HPCs alone will not improve a country’s military-industrial capabilities. 
Commerce asserted that the 1995 decision was based on a variety of research leading to the conclusion that computing power is a secondary consideration for many applications of national security concern. We asked Commerce for its research evidence, but it cited only a 1995 Stanford study used in the decision to revise HPC export controls. Moreover, Commerce’s position on this matter is not consistent with that of DOD. DOD, in its Militarily Critical Technologies List, has determined that high performance computing is an enabling technology for modern tactical and strategic warfare and is also important in the development, deployment, and use of weapons of mass destruction. High performance computing has also played a major role in the ability of the United States to maintain and increase the technological superiority of its war-fighting support systems. DOD has noted in its High Performance Computing Modernization Program annual plan that the use of HPC technology has led to lower costs for system deployment and improved the effectiveness of complex weapon systems. DOD further stated that as it transitions its weapons system design and test process to rely more heavily on modeling and simulation, the nation can expect many more examples of the profound effects that the HPC capability has on both military and civilian applications. Furthermore, we note that the concept of choke point is not a standard established in U.S. law or regulation for reviewing dual-use exports to sensitive end users for proliferation reasons. In its comments, DOD stated that our report inaccurately characterized DOD as not considering the threats associated with HPC exports. DOD said that in 1995 it “considered” the security risks associated with the export of HPCs to countries of national security and proliferation concern. What our report actually states is that (1) except for nuclear weapons, the executive branch has not identified how and at what performance levels specific countries of concern may use HPCs for national security applications and (2) the executive branch did not undertake a threat analysis of providing HPCs to countries of concern. DOD provided no new documentation to demonstrate how it “considered” these risks. As DOD officials stated during our review, no threat assessment or assessment of the national security impact of allowing HPCs to go to particular countries of concern and of what military advantages such countries could achieve had been done in 1995. In fact, an April 1998 Stanford study on HPC export controls also noted that identifying which countries could use HPCs to pursue which military applications remained a critical issue on which the executive branch provided little information. The Arms Control and Disarmament Agency (ACDA) provided oral comments on this report and generally agreed with it. However, it disagreed with the statement that “according to the Commerce Department, the key to effective export controls is setting control levels above the level of foreign availability of materials of concern.” ACDA stressed that this is Commerce’s position only and not the view of the entire executive branch. ACDA said that in its view (1) it is difficult to determine the foreign availability of HPCs and (2) the United States helps create foreign availability through the transfer of computers and computer parts. The Departments of State and Energy had no comments on a draft of this report. Our scope and methodology are in appendix I. 
Commerce’s and DOD’s comments are reprinted in appendixes II and III, respectively, along with an evaluation of each. We conducted our review between December 1997 and June 1998 in accordance with generally accepted government auditing standards. Please contact me on (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. Section 1214 of the Fiscal Year 1998 National Defense Authorization Act (P.L. 105-85) required that we review the national security risks relating to the sale of computers with a composite theoretical performance of between 2,000 and 7,000 millions of theoretical operations per second (MTOPS) to end users in tier 3 countries. Accordingly, we examined the executive branch’s actions to assess the risks of these sales. As required by the act, we also reviewed the foreign availability of computers with performance levels at 2,000 to 7,000 MTOPS and the impact on U.S. exporters of foreign sales of these computers to tier 3 countries. To determine the executive branch’s actions to assess or analyze the national security risks of allowing high performance computers (HPC) to be provided to countries of proliferation and military concern, we reviewed the Department of Defense (DOD) and the Department of Energy (DOE) documents on how HPCs are being used for nuclear and military applications. We discussed high performance computing for both U.S. and foreign nuclear weapons programs with DOE officials in Washington, D.C., and at the Lawrence Livermore, Los Alamos, and Sandia National Laboratories. We also met with officials of the DOD HPC Modernization Office and other officials within the Under Secretary of Defense for Acquisition and Technology, the Office of the Secretary of Defense, the Joint Chiefs of Staff, and the intelligence community to discuss how HPCs are being utilized for weapons design, testing and evaluation, and other military applications. Additionally, we met with DOD and Institute of Defense Analyses officials to discuss the basis for identifying high performance computing on the Militarily Critical Technologies List, a compendium of technologies identified by DOD as critical for maintaining U.S. military and technological superiority. We also reviewed intelligence reports on the use of high performance computing for developing weapons of mass destruction. To determine foreign availability of HPCs, we reviewed the Export Administration Act (EAA) and the Export Administration Regulations for criteria and a description of the meaning of the term. We then reviewed market research data from an independent computer research organization. We also reviewed lists, brochures, and marketing information from major U.S. and foreign HPC manufacturers in France (Bull, SA), Germany (Siemens Nixdorf Informationssysteme AG and Parsytec Computer GmbH), and the United Kingdom (Quadrics Supercomputers World, Limited), and met with them to discuss their existing and projected product lines. We also obtained market data, as available, from three Japanese HPC manufacturers. Furthermore, we met with government officials in China, France, Germany, Singapore, South Korea, and the United Kingdom to discuss each country’s indigenous capability to produce HPCs. We also obtained information from the Japanese government on its export control policies. 
In addition, we obtained and analyzed from two Commerce Department databases (1) worldwide export licensing application data for fiscal years 1994-97 and (2) export data from computer exporters provided to the Department for all American HPC exports between January 1996 and October 1997. We also reviewed a 1995 Commerce Department study on the worldwide computer market to identify foreign competition in the HPC market prior to the export control revision. To identify similarities and differences between U.S. and foreign government HPC export controls, we discussed with officials of the U.S. embassies and host governments information on foreign government export controls for HPCs and the extent of cooperation between U.S. and host government authorities on investigations of export control violations and any diversions of HPCs to sensitive end users. We also reviewed foreign government regulations, where available, and both foreign government and independent reports on each country’s export control system. To obtain information on the impact of HPC sales on U.S. exporters, we interviewed officials of American HPC firms and their subsidiaries and U.S. and foreign government officials. The following are GAO’s comments on the Department of Commerce’s letter, dated August 7, 1998. Commerce provided one set of written comments for this report and for a companion report, in which we discuss our analysis of the basis for the 1995 executive branch decision to revise export controls for HPCs. We addressed Commerce’s general comments relevant to this report on page 9 and its specific comments below. 1. Commerce stated that one key to effective export controls is setting control limits of items of concern above that which is widely available throughout the world. However, this wording is a change that contrasts with documentary evidence previously provided to us and to the Congress. In successive Export Administration Annual Reports, the Commerce Department stated that “the key to effective HPC export controls is setting control levels above foreign availability. . .” In addition, Commerce has provided us with no empirical evidence to demonstrate the “widespread availability” of HPCs, either through suppliers in Europe and Asia or a secondary market. 2. Commerce commented that a number of foreign manufacturers indigenously produce HPCs that compete with those of the United States. Our information does not support Commerce’s position on all of these manufacturers. For example, our visit to government and commercial sources in Singapore indicated that the country does not now have the capabilities to produce HPCs. We asked Commerce to provide data to support its assertion on foreign manufacturers, but it cited studies that were conducted in 1995 and that did not address or use criteria related to “foreign availability.” As stated in our report, we gathered data from multiple government and computer industry sources to find companies in other countries that met the terms of foreign availability. We met with major U.S. HPC companies in the United States, as well as with their overseas subsidiaries in a number of countries we visited in 1998, to discuss foreign HPC manufacturers that the U.S. companies considered as providing foreign availability and competition. We found few. Throughout Europe and Asia, U.S. computer subsidiary officials stated that their competition is primarily other U.S. computer subsidiaries and, to a lesser extent, Japanese companies. 
In addition, although requested, Commerce did not provide documentary evidence to confirm its assertions about the capabilities and uses of India’s HPCs. 3. Commerce stated that worldwide availability of computers indicates that there is a large installed base of systems in the tens of thousands or even millions. Commerce further stated that license requirements will not prevent diversion of HPCs unless realistic control levels are set that can be enforced effectively. While we agree, in principle, that increasing numbers of HPCs make controllability more difficult, as our recommendation in our companion report suggests, a realistic assessment of when an item is “uncontrollable” would require an analysis of (1) actual data, (2) estimated costs of enforcing controls, and (3) pros and cons of alternatives—such as revised regulatory procedures—that might be considered to extend controls. Commerce did not perform such an analysis before revising export controls in 1995. In addition, although we requested that Commerce provide documentary evidence for its statement that there is a large installed base of HPCs in the millions, it did not provide such evidence. 4. Commerce stated that most European governments do not enforce U.S. export control restrictions on reexport of U.S.-supplied HPCs. We agree that at least those European governments that we visited hold this position. However, although requested, Commerce provided no evidence to support its statement that the government of the United Kingdom has instructed its exporters to ignore U.S. reexport controls. The following is GAO’s comment on DOD’s letter dated July 16, 1998. 1. DOD provided one set of written comments for this report and for a companion report, in which we discuss our analysis of the basis for the 1995 executive branch decision to revise export controls for HPCs. We addressed DOD’s comments relevant to this report on page 8. Hai Tran was a major contributor to this report.
Pursuant to a legislative requirement, GAO reviewed the efforts by the executive branch to determine national security risks associated with exports of high performance computers (HPC). GAO noted that: (1) the executive branch has identified high performance computing as having applications in such national defense areas as nuclear weapons programs, cryptology, conventional weapons, and military operations; (2) however, except for nuclear weapons, the executive branch has not identified how and at what performance levels specific countries of concern may use HPCs for national defense applications--an important factor in assessing risks of HPC sales; (3) a Department of Energy study on nuclear weapons was completed in June 1998; (4) the study shows that nuclear weapons programs in tier 3 countries (which pose some national security and nuclear proliferation risks to the United States), especially those of China, India, and Pakistan, could benefit from the acquisition of HPC capabilities; (5) the executive branch has only recently begun to identify how specific countries of concern would use HPCs for nonnuclear national defense applications; (6) to date, a Department of Defense study on this matter begun in early 1998 is not completed; (7) with regard to foreign availability of HPCs, GAO found that subsidiaries of U.S. computer manufacturers dominate the overseas HPC market and they must comply with U.S. controls; (8) three Japanese companies are global competitors of U.S. manufacturers, two of which told GAO that they had no sales to tier 3 countries; (9) the third company did not provide data on such sales in a format that was usable for GAO's analysis; (10) two of the Japanese companies primarily compete with U.S. manufacturers for sales of high-end HPCs at about 20,000 millions of theoretical operations per second (MTOPS) and above; (11) two other manufacturers, one in Germany and one in the United Kingdom, also compete with U.S. HPC suppliers, but primarily within Europe; (12) only the German company has sold HPCs to tier 3 countries; (13) Japan, Germany, and the United Kingdom each have export controls on HPCs similar to those of the United States, according to foreign government officials; (14) because there is limited competition from foreign HPC manufacturers and U.S. manufacturers reported no lost sales to foreign competition in tier 3 countries, GAO concluded that foreign suppliers of HPCs had no impact on sales by U.S. exporters; (15) in addition, Russia, China, and India have developed HPCs, but the capabilities of their HPCs are believed to be limited; (16) thus, GAO's analysis suggests that HPCs over 2,000 MTOPS are not available to tier 3 countries without restriction from foreign sources.
You are an expert at summarizing long articles. Proceed to summarize the following text: As we report in our 2011 High-Risk Series update, Medicare remains on a path that is fiscally unsustainable over the long term. This fiscal pressure heightens the need for CMS to reform and refine Medicare’s payment methods to achieve efficiency and savings, and to improve its management, program integrity, and oversight of patient care and safety. CMS has made some progress in these areas, but many avenues for improvement remain. Since January 2009, CMS has implemented payment reforms for Medicare Advantage (Part C) and inpatient hospital, home health, and end-stage renal disease services. The agency has also begun to provide feedback to physicians on their resource use and is developing a value-based payment method for physician services that accounts for the quality and cost of care. Efforts to provide feedback and encourage efficiency are crucial because physician influence on use of other services is estimated to account for up to 90 percent of health care spending. In addition, CMS has taken steps to ensure that some physician fees recognize efficiencies when certain services are furnished together, but the agency has not targeted the services with the greatest potential for savings. Under the budget neutrality requirement, the savings that have been generated have been redistributed to increase physician fees for other services. Therefore, we recommended in 2009 that Congress consider exempting savings from adjusting physician fees to recognize efficiencies from budget neutrality to ensure that Medicare realizes these savings. Our examination of payment rates for home oxygen also found that although these rates have been reduced or limited several times, further savings are possible. As we reported in January 2011, if Medicare used the methodologies and payment rates of the lowest-paying private insurer of eight private insurers studied, it could have saved about $670 million of the estimated $2.15 billion it spent on home oxygen in 2009. Additionally, we found that Medicare bundles its stationary equipment rate payment for oxygen refills, but refills are required only for certain types of equipment, so a supplier may still receive payment for refills even if the equipment does not require them. Therefore, we suggested that Congress should consider reducing home oxygen payment rates and recommended that CMS remove payment for portable oxygen refills from payment for stationary equipment, and thus only pay for refills for the equipment types that require them. Our work has also shown that payment for imaging services may benefit from refinements. Specifically, CMS could add more front-end approaches to better ensure appropriate payments, such as requiring physicians to obtain prior authorization from Medicare before ordering an imaging service. CMS also has opportunities to improve the way it adjusts physician payments to account for geographical differences in the costs of providing care in different localities. We have recommended that the agency examine and revise the physician payment localities it uses for this purpose by using an approach that is uniformly applied to all states and based on the most current data. CMS agreed to consider the recommendation but was concerned about its redistributive effects. The agency subsequently initiated a study of physician payment locality adjustments. The study is ongoing, and CMS has not implemented any change. 
CMS’s implementation of competitive bidding for medical equipment and supplies and its new Medicare Administrative Contractors (MAC) have progressed, with some delays. Congress halted the first round of competitive bidding and required CMS to improve its implementation. In regard to contracting reform, because of delays resulting from bid protests filed in connection with the procurement process, CMS did not meet the target that it set for 2009 and 2010 in transferring workload to MACs. As of December 2010, CMS transferred Medicare fee-for-service claims workload to the new MACs in all but six jurisdictions. For those six jurisdictions, CMS is transferring claims workload in two jurisdictions and has ongoing procurement activity for the remainder. Some new MACs had delays in paying providers’ claims, but overall, CMS’s contractors continued to meet the agency’s performance targets for timeliness of claims processing in 2009. Regarding Medicare Advantage, CMS has not complied with statutory requirements to mail information on plan disenrollment to beneficiaries, but it did take steps to post this information on its Web site. In addition, the agency took enforcement actions for inappropriate marketing against at least 73 organizations that sponsored Medicare Advantage plans from January 2006 to February 2009. Of greater concern is that we found pervasive internal control deficiencies in CMS’s management of its contracting function that put billions of taxpayer dollars at risk of improper payments or waste. We recommended that CMS take actions to address them. Recently, CMS has taken several actions to address the recommendations and correct certain deficiencies we had noted, such as revising policies and procedures and developing a centralized tracking mechanism for employee training. However, CMS has not made sufficient progress to complete actions to address recommendations related to clarifying the roles and responsibilities for implementing certain contractor oversight responsibilities, clearing a backlog of contracts that are overdue for closeout, and finishing its investigation of over $70 million in payments we questioned in 2007. New directives, implementing guidance, and legislation designed to help reduce improper payments will affect CMS’s efforts over the next few years. The administration issued Executive Order 13520 on reducing improper payments in 2009 and related implementing guidance in 2010. In addition, the Improper Payments Elimination and Recovery Act of 2010 amended the Improper Payments Information Act of 2002 and established additional requirements related to accountability, recovery auditing, compliance and noncompliance determinations, and reporting. CMS has already taken action in some areas—for example, as required by law, it implemented a national Recovery Audit Contractors (RAC) program in 2009 to analyze paid claims and identify overpayments for recoupment. CMS has set a key performance measure to reduce improper payments for Parts A and B (fee-for-service) and Part C and is developing measures of improper payments for Part D. CMS was not able to demonstrate sustained progress at reducing its fee-for-service error rate because changes made to improve the methodology for measurement make current year estimates noncomparable to any issued before 2009. Its 2010 fee-for-service payment error rate of 10.5 percent will serve as the baseline for setting targets for future reduction efforts. 
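For readers unfamiliar with how figures such as the 10.5 percent error rate are expressed, an improper payment rate of this kind is generally estimated improper payments stated as a percentage of total program payments. The following minimal sketch illustrates the arithmetic; the function name and the dollar figures are hypothetical assumptions for illustration, not CMS data.

```python
# Minimal sketch of the arithmetic behind an improper payment (error) rate:
# estimated improper payments as a share of total program payments.
# The dollar figures below are hypothetical placeholders, not CMS estimates.

def improper_payment_rate(estimated_improper: float, total_payments: float) -> float:
    """Return estimated improper payments as a percentage of total payments."""
    return 100.0 * estimated_improper / total_payments

# Hypothetical example: $35 billion of estimated improper payments against
# $333 billion of fee-for-service payments works out to roughly 10.5 percent,
# the same order of magnitude as the 2010 baseline rate cited above.
print(round(improper_payment_rate(35.0, 333.0), 1))  # 10.5
```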
However, with a 2010 Part C improper payment rate of 14.1 percent, the agency met its target to have its 2010 improper payment rate lower than 14.3 percent. For Part D, the agency is working to develop a composite improper payment rate, and for 2010 has four non-addable estimates, with the largest being $5.4 billion. Other recent CMS program integrity efforts include issuing regulations tightening provider enrollment requirements and creating its Center for Program Integrity, which is responsible for addressing program vulnerabilities leading to improper payments. However, having corrective action processes to address the vulnerabilities that lead to improper payments is also important to effectively managing them. CMS did not develop an adequate process to address the vulnerabilities to improper payments identified by the RACs, and we recommended that it do so. Further, our February 2009 report indicated that Medicare continued to pay some home health agencies for services that were not medically necessary or were not rendered. To help address the issue, we recommended that postpayment reviews be conducted on claims submitted by home health agencies with high rates of improper billing identified through prepayment review and that CMS require that physicians receive a statement of home health services that beneficiaries received based on the physicians’ certification. In addition, we recommended that CMS require its contractors to develop thresholds for unexplained increases in billing by providers and use them to develop automated prepayment controls as a way to reduce improper payments. CMS has not implemented these four recommendations. The agency indicated it had taken other actions; however, we believe these actions will not have the same effect. CMS’s oversight of Part D plan sponsors’ programs to deter fraud and abuse has been limited. However, CMS has taken some actions to increase this oversight. For example, CMS officials indicated that they had conducted expanded desk audits and were implementing an oversight strategy. CMS’s oversight of the quality of nursing home care has increased significantly in recent years, but weaknesses in surveillance remain that could understate care quality problems. Under contract with CMS, states conduct surveys at nursing homes to help ensure compliance with federal quality standards, but a substantial percentage of state nursing home surveyors and state agency directors identified weaknesses in CMS’s survey methodology and guidance. In addition to these methodology and guidance weaknesses, workforce shortages and insufficient training, inconsistencies in the focus and frequency of the supervisory review of deficiencies, and external pressure from the nursing home industry may lead to understatement of serious care problems. CMS established the Special Focus Facility (SFF) Program in 1998 to help address poor nursing home performance. The SFF Program is limited to 136 homes because of resource constraints, but according to our estimate, almost 4 percent (580) of the roughly 16,000 nursing homes in the United States could be considered the most poorly performing. CMS’s current approach for funding state surveys of facilities participating in Medicare is ineffective, yet these surveys are meant to ensure that these facilities provide safe, high-quality care. 
We found serious weaknesses in CMS’s ability to (1) equitably allocate more than $250 million in federal Medicare funding to states according to their workloads, (2) determine the extent to which funding or other factors affected states’ ability to accomplish their workloads, and (3) guarantee appropriate state contributions. These weaknesses make assessing the adequacy of funding difficult. However, CMS has implemented many recommendations that we have made to improve oversight of nursing home care. Of the 96 recommendations made by GAO from July 1998 through March 2010, CMS has fully implemented 45, partially implemented 4, is taking steps to implement 29, and did not implement 18. Examples of key recommendations implemented by CMS include (1) a new survey methodology to improve the quality and consistency of state nursing home surveys and (2) new complaint and enforcement databases to better monitor state survey activities and hold nursing homes accountable for poor care. What remains to be done: When legislative and administrative actions result in significant progress toward resolving a high-risk problem, we remove the high-risk designation from the program. The five criteria for determining whether the high-risk designation can be removed are (1) a demonstrated strong commitment to, and top leadership support for, addressing problems; (2) the capacity to address problems; (3) a corrective action plan; (4) a program to monitor corrective measures; and (5) demonstrated progress in implementing corrective measures. CMS has not met our criteria for removing Medicare from the High-Risk List—for example, the agency is still developing its Part D improper payment estimate and has not yet been able to demonstrate sustained progress in lowering its fee-for-service and Part C improper payment estimates. CMS needs a plan with clear measures and benchmarks for reducing Medicare’s risk for improper payments, inefficient payment methods, and issues in program management and patient care and safety. One important step relates to our recommendation to develop an adequate corrective action process to address vulnerabilities to improper payments. Without a corrective action process that uses information on vulnerabilities identified by the agency, its contractors, and others, CMS will not be able to effectively address its challenges related to improper payments. CMS has implemented certain recommendations of ours, such as in the area of nursing home oversight. However, further action is needed on our recommendations to improve management of key activities. To refine payment methods to encourage efficient provision of services, CMS should take action to ensure the implementation of an effective physician profiling system; better manage payments for services, such as imaging; systematically apply payment changes to reflect efficiencies achieved by providers when services are commonly furnished together; and refine the geographic adjustment of physician payments by revising the physician payment localities using an approach uniformly applied to all states and based on current data. In addition, further action is needed by CMS to establish policies to improve contract oversight, better target review of claims for services with high rates of improper billing, and improve the monitoring of nursing homes with serious care problems. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or other members of the subcommittee may have. 
For further information about this statement, please contact Kathleen M. King at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Sheila Avruch, Assistant Director; Kelly Demots; and Roseanne Price were key contributors to this statement. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Medicare Home Oxygen: Refining Payment Methodology Has Potential to Lower Program and Beneficiary Spending. GAO-11-56. Washington, D.C.: January 21, 2011. Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicare Contracting Reform: Agency Has Made Progress with Implementation, but Contractors Have Not Met All Performance Standards. GAO-10-71. Washington, D.C.: March 25, 2010. Nursing Homes: Addressing the Factors Underlying Understatement of Serious Care Problems Requires Sustained CMS and State Commitment. GAO-10-70. Washington, D.C.: November 24, 2009. Medicare: CMS Working to Address Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program. GAO-10-27. Washington, D.C.: November 6, 2009. Centers for Medicare and Medicaid Services: Deficiencies in Contract Management Internal Control Are Pervasive. GAO-10-60. Washington, D.C.: October 23, 2009. Medicare Physician Payments: Fees Could Better Reflect Efficiencies Achieved When Services Are Provided Together. GAO-09-647. Washington, D.C.: July 31, 2009. Medicare: Improvements Needed to Address Improper Payments in Home Health. GAO-09-185. Washington, D.C.: February 27, 2009. Medicare Advantage: Characteristics, Financial Risks, and Disenrollment Rates of Beneficiaries in Private Fee-for-Service Plans. GAO-09-25. Washington, D.C.: December 15, 2008. Medicare Part B Imaging Services: Rapid Spending Growth and Shift to Physician Offices Indicate Need for CMS to Consider Additional Management Practices. GAO-08-452. Washington, D.C.: June 13, 2008. Medicare: Focus on Physician Practice Patterns Can Lead to Greater Program Efficiency. GAO-07-307. Washington, D.C.: April 30, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the February 2011 High-Risk Series update, GAO continued designation of Medicare as a high-risk program because its complexity and susceptibility to improper payments, combined with its size, have led to serious management challenges. In 2010, Medicare covered 47 million people and had estimated outlays of $509 billion. The Centers for Medicare & Medicaid Services (CMS) has estimated fiscal year 2010 improper payments for Medicare fee-for-service and Medicare Advantage of almost $48 billion. This statement focuses on the nature of the risk in the program, progress made, and specific actions needed. It is based on GAO work developed by using a variety of methodologies--including analyses of Medicare claims, review of policies, interviews, and site visits--and information from CMS on the status of actions to address GAO recommendations. As GAO reported in its 2011 High-Risk Series update, Medicare remains on a path that is fiscally unsustainable over the long term. This fiscal pressure heightens CMS's challenges to reform and refine Medicare's payment methods to achieve efficiency and savings, and to improve its management, program integrity, and oversight of patient care and safety. CMS has made some progress in these areas, but many avenues for improvement remain. Reforming and refining payments. Since January 2009, CMS has implemented payment reforms for Medicare Advantage and inpatient hospital and other services, and has taken other steps to improve efficiency in payments. The agency has also begun to provide feedback to physicians on their resource use, but the feedback effort could be enhanced. CMS has taken steps to ensure that some physician fees recognize efficiencies when certain services are furnished together, but the agency has not targeted the services with the greatest potential for savings. Other areas that could benefit from payment method refinements include oxygen and imaging services. Improving program management. CMS's implementation of competitive bidding for medical equipment and supplies and its transfer of fee-for-service claims workload to new Medicare Administrative Contractors have progressed, with some delays. Of greater concern is that GAO found pervasive internal control deficiencies in CMS's management of contracts that increased the risk of improper payments. While the agency has taken actions to address some GAO recommendations for improving internal controls, it has not completely addressed recommendations related to clarifying the roles and responsibilities for implementing certain contractor oversight responsibilities, clearing a backlog of contracts that are overdue for closeout, and finishing its investigation of over $70 million in payments GAO questioned in 2007. Enhancing program integrity. CMS has implemented a national Recovery Audit Contractors (RAC) program to analyze paid claims and identify overpayments for recoupment, set performance measures to reduce improper payments, issued regulations to tighten provider enrollment, and created its Center for Program Integrity. However, the agency has not developed an adequate process to address vulnerabilities to improper payments identified by RACs, nor has it addressed three other GAO recommendations designed to reduce improper payments, including one to conduct postpayment reviews of claims submitted by home health agencies with high rates of improper billing. Overseeing patient care and safety. 
The agency's oversight of the quality of nursing home care has increased significantly in recent years, but weaknesses in the survey methodology and guidance for surveillance could understate care quality problems. In addition, CMS's current approach for funding state surveys of facilities participating in Medicare is ineffective. However, CMS has implemented, or is taking steps to implement, many recommendations GAO has made to improve nursing home oversight. CMS needs a plan with clear measures and benchmarks for reducing Medicare's risk for improper payments, inefficient payment methods, and issues in program management and patient care and safety. Further, CMS's effective implementation of recent laws will be critical to helping reduce improper payments. CMS also needs to take action to address GAO recommendations, such as to develop an adequate corrective action process, improve controls over contracts, and refine or better manage payment for certain services.
You are an expert at summarizing long articles. Proceed to summarize the following text: This year the space shuttle is scheduled to fly its final six missions to deliver hardware, supplies, and an international scientific laboratory to the International Space Station. NASA officials remain confident that the current flight manifest can be accomplished within the given time, and add that should delays occur, the International Space Station can still function. According to NASA, there are trade-offs the agency can make in what it can take up to support and sustain the station. However, failure to complete assembly as currently planned would further reduce the station’s ability to fulfill its research objectives and deprive the station of critical spare parts that only the shuttle can deliver. The recent review completed by the U.S. Human Space Flight Plans Committee included the option of flying the space shuttle through 2011 in order to complete the International Space Station. However, the Committee noted that there are currently no funds in NASA’s budget for additional shuttle flights. Most recently, the Administration is proposing over $600 million in the fiscal year 2011 budget to ensure that the space shuttle can fly its final missions, in case the space shuttle’s schedule slips into fiscal year 2011. Retirement of the shuttle will involve many activities that warrant special attention. These include: disposing of the facilities that no longer are needed while complying with federal, state, and local environmental laws and regulations; ensuring the retention of critical skills within NASA’s workforce and its suppliers; and disposing of over 1 million equipment items. In addition, the total cost of shuttle retirement and transition—to include the disposition of the orbiters themselves—is not readily transparent in NASA’s budget. We have recommended that NASA clearly identify all direct and indirect shuttle transition and retirement costs, including any potential sale proceeds of excess inventory and environmental remediation costs in its future budget requests. NASA provided this information to the House and Senate Appropriations committees in July 2009 but did not identify all indirect shuttle transition and retirement costs in its fiscal year 2010 budget request. We look forward to examining the fiscal year 2011 budget request to determine whether this information is identified. Lastly, NASA has recognized that sustaining the shuttle workforce through the retirement of the shuttle while ensuring that a viable workforce is available to support future activities is a major challenge. We commend NASA for its efforts to understand and mitigate the effect of the space shuttle’s retirement on the civil service and contractor workforce. Nevertheless, how well NASA executes its workforce management plans as it retires the space shuttle will affect the agency’s ability to maintain the skilled workforce to support space exploration. Although it is nearing completion, the International Space Station faces several significant challenges that may impede efforts to maximize utilization of research facilities available onboard. These include: the retirement of the Space Shuttle in 2010 and the loss of its unmatched capacity to move cargo and astronauts to and from the station; the uncertain future for the station beyond 2015; and the limited time available for research due to competing demands for the crew’s time. 
We have previously reported that the International Space Station will face a significant cargo supply shortfall without the Space Shuttle’s great capacity to deliver cargo to the station and return it to earth. NASA plans on using a mixed fleet of vehicles, including those developed by international partners, to service the space station on an interim basis. However, international partners’ vehicles alone cannot fully satisfy the space station’s cargo resupply needs. Without a domestic cargo resupply capability to augment this mixed fleet approach, NASA faces a 40 metric ton (approximately 88,000 pounds) cargo resupply shortfall between 2010 and 2015. While NASA is sponsoring commercial efforts to develop vehicles capable of carrying cargo to the station and the administration has endorsed this approach, none of those currently in development has been launched into orbit, and the vehicles’ aggressive development schedules leave little room for the unexpected. Furthermore, upon completion of construction, unless the decision is made to extend station operations, NASA has only 5 years to execute a robust research program before the International Space Station is deorbited. This leaves little time to establish a strong utilization program. At present, NASA projects that its share of the International Space Station research facilities will be less than fully utilized by planned NASA research. Specifically, NASA plans to utilize only 48 percent of the racks that accommodate scientific research facilities onboard, with the remainder available for use by others. Congress has directed NASA to take all necessary steps to ensure that the International Space Station remains a viable and productive facility capable of potential utilization through at least 2020. The Administration is proposing in its fiscal year 2011 budget to extend operations of the International Space Station to 2020 or beyond in concert with its international partners. Lastly, NASA faces a significant constraint for science on board the space station because of limited crew time. There can only be six crew members aboard the station at one time due to the number of spaces available in the “lifeboats,” or docked spacecraft that can transport the crew in case of an emergency. As such, crew time cannot presently be increased to meet increased demand. Though available crew time may increase as the six-person crew becomes more experienced with operating the space station efficiently or if the crew volunteers its free time for research, crew time for U.S. research remains a limiting factor. According to NASA officials, potential National Laboratory researchers should design their experiments to be as automated as possible or minimize crew involvement required for their experiments to ensure that they are accepted for flight. We have recommended that NASA implement actions, such as developing a plan to broaden and enhance ongoing outreach to potential users and creating a centralized body to oversee U.S. space station research decision making, including the selection of all U.S. research to be conducted on board and ensuring that all U.S. International Space Station National Laboratory research is meritorious and valid. NASA concurred with our recommendation and is researching the possibility of developing a management body to manage space station research, which would make the International Space Station National Laboratory similar to other national laboratories.
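As a rough arithmetic check on the cargo shortfall figure cited above, the sketch below converts 40 metric tons to pounds. The conversion factor of about 2,204.6 pounds per metric ton is an assumption of this illustration; only the 40-metric-ton shortfall figure comes from the testimony.

# Illustrative unit-conversion check (hypothetical script, not part of GAO's or NASA's analysis).
POUNDS_PER_METRIC_TON = 2204.62  # assumed standard conversion factor

shortfall_metric_tons = 40  # shortfall figure cited in the testimony
shortfall_pounds = shortfall_metric_tons * POUNDS_PER_METRIC_TON
print(f"{shortfall_metric_tons} metric tons is roughly {shortfall_pounds:,.0f} pounds")
# Prints roughly 88,185 pounds, consistent with the "approximately 88,000 pounds" figure.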
NASA projects have produced ground-breaking research and advanced our understanding of the universe. However, one common theme binds most of the projects—they cost more and take longer to develop than planned. As we reported in our recently completed assessment of NASA’s 19 most costly projects—which have a combined life-cycle cost that exceeds $66 billion—the agency’s projects continue to experience cost growth and schedule delays. Ten of the 19 projects, which had their baselines set within the last 3 years, experienced cost growth averaging $121.1 million or 18.7 percent, and the average schedule growth was 15 months. For example, the Glory project has recently breached its revised schedule baseline by 16 months and exceeded its development cost baseline by over 14 percent—for a total development cost growth of over 75 percent in just 2 years. Project officials also indicated that recent technical problems could cause additional cost growth. Similarly, the Mars Science Laboratory project is currently seeking reauthorization from Congress after experiencing development cost growth in excess of 30 percent. Many of the other projects we reviewed experienced challenges, including developing new or retrofitting older technologies, stabilizing engineering designs, and managing the performance of contractors and development partners. Our work has consistently shown that reducing these kinds of problems in acquisition programs hinges on developing a sound business case for each project. Such a business case provides for early recognition of challenges, allows managers to take corrective action, and places needed and justifiable projects in a better position to succeed. Product development efforts that have not followed a knowledge-based business case approach have frequently suffered poor cost, schedule, and performance outcomes. A sound business case includes development of firm requirements, mature technologies, a preliminary design, a realistic cost estimate, and sound estimates of available funding and time needed before the projects proceed beyond preliminary design review. If necessary, the project should be delayed until a sound business case, demonstrating the project’s readiness to move forward into product development, is in hand. In particular, two of NASA’s largest projects—Ares I and Orion, which are part of NASA’s Constellation program to return to the moon—face considerable technical, design, and production challenges. NASA is actively addressing these challenges. Both projects, however, still face considerable hurdles to meeting overarching safety and performance requirements, including limiting vibration during launch, mitigating the risk of hitting the launch tower during liftoff, and reducing the mass of the Orion vehicle. In addition, we found that the Constellation program, from the outset, has faced a mismatch between funding and program needs. This finding was reinforced by the Review of U.S. Human Spaceflight Plans Committee, which reported that NASA’s plans for the Constellation program to return to the moon by 2020 are unexecutable without increases to NASA’s current budget. To its credit, NASA has acknowledged that the Constellation program, for example, faces knowledge gaps concerning requirements, technologies, funding, schedule, and other resources.
NASA stated that it is working to close these gaps and at the preliminary design review the program will be required to demonstrate that the program and its projects meet all system requirements with acceptable risk and within cost and schedule constraints, and that the program has established a sound business case for proceeding into the implementation phase. Even though NASA has made progress in developing the actual vehicles, the mismatch between resources and requirements remains and the administration’s proposed fiscal year 2011 budget leaves the future of the program in question. NASA has continually struggled to put its financial house in order. GAO and others have reported for years on these efforts. In fact, GAO has made a number of recommendations to address NASA’s financial management challenges. Moreover, the NASA Inspector General has identified financial management as one of NASA’s most serious challenges. In a November 2008 report, the Inspector General found continuing weaknesses in NASA’s financial management process and systems, including internal controls over property accounting. It noted that these deficiencies have resulted in disclaimed audits of NASA’s financial statements since fiscal year 2003. The disclaimers were largely attributed to data integrity issues and poor internal controls. NASA has made progress in addressing some of these issues, but the recent disclaimer on the fiscal year 2009 audit shows that more work needs to be done. We have also reported that NASA remains vulnerable to disruptions in its information technology network. Information security is a critical consideration for any organization reliant on information technology and especially important for NASA, which depends on a number of key computer systems and communication networks to conduct its work. These networks traverse the Earth and beyond, providing critical two-way communication links between Earth and spacecraft; connections between NASA centers and partners, scientists, and the public; and administrative applications and functions. NASA has made important progress in implementing security controls and aspects of its information security program. However, NASA has not always implemented sufficient controls to protect the confidentiality, integrity, and availability of the information and systems supporting its mission directorates. Specifically, NASA did not consistently implement effective controls to prevent, limit, and detect unauthorized access to its networks and systems. A key reason for these weaknesses is that NASA has not yet fully implemented key activities of its information security program to ensure that controls are appropriately designed and operating effectively. During fiscal years 2007 and 2008, NASA reported 1,120 security incidents that resulted in the installation of malicious software on its systems and unauthorized access to sensitive information. NASA established a Security Operations Center in 2008 to enhance prevention and provide early detection of security incidents and coordinate agency-level information related to its security posture. Nevertheless, the control vulnerabilities and program shortfalls—which GAO identified—collectively increase the risk of unauthorized access to NASA’s sensitive information, as well as inadvertent or deliberate disruption of its system operations and services.
They make it possible for intruders, as well as government and contractor employees, to bypass or disable computer access controls and undertake a wide variety of inappropriate or malicious acts. As a result, increased and unnecessary risk exists that sensitive information is subject to unauthorized disclosure, modification, and destruction and that mission operations could be disrupted. GAO has recommended actions the NASA Administrator should take to mitigate control vulnerabilities and fully implement a comprehensive information security program including: developing and implementing comprehensive and physical risk assessments; conducting sufficient or comprehensive security testing and evaluation of all relevant security controls; and implementing an adequate incident detection program. In response to our report, the Deputy Administrator noted that NASA is implementing many of our recommendations as part of an ongoing NASA strategic effort to improve information technology management and information technology security program deficiencies. The Deputy Administrator also stated that NASA will continue to mitigate the information security weaknesses identified in our report. The actions identified by the Deputy Administrator, if effectively implemented, will improve the agency’s information security program. In executing NASA’s space exploration, scientific discovery, and aeronautics research missions, NASA must use its resources as effectively and efficiently as possible because of the severity of the fiscal challenges our nation faces and the wide range of competing national priorities. Establishing a sound business case before a project starts should also better position NASA management to deliver promised capability for the funding it receives. While space development programs are complex and difficult by nature, and most are one-time efforts, the nature of its work should not preclude NASA from being accountable for achieving what it promises when requesting and receiving funds. Congress will also need to do its part to ensure that NASA has the support to hold poorly performing programs accountable in order to provide an environment where the systems portfolio as a whole can succeed with the resources NASA is given. NASA shows a willingness to face these challenges. We look forward to continuing work with NASA to develop tools to enhance the management of acquisitions and agency operations to optimize its investment in space and aeronautics missions. Madam Chairwoman, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For additional information, please contact Cristina Chaplain at 202-512- 4841 or chaplainc@gao.gov. Individuals making contributions to this testimony include Jim Morrison, Assistant Director; Greg Campbell; Richard A. Cederholm; Shelby S. Oakley; Kristine R. Hassinger; Kenneth E. Patton; Jose A. Ramos; John Warren; and Gregory C. Wilshusen. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Aeronautics and Space Administration (NASA) is in the midst of many changes and one of the most challenging periods in its history. The space shuttle is slated to retire this year, the International Space Station nears completion but remains underutilized, and a new means of human space flight is under development. Most recently, the administration has proposed a new direction for NASA. Amid all this potential change, GAO was asked to review the key issues facing NASA. This testimony focuses on four areas: 1) retiring the space shuttle; 2) utilizing and sustaining the International Space Station; 3) continuing difficulty developing large-scale systems, including the next generation of human spaceflight systems; and 4) continuing weaknesses in financial management and information technology systems. In preparing this statement, GAO relied on completed work. To address some of these challenges, GAO has recommended that NASA: provide greater information on shuttle retirement costs to Congress, take actions aimed at more effective use of the station research facilities, develop business cases for acquisition programs, and improve financial and IT management. NASA concurred with GAO's International Space Station recommendations, and has improved some budgeting and management practices in response. The major challenges NASA faces include: (1) Retiring the Space Shuttle. The impending end of shuttle missions poses challenges to the completion and operation of the International Space Station, and will require NASA to carry out an array of activities to deal with shuttle staff, equipment, and property. This year the shuttle is scheduled to fly its final six missions to deliver hardware, supplies, and an international laboratory to the International Space Station. NASA officials remain confident that the current manifest can be accomplished within the given time, and add that should delays occur, the space station can still function. According to NASA, there are trade-offs the agency can make in what it can take up to support and sustain the station. However, failure to complete assembly would further reduce the station's ability to fulfill its research objectives and deprive the station of critical spare parts that only the shuttle can currently deliver. Retirement of the shuttle will require disposing of facilities; ensuring the retention of critical skills within NASA's workforce and its suppliers; and disposing of more than 1 million equipment items. (2) Utilizing the International Space Station. The space station, which is nearly complete, faces several significant challenges that may impede efforts to maximize utilization of its research facilities. These include the retirement of the shuttle and the loss of its unmatched capacity to move cargo and astronauts to and from the station; the uncertain future for the station beyond 2015; and the limited time available for research due to competing demands for the crew's time. (3) Developing Systems. A common theme in NASA projects--including the next generation of space flight efforts--is that they cost more and take longer to develop than planned. GAO again found this outcome in a recently completed assessment of NASA's 19 most costly projects--with a combined life-cycle cost of $66 billion. Within the last 3 years, 10 of the 19 projects experienced cost growth averaging $121.1 million or 18.7 percent, and the average schedule growth was 15 months.
A number of these projects had experienced considerable cost growth before the most recent baselines were set. (4) Managing Finances and IT. NASA continues to struggle to put its financial house in order. GAO and others have reported for years on these efforts. The NASA Inspector General identified financial management as one of NASA's most serious challenges. In addition, NASA remains vulnerable to disruptions in its information technology network. NASA has made important progress in implementing security controls and aspects of its information security program. However, it has not always implemented sufficient controls to protect information and systems supporting its mission directorates.
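To illustrate the distinction noted above between cost growth measured against the most recent (revised) baseline and growth measured against the original baseline, the sketch below uses hypothetical dollar amounts chosen only so the resulting percentages loosely resemble those reported for the Glory project; the dollar figures themselves are invented for the example and are not GAO or NASA data.

# Hypothetical illustration of how development cost growth percentages are computed.
# All dollar amounts below are invented for this example.
original_baseline = 200.0  # hypothetical original development cost baseline, in $ millions
revised_baseline = 310.0   # hypothetical revised (rebaselined) estimate, in $ millions
current_estimate = 355.0   # hypothetical current development cost estimate, in $ millions

growth_vs_revised = (current_estimate - revised_baseline) / revised_baseline
growth_vs_original = (current_estimate - original_baseline) / original_baseline

print(f"Growth over the revised baseline:  {growth_vs_revised:.1%}")   # about 14.5 percent
print(f"Growth over the original baseline: {growth_vs_original:.1%}")  # about 77.5 percent

The two percentages differ because rebaselining resets the denominator, which is why growth can look modest against a revised baseline while being much larger relative to the original plan.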
You are an expert at summarizing long articles. Proceed to summarize the following text: Ex-Im’s mission is to support U.S. exports and jobs by providing export financing on terms that are competitive with the official export financing support offered by other governments. It had about 350 full-time staff positions in fiscal year 2010. Between fiscal years 2003 and 2008, Ex-Im authorized financing averaging $12.8 billion annually. In fiscal year 2009, Ex-Im had a record year, financing more than $21 billion in 2,891 authorizations. Since fiscal year 2008, Ex-Im has been “self-sustaining” for appropriations purposes, financing its operations from receipts collected from its borrowers. Exports, and trade more broadly, contribute to the U.S. economy in a variety of ways. Trade enables the United States to achieve a higher standard of living by exporting goods and services that are produced here relatively efficiently and importing goods and services that are produced here relatively inefficiently. An indication of this is that firms engaged in the international marketplace tend to exhibit higher rates of productivity growth and pay higher wages and benefits to their workers than domestically oriented firms of the same size. U.S. exports of goods grew from $820 billion in 2004 to $1.30 trillion in 2008, and were $1.1 trillion in 2009. In addition to the longer-term benefits of trade and exports, exports can serve as a countercyclical force for the U.S. economy—that is, strengthening the economy when other parts of it are relatively weaker. For a number of years, as the United States increasingly imported more than it exported, the U.S. economy was an engine of growth for other nations. In contrast, from December 2007 through June 2009—what has been officially labeled a recession period—U.S. economic growth was boosted by an improving trade balance. More recently, strong U.S. exports have been outpaced by import growth. The President created the National Export Initiative on March 11, 2010, with an ambitious goal of doubling exports in the next 5 years to support job creation. To facilitate achieving this goal, the National Export Initiative established an Export Promotion Cabinet that includes Ex-Im as well as 15 other agencies and executive departments. On September 16, 2010, the White House released a report on the initiative that provides an overview of progress made and lays out a plan for reaching the President’s goal. Ex-Im has a critical role to play in one of the report’s priority areas— increasing export credit. The report identifies several actions for Ex-Im. They include, for example, launching new products designed to ensure credit is available to small and medium-sized enterprises (SME); focusing efforts on high-potential industries such as medical technology, renewable power, and transportation; increasing the number and scope of partnerships with financial intermediaries; and introducing a simplified application for credit insurance. GAO has not evaluated the report or other aspects of the National Export Initiative, but would welcome the opportunity to continue its work with the Congress on oversight of these efforts. GAO has reported on Ex-Im’s efforts to achieve specific targets set by Congress regarding the composition of Ex-Im’s export financing. For example, Congress has shown interest in the level of financing Ex-Im provides to small businesses, including those owned by women and minorities, and Ex-Im’s efforts to increase that financing. 
Congress has also given Ex-Im directives regarding the share of its financing for environmentally beneficial exports, including renewable energy. Congressional interest in Ex-Im’s financing of environmentally beneficial exports spans many years. In 1989, Congress directed that Ex-Im should seek to provide at least 5 percent of its energy sector financing for renewable energy projects and should undertake efforts to promote renewable energy. In 2008, Congress directed Ex-Im to provide not less than 10 percent of its annual financing for environmentally beneficial exports; in 2009-2010, Congress narrowed the targeted environmentally beneficial categories. Congress has also required Ex-Im to develop a strategy for increasing its environmental export financing, and to report on this financing and how the bank tracks it. GAO recently reviewed Ex-Im’s environmentally beneficial export financing and its efforts to meet congressional directives in this area. We found, first, that Ex-Im had fallen far short of achieving the 10 percent environmentally beneficial financing targets set by Congress. However, we also found that Ex-Im’s financing for renewable energy has recently increased; the level of renewable energy financing in the first two quarters of fiscal year 2010 exceeded its renewable energy financing for all of fiscal year 2009, which in turn represented a large increase over 2008 financing. We also reported that Ex-Im needs to further clarify its definitions and improve its reporting on environmentally beneficial exports. For Ex-Im, the term “environmentally beneficial exports” constitutes an overarching category that includes renewable energy, energy efficiency exports including energy efficient end-use technologies, and a mix of other products with beneficial effects on the environment. Ex-Im recently sought to clarify its definitions of energy efficiency exports by publicly providing examples of products it considers to be in that category, and it began to track its financing for those exports in its internal data. However, the examples Ex-Im released do not clearly identify which energy efficient end-use technologies would count towards its 10 percent financing target. Given the congressional interest in financing in this area, it is important that Ex-Im be as clear as possible in its application of terms to facilitate communicating financing goals to potential exporters and others and communicating progress in meeting targets to Congress. GAO recommended that Ex-Im develop and provide clear definitions for its subcategories of environmentally beneficial exports and report annually on the level of financing for each of the subcategories. We reported that while Ex-Im has taken steps to increase financing for environmentally beneficial exports, it could benefit from more consistently following strategic planning practices such as involving stakeholders and realigning resources. Ex-Im routinely shares information with stakeholders such as other U.S. agencies and lending institutions, but has not generally involved them in communicating goals or discussing strategies for achieving them. Ex-Im has considered reorganizing some staff into more focused teams to target priority industries and countries, but this effort has not included an analysis of the resources required to accomplish the goal of increasing certain types of environmentally beneficial exports.
On the other hand, Ex-Im has taken some steps to assess factors that affect its financing of environmentally beneficial exports such as conducting analysis of the renewable energy markets to identify the best sectoral and geographic opportunities for Ex-Im financing. In order for Ex-Im to provide valuable information for the Congress and key stakeholders, GAO recommended that the bank consistently implement key practices for effective strategic planning, including clearly communicating the bank’s priorities for increasing financing of renewable energy and energy efficient end-use technologies to both internal and external stakeholders. Ex-Im agreed with our recommendations and stated that it would strive to implement them promptly. Promoting exports by small business has been a long-time priority of Congress as well as the executive branch, given these exports’ role in generating growth and employment. While many small businesses export, it is widely recognized that they face a number of challenges in exporting. For example, they typically do not have overseas offices and may not have much knowledge regarding foreign markets. Export promotion agencies have developed various goals with respect to their small business assistance, and in some cases Congress has mandated specific requirements for supporting small businesses. Since the 1980s, Congress has required that Ex-Im make available a certain percentage of its export financing for small business. In 2002, Congress established several new requirements for Ex-Im relating to small business, including increasing from 10 to 20 percent the proportion of Ex-Im’s aggregate loan, guarantee, and insurance authority that must be made available for the direct benefit of small business. When reauthorizing the bank’s charter in 2006, Congress again established new requirements for Ex-Im. These included creating a small business division with an office of financing for socially and economically disadvantaged small business concerns and small business concerns owned by women, designating small business specialists in all divisions, creating a small business committee to advise the bank president, and defining standards to measure the bank’s success in financing small business. For the past 4 fiscal years—2006-2009—Ex-Im has met the Congressional requirement to make available not less than 20 percent of its financing authority for small businesses. Percentages have ranged from almost 27 percent in fiscal year 2007 to about 21 percent in fiscal year 2009. The financing amount for small business was actually highest in 2009, given Ex-Im’s overall record financing that year. In fiscal years 2002-2005, Ex-Im did not reach the goal, with its small business financing share ranging from 16.9 percent to 19.7 percent. GAO has reported on several aspects of Ex-Im’s financing for small business exports. In 2006, we identified weaknesses in Ex-Im’s data systems for tracking and reporting on its small business financing and made recommendations for improvement. Ex-Im has implemented those recommendations. For example, Ex-Im moved to an electronic, web-based process that allows exporters, brokers, and financial institutions to transact with Ex-Im electronically. This contributed to more timely and accurate information on Ex-Im’s financing, and thus a greater level of confidence in Ex-Im’s reporting on its efforts relative to congressional goals. 
More recently, we reported on the performance standards that Ex-Im established for assessing its small business financing efforts. We found that Ex-Im had developed performance standards in most, although not all, of the areas specified by Congress, ranging from providing excellent customer service to increasing outreach. We also found that some measures for monitoring progress against the standards lacked targets and time frames, and that Ex-Im was just beginning to compile and use the small business information it was collecting to improve operations. GAO made several recommendations to Ex-Im for improving its performance standards for small business. Ex-Im has provided to GAO information on actions and specific steps it is taking to implement certain of these recommendations. For example, Ex-Im has identified targets for reducing the average turnaround time for processing certain types of small business transactions. GAO looks forward to continuing to work with Ex-Im in documenting that the recommendations have been implemented. Mr. Chairmen, the National Export Initiative has focused the spotlight on U.S. agencies that assist U.S. exporters as a way of expanding economic growth and creating jobs in the United States. As the nation’s export credit agency, the Export-Import Bank is a key part of the initiative, and there are a number of detailed initiatives related to export credit in the report that was released on September 16, 2010. However, the goal of doubling U.S. exports in 5 years is ambitious and would require not only increased activity by agencies such as Ex-Im but also increases in demand in key nations around the world. This heightened emphasis on overall exports and increasing the level of Ex-Im financing takes place in the context of specific congressional directives regarding the composition of that financing. Our work on Ex-Im’s financing with respect to small business—including minority and women-owned businesses—and environmentally beneficial exports has demonstrated that substantial steps have been taken, and Ex-Im continues to face substantial challenges. While Ex-Im has had more success in achieving the congressional targets for small business than for environmentally beneficial exports, opportunities remain for a more strategic use of resources and for better communication with Congress and other stakeholders. More broadly, Ex-Im faces the challenge of contributing to the doubling of U.S. exports along with meeting other congressional requirements, including operating at little or no cost to the taxpayer. How any sharp increase in Ex-Im financing levels will affect specific targets we have described is not clear, and is likely to be an area requiring further discussion on how to balance these overall priorities. Chairman Moore, Chairman Meeks, Ranking Member Biggert, and Ranking Member Miller, this concludes my prepared remarks. I appreciate the opportunity to discuss these issues with you today. I would be pleased to respond to any questions you or other members of the subcommittees may have at this time. For further information about this testimony, please contact Loren Yager at (202) 512-4347 or by e-mail at YagerL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the statement. Individuals who made key contributions to this testimony include Celia Thomas, Shirley Brothwell, Karen Deans, Emily Suarez-Harris, Richard Krashevski, Justine Lazaro, Valerie Nowak, and Jennifer Young.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the role of the U.S. Export-Import Bank (Ex-Im) in promoting exports and achieving other U.S. policy goals. As Congress considers policies to achieve more robust growth in the U.S. economy, it must consider the full range of tools available to further growth and create new jobs for U.S. workers. Some of these tools are related to promoting exports, which can have broad benefits to the U.S. economy. As the official export credit agency of the United States, Ex-Im has a key role in helping many U.S. firms achieve sales in foreign markets. In addition to establishing Ex-Im's broad mandate of supporting U.S. employment through exports, Congress has laid out specific, targeted goals for the bank in areas such as increasing financing for environmentally beneficial exports and expanding services to small and minority-owned businesses. This testimony provides some broad observations regarding Ex-Im's contribution to the export promotion goals announced in the President's National Export Initiative. It also describes progress Ex-Im has made in achieving the specific targets set by Congress, as well as some challenges the bank faces in meeting those targets. The statement also provides some background information concerning the ways in which exports can enhance U.S. economic output. The President's National Export Initiative has put forth an ambitious goal of doubling exports in the next 5 years. Ex-Im has been identified as having a key role in the initiative, and a recent administration report identifies a number of specific actions for Ex-Im. Our work on Ex-Im's financing with respect to small business found areas where Ex-Im needed to improve its data systems for accurate reporting as well as its tracking of efforts to increase small business financing. Regarding Ex-Im's environmentally beneficial exports financing, we found that the bank could benefit from more consistently following strategic planning practices. Ex-Im has taken a number of steps in response to GAO recommendations, but opportunities for improvement remain. Additional attention to these issues will enable Ex-Im to develop better communication with Congress and other stakeholders regarding the balance between the small business and environmental export targets and the broader priorities in the National Export Initiative.
You are an expert at summarizing long articles. Proceed to summarize the following text: There is no single definition for financial literacy, which is sometimes also referred to as financial capability, but it has previously been described as the ability to make informed judgments and to take effective actions regarding current and future use and management of money. Financial literacy encompasses financial education—the processes whereby individuals improve their knowledge and understanding of financial products, services, and concepts. However, being financially literate refers to more than simply being knowledgeable about financial matters; it also entails utilizing that knowledge to make informed decisions, avoid pitfalls, and take other actions to improve one’s present and long-term financial well-being. Federal, state, and local government agencies, nonprofits, the private sector, and academia all play important roles in providing financial education resources, which can include print and online materials, broadcast media, individual counseling, and classroom instruction. Evidence indicates that many U.S. consumers could benefit from improved financial literacy efforts. In a 2010 survey of U.S. consumers prepared for the National Foundation for Credit Counseling, a majority of consumers reported they did not have a budget and about one-third were not saving for retirement. In a 2009 survey of U.S. consumers by the FINRA Investor Education Foundation, a majority believed themselves to be good at dealing with day-to-day financial matters, but the survey also revealed that many had difficulty with basic financial concepts. Further, about 25 percent of U.S. households either have no checking or savings account or rely on alternative financial products or services that are likely to have less favorable terms or conditions, such as nonbank money orders, nonbank check-cashing services, or payday loans. As a result, many Americans may not be managing their finances in the most effective manner for maintaining or improving their financial well-being. In addition, individuals today have more responsibility for their own retirement savings because traditional defined-benefit pension plans have increasingly been replaced by defined-contribution pension plans over the past two decades. As a result, financial skills are increasingly important for those individuals in or planning for retirement to help ensure that retirees can enjoy a comfortable standard of living. Efforts to improve financial literacy in the United States involve a range of public, nonprofit, and private participants. Among those participants, the federal government is distinctive for its size and reach, and for the diversity of its components, which address a wide array of issues and populations. At our forum last year on financial literacy, many participants said that the federal government had a unique role to play in promoting greater financial capability. They noted that the federal government has a built-in “bully pulpit” that can be used to draw attention to this issue. Participants also highlighted the federal government’s ability to convene the numerous agencies and entities involved in financial literacy, noting that the government has a powerful ability to bring people together. In addition, some participants noted the federal government’s ability to take advantage of existing distribution channels and points of contact between the government and citizens to distribute messages about financial literacy. 
In our ongoing work, we have found examples of federal agencies acting on such opportunities—for example, the Securities and Exchange Commission has worked with the Internal Revenue Service to include an insert about its investor education resources, including its “Investor.gov” education website, in the mailing of tax refund checks. At our first forum on financial literacy in 2004, participants noted that the federal government can serve as an objective and unbiased source of information, particularly in terms of helping consumers make wise decisions about the selection of financial products and services. Some participants expressed the belief that while the private sector offers a number of good financial literacy initiatives, it is ultimately motivated by its own financial interests, while the federal government may be in a better position to offer broad-based, noncommercial financial education. In preliminary results from an ongoing review, we have identified that, in fiscal year 2010, there were 16 significant financial literacy programs or activities among 14 federal agencies, as well as 4 housing counseling programs among 2 federal agencies and a federally chartered nonprofit corporation. We defined “significant” financial literacy programs or activities as those that were relatively comprehensive in scope or scale and included financial literacy as a key objective rather than a tangential goal. In prior work, we cited a 2009 report that had identified 56 federal financial literacy programs among 20 agencies. That report, conducted by the RAND Corporation, was based on a survey that had asked federal agencies to self-identify their financial literacy efforts. However, our subsequent analysis of these 56 programs found that there was a high degree of inconsistency in how different agencies defined financial literacy programs or efforts and whether they counted related efforts as one or multiple programs. We believe that our count of 16 significant federal financial literacy programs or activities and 4 housing counseling programs is based on a more consistent set of criteria. During his confirmation hearing, Comptroller General Dodaro noted that financial literacy was an area of priority for him, and he has initiated a multi-pronged strategy for GAO to address financial literacy issues. First, we will continue to evaluate federal efforts that directly promote financial literacy. In addition to our recent financial literacy forum, we have ongoing work that focuses on, among other things, the cost of federal financial literacy activities, the federal government’s coordination of these activities, and what is known about their effectiveness. Second, we will encourage research of the various financial literacy initiatives to evaluate the relative effectiveness of different financial literacy approaches. Third, we will look for opportunities to enhance financial literacy as an integral component of certain regular federal interactions with the public. Finally, we have recently instituted a program to empower GAO’s own employees. This program includes a distinguished speaker series, as well as an internal website with information on personal financial matters and links to information on pay and benefits and referral services through GAO’s counseling services office. Having multiple federal agencies involved in financial literacy efforts can have certain advantages.
In particular, providing information from multiple sources can increase consumer access and the likelihood of educating more people. Moreover, certain agencies may have deep and long- standing expertise and experience addressing specific issue areas or serving specific populations. For example, the Securities and Exchange Commission has efforts in place to protect securities investors from fraudulent schemes, while the Department of Housing and Urban Development (HUD) oversees most, but not all, federally supported housing counseling. Similarly, the Department of Defense (DOD) may be the agency most able to efficiently and effectively deliver financial literacy programs and products to servicemembers and their families. However, as we stated in a June 2011 report, relatively few evidence-based evaluations of financial literacy programs have been conducted, limiting what is known about which specific methods and strategies—and which federal financial literacy activities—are most effective. Further, the participation of multiple agencies highlights the need for strong coordination of their activities. In general, we have found that the coordination and collaboration among federal agencies with regard to financial literacy have improved in recent years, in large part due to the multiagency Financial Literacy and Education Commission. The commission was created by Congress in 2003 and charged, among other things, with developing a national strategy to promote financial literacy and education, coordinating federal efforts, and identifying areas of overlap and duplication. Among other things, the commission, in concert with the Department of the Treasury, which provides its primary staff support, has served as a central clearinghouse for federal financial literacy resources—for example, it created a centralized federal website and has an ongoing effort to develop a catalog of federal research on financial literacy. The commission’s 2011 national strategy identified five action areas, one of which was to further emphasize the role of the commission in coordination. The strategy’s accompanying Implementation Plan lays out plans to coordinate communication among federal agencies, improve strategic partnerships, and develop channels of communication with other entities, including the President’s Advisory Council on Financial Capability and the National Financial Education Network of State and Local Governments. The Financial Literacy and Education Commission’s success in implementing these elements of the national strategy is key, given the inherently challenging task of coordinating the work of the commission’s many member agencies—each of which has its own set of interests, resources, and constituencies. Further, the addition of the Bureau of Consumer Financial Protection, whose director serves as the Vice Chair of the commission, adds a new player to the mix. In our recent and ongoing work, we have found instances in which multiple agencies or programs share similar goals and activities, which raises questions about the efficiency of some federal financial literacy efforts. For example, four federal agencies and one government- chartered nonprofit corporation provide or support various forms of housing counseling to consumers—DOD, HUD, the Department of Veterans Affairs, the Department of the Treasury, and NeighborWorks America. 
Other examples of overlap lie in the financial literacy responsibilities of the Bureau of Consumer Financial Protection, which was created by the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). The act established within the bureau an Office of Financial Education and charged this office with developing and implementing a strategy to improve financial literacy through activities including opportunities for consumers to access, among other things, financial counseling; information to assist consumers with understanding credit products, histories, and scores; information about saving and borrowing tools; and assistance in developing long-term savings strategies. This office presents an opportunity to further promote awareness, coordinate efforts, and fill gaps related to financial literacy. At the same time, the duties this office is charged with fulfilling are in some ways similar to those of a separate Office of Financial Education and Financial Access within the Department of the Treasury, a small office that also seeks to broadly improve Americans’ financial literacy. In addition, the Dodd-Frank Act charges the Bureau of Consumer Financial Protection with developing and implementing a strategy on improving the financial literacy of consumers, even though the multiagency Financial Literacy and Education Commission already has its own statutory mandate to develop, and update as necessary, a national strategy for financial literacy. As the bureau has been staffing up and planning its financial education activities, it has been in regular communication with the Department of the Treasury and with other members of the Financial Literacy and Education Commission, and agency staff say they are seeking to coordinate their respective roles and activities. The Dodd-Frank Act also creates within the bureau an Office of Financial Protection for Older Americans, which is charged with helping seniors recognize warning signs of unfair, deceptive, or abusive practices and protect themselves from such practices; providing one-on-one financial counseling on issues including long-term savings and later-life economic security; and monitoring the legitimacy of certifications of financial advisers who advise seniors. These activities may overlap with those of the Federal Trade Commission, which also plays a role in helping seniors avoid unfair and deceptive practices. Further, the Department of Labor and the Social Security Administration both have initiatives in place to help consumers plan for retirement, and the Securities and Exchange Commission has addressed concerns about the designations and certifications used by financial advisers, who often play a role in retirement planning. Officials at the Bureau of Consumer Financial Protection told us that they have been coordinating their financial literacy roles and activities with those of other federal agencies to avoid duplication of effort. In prior work we have noted the importance of program evaluation and the need to focus federal financial literacy efforts on initiatives that work. Relatively few evidence-based evaluations of financial literacy programs have been conducted, limiting what is known about which specific methods and strategies are most effective. Financial literacy program evaluations are most reliable and definitive when they track participants over time, include a control group, and measure the program’s impact on consumers’ behavior. 
However, such evaluations are typically expensive, time-consuming, and methodologically challenging. Based on our previous work, it appears that no single approach, delivery mechanism, or technology constitutes best practice, but there is some consensus on key common elements for successful financial education programs, such as timely and relevant content, accessibility, cultural sensitivity, and an evaluation component. There are several efforts under way that seek to enhance evaluation of federal financial literacy programs. For example, the Financial Literacy and Education Commission has begun to establish a clearinghouse of evidence-based research and evaluation studies, current financial topics and trends of interest to consumers, innovative approaches, and best practices. In addition, the Bureau of Consumer Financial Protection recently contracted with the Urban Institute for a financial education program evaluation project, which will assess the effectiveness of two existing financial education programs and seeks to identify program elements that improve consumers’ confidence about financial matters. We believe these measures are positive steps because federal agencies could potentially make the most of scarce resources by consolidating financial literacy efforts into the activities and agencies that are most effective. The Bureau of Consumer Financial Protection was charged by statute with a key role in improving Americans’ financial literacy and is being provided with resources to do so. As such, the bureau offers potential in enhancing the federal government’s role in financial literacy. At the same time, as we have seen, some of its responsibilities overlap with those of other agencies, which highlights the need for coordination and may offer opportunities for consolidation. As the bureau’s financial literacy activities evolve and are implemented, it will be important to evaluate how those efforts are working and make appropriate adjustments that might promote greater efficiency and effectiveness. In addition, the overlap we have identified among programs and activities increases the risk of inefficiency and emphasizes the importance of coordination among financial participants. This underscores the importance of steps the Bureau of Consumer Financial Protection has been taking to delineate its roles and responsibilities related to financial literacy vis-à-vis those of other federal agencies, which we believe is critical in order to minimize overlap and the potential for duplication. Chairman Akaka, Ranking Member Johnson, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For future contacts about this testimony, please contact Alicia Puente Cackley at (202) 512-8678 or at cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Jason Bromberg, Mary Coyle, Roberto Piñero, Rhonda Rose, Jennifer Schwartz, and Andrew Stavisky also made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Financial literacy plays an important role in helping to promote the financial health and stability of individuals and families. Economic changes in recent years have further highlighted the need to empower all Americans to make informed financial decisions. In addition to the important roles played by states, nonprofits, the private sector, and academia, federal agencies promote financial literacy through activities including print and online materials, broadcast media, individual counseling, and classroom instruction. This testimony discusses (1) the federal government’s role in promoting financial literacy, including GAO’s role; (2) the advantages and risks of financial literacy efforts being spread across multiple federal agencies; and (3) opportunities to enhance the effectiveness of federal financial literacy education efforts going forward. This testimony is based on prior and ongoing work, for which GAO reviewed agency budget documents, strategic plans, performance reports, websites, and other materials; convened forums of financial literacy experts; and interviewed representatives of federal agencies and selected private and nonprofit organizations. While this statement includes no new recommendations, in the past GAO has made a number of recommendations aimed at improving financial literacy efforts. The federal government plays a wide-ranging role in promoting financial literacy. Efforts to improve financial literacy in the United States involve an array of public, nonprofit, and private participants, but among those participants, the federal government is distinctive for its size and reach and for the diversity of its components, which address a wide range of issues and populations. At forums of financial literacy experts that GAO held in 2004 and 2011, participants noted that the federal government can use its “bully pulpit,” convening power, and other tools to draw attention to the issue, and serve as an objective and unbiased source of information about the selection of financial products and services. In prior work, GAO cited a 2009 report by the RAND Corporation in which 20 federal agencies self-identified as having 56 federal financial literacy programs, but GAO’s subsequent analysis found substantial inconsistency in how different agencies defined and counted financial literacy programs. Based on a more consistent set of criteria, GAO identified 16 significant financial literacy programs or activities among 14 federal agencies, as well as 4 housing counseling programs among 3 federally supported entities, in fiscal year 2010. The Comptroller General has initiated a multi-pronged strategy to address financial literacy issues. First, GAO will continue to evaluate federal efforts that directly promote financial literacy. Second, it will encourage research of the various financial literacy initiatives to evaluate the relative effectiveness of different approaches. Third, GAO will look for opportunities to enhance financial literacy as an integral component of certain regular federal interactions with the public. Finally, GAO has recently instituted a program to empower its own employees, which includes an internal website with information on personal financial matters and links to information on pay and benefits and referral services through its counseling services office and a distinguished speaker series. Having multiple federal agencies involved in financial literacy offers advantages as well as risks. 
Some agencies have long-standing expertise and experience addressing specific issue areas or populations, and providing information from multiple sources can increase consumer access and the likelihood of educating more people. However, the participation of multiple agencies also highlights the risk of inefficiency and the need for strong coordination of their activities. GAO has found that the coordination and collaboration among federal agencies with regard to financial literacy has improved in recent years, in large part as a result of the Financial Literacy and Education Commission. At the same time, GAO has found instances of overlap, in which multiple agencies or programs, including the new Bureau of Consumer Financial Protection, share similar goals and activities, underscoring the need for careful monitoring of the bureau’s efforts. In prior work GAO has noted the importance of program evaluation and the need to focus federal financial literacy efforts on initiatives that work. Federal agencies could potentially make the most of scarce resources by consolidating financial literacy efforts into the activities and agencies that are most effective. In addition, the Bureau of Consumer Financial Protection offers potential for enhancing the federal government’s role in financial literacy, but avoiding duplication will require that it continue its efforts to delineate its financial literacy roles and responsibilities vis-à-vis those of other federal agencies with overlapping responsibilities.
You are an expert at summarizing long articles. Proceed to summarize the following text: Every 4 years, DOD is required to conduct and report on a comprehensive assessment—the Quadrennial Roles and Missions Review—of the roles and missions of the armed services and the core competencies and capabilities of DOD to perform and support such roles and missions. Specifically, the Chairman of the Joint Chiefs of Staff is to conduct an independent military assessment of the roles and missions of the armed forces, assignment of functions among the armed services, and any recommendations regarding issues that need to be addressed. The Secretary of Defense is then to identify the core mission areas of the armed services; the core competencies and capabilities associated with these mission areas; the DOD component responsible for providing the identified core competency or capability; any gaps in the ability of the component to provide the competency or capability, or any unnecessary duplication of competencies or capabilities between DOD components; and a plan for addressing any gaps or unnecessary duplication. The Secretary is then to submit a report on this Quadrennial Roles and Missions Review following the review and not later than the submission of the President’s budget for the next fiscal year; however, the statutory reporting requirement does not explicitly require that all required elements of the assessment be reported. The Quadrennial Roles and Missions Review that resulted in the July 2012 submission occurred amid a series of strategy and policy reviews that DOD has undertaken over the past 5 years. Some of these reviews resulted in specific strategy documents, such as the National Security Strategy, National Defense Strategy, National Military Strategy, and National Security Space Strategy. DOD is also required to conduct two reviews on a regular basis that relate to the Quadrennial Roles and Missions Review: the Quadrennial Defense Review and the Biennial Review of DOD Agencies and Field Activities. The timing requirements for the Quadrennial Roles and Missions Review and the Quadrennial Defense Review result in each Quadrennial Roles and Missions Review occurring 2 years before and 2 years after a Quadrennial Defense Review. In December 2010, DOD also reissued its internal DOD Directive 5100.01, which establishes the functions of DOD and its major components, and, in September 2011, released an update of the Unified Command Plan, which allocates responsibilities among the combatant commands. In addition to these recurring strategy reviews, comprehensive assessments, and updates to DOD guidance, DOD has recently completed two other reviews: the Defense Strategic Guidance, which identified the strategic interests of the United States, and the Strategic Choices Management Review, initiated by the Secretary of Defense in 2013 to inform DOD’s planning for declining future budgets. The Defense Strategic Guidance, released in January 2012, was directed by the President to identify the strategic interests of the United States. The document states that it was an assessment of the defense strategy prompted by the changing geopolitical environment and fiscal pressures. The Defense Strategic Guidance was developed by senior officials from DOD—including the Office of the Secretary of Defense, the Joint Staff, the armed services, and the combatant commands—and the White House.
The document outlines security challenges the United States faces and is intended to guide the development of the Joint Force through 2020 and during a period of anticipated fiscal constraints. The Defense Strategic Guidance identified 10 primary missions of the armed forces, as well as several principles to guide the force and program development necessary to achieve these missions. For more information about the Defense Strategic Guidance and other selected strategy and planning documents, see appendix I. In July 2012, DOD submitted the Quadrennial Roles and Missions Review report, together with the Defense Strategic Guidance, to Congress to meet the statutory reporting requirement; however, DOD’s submission did not provide sufficiently detailed information about most of the statutorily required elements of the assessment. Although the statute does not require DOD to report on all elements of the roles and missions assessment, a key principle for information quality indicates that information presented to Congress should be clear and sufficiently detailed. Specifically, we found that DOD provided the missions of the armed services and some information about core capabilities, but did not, for any of the 10 missions, clearly identify the components within the department responsible for providing the core competencies and capabilities, or identify any plans to address any capability gaps or unnecessary duplication. The Quadrennial Roles and Missions Review report identifies missions of the armed services and provides information about capabilities and previously identified areas of duplication. The report restates the 10 missions of the armed forces identified in the Defense Strategic Guidance, and identifies some protected capabilities and investments needed to carry out each of the missions. For example, the report restates DOD’s mission to project power despite anti-access / area denial challenges. It then lists five key enhancements and protected capabilities associated with this mission: enhance electronic warfare, develop a new penetrating bomber, protect the F-35 Joint Strike Fighter program, sustain undersea dominance and enhance capabilities, and develop and enhance preferred munitions capabilities. Additionally, the report mentioned some previously identified areas of duplication and actions that were subsequently taken, such as eliminating redundancy in intelligence organizations, or proceeding with previous plans to eliminate organizations that performed duplicative functions or outlived their original purpose: the report notes the consolidation of specialized intelligence offices across DOD into two Defense Intelligence Agency task forces focused on counterterrorism and terrorism finance. Finally, the report also provides specific information about Information Operations as well as detention and interrogation, both of which were required to be included in this Quadrennial Roles and Missions Review. Prior to the submission to Congress, senior DOD leadership—including the Deputy Assistant Secretary of Defense for Force Development, the DOD General Counsel, Assistant Secretary of Defense for Legislative Affairs, Under Secretary of Defense (Comptroller), Director of Cost Assessment and Program Evaluation, Director of the Joint Staff, Under Secretary of the Navy, Secretary of the Army, and Secretary of the Air Force—internally concurred that the submission met the statutory requirement according to a tracking sheet used by the Office of the Under Secretary of Defense for Policy.
While the submission identifies core missions for the armed services and provides some information about capabilities and competencies needed for those missions, it does not provide sufficiently detailed information about other statutorily required elements of the roles and missions assessment. In our review of the report, we found that DOD did not, for any of the 10 missions, clearly identify the components within the department responsible for providing the core competencies and capabilities, or identify any plans to address any capability gaps or unnecessary duplication. For example: The submission does not provide clear and sufficiently detailed information on which component or components are responsible for enhancing electronic warfare capabilities, which is identified by DOD as one of the key capabilities needed to project power despite anti-access / area denial challenges. In our prior work, we have found that DOD needed to strengthen its management and oversight of electronic warfare programs and activities, reduce overlap, and improve its return on its multibillion-dollar acquisition investments. DOD has acknowledged that it faces ongoing challenges in determining whether the current level of investment is optimally matched with the existing capability gaps. However, the submission does not provide sufficiently detailed information on its approach to assign responsibilities, close potential gaps, or eliminate unnecessary duplication. The submission also does not provide clear and sufficiently detailed information on which components are responsible for enhancing airborne intelligence, surveillance, and reconnaissance capabilities, which are required for counterterrorism and irregular warfare missions. In our prior work, we have found that ineffective acquisition practices and collaboration efforts in the DOD unmanned aircraft systems portfolio create overlap and the potential for duplication among a number of current programs and systems. Similarly, we have noted that opportunities exist to avoid unnecessary redundancies and maximize the efficient use of intelligence, surveillance, and reconnaissance capabilities. However, DOD’s submission does not clarify responsibilities among the Air Force, Army, or Navy for developing these capabilities. This is the second time that DOD did not provide sufficiently detailed information to Congress following its roles and missions assessment. In the first Quadrennial Roles and Missions Review Report submitted to Congress in 2009, DOD identified the core missions of the department and identified the DOD Joint Capabilities Areas as the core competencies for the department. However, the report did not provide details for all elements required of the assessment. For example, the report did not provide core competencies and capabilities—including identifying responsible organizations—for each of the missions; instead the report provided some capability information for only specific focus areas within some of these missions. Despite the limited information contained in the 2009 Quadrennial Roles and Missions Review Report, the department used that first review to inform changes later made in DOD Directive 5100.01, which establishes functions of the department and its major components. However, as a result of not providing clear, sufficiently detailed, and relevant information in the most recent submission, DOD did not provide Congress comprehensive information about roles, responsibilities, and needed capabilities and competencies that Congress was seeking.
DOD did not conduct a comprehensive process for the roles and missions assessment. Instead, DOD limited its approach to leveraging the results of another review, conducted in 2011, that resulted in the January 2012 release of the Defense Strategic Guidance. However, this earlier review was not intended to assess all elements the statute required of the roles and missions review and, as a result, by relying on it DOD does not have the assurance that its resulting assessment was comprehensive. We recognize that there were some benefits to this approach, as the Defense Strategic Guidance did identify primary missions of the armed services, which were then provided as the core missions required for the Quadrennial Roles and Missions Review. In addition, the Defense Strategic Guidance provided several principles to guide the force and program development necessary to achieve these missions. The Defense Strategic Guidance also became the basis for completing the most recent Quadrennial Defense Review. However, neither DOD’s review for preparing the Defense Strategic Guidance nor the Quadrennial Roles and Missions Review itself clearly identified the components within the department that are responsible for providing the core competencies and capabilities needed to address each of the primary missions, or plans for addressing any capability gaps or unnecessary duplication. Further, by using such an approach for preparing the roles and missions assessment, DOD did not document and follow key principles for conducting an effective and comprehensive assessment. These principles include (1) developing and documenting a planned approach, including the principles or assumptions that will inform the assessment, which addresses all statutory requirements; (2) involving key internal stakeholders; (3) identifying and seeking input from appropriate external stakeholders; and (4) establishing time frames with milestones for conducting the assessment and completing the report. Planned approach: DOD did not develop and document its planned approach, including the principles or assumptions used to inform and address all statutory requirements of the assessment. Specifically, it did not document in its approach how it was going to address the statutory requirements related to the identification of components responsible for providing the core competencies and capabilities, any gaps, or any unnecessary duplication. A documented, planned approach provides a framework for understanding the strategic direction and the assumptions used to identify, analyze, assess, and address the statutory requirements of the assessment. Internal stakeholder involvement: The involvement of key internal stakeholders was limited. As part of a comprehensive process, the involvement of key internal stakeholders helps ensure that the information obtained during the review is complete. According to officials from the armed services, the Joint Staff, and the Office of the Under Secretary of Defense for Policy, officials from those offices had a limited role in the development and review of the roles and missions assessment. For example, the Chairman of the Joint Chiefs of Staff did not conduct an independent assessment of roles and missions prior to the broader, department-wide assessment. According to officials from the Office of the Secretary of Defense and Joint Staff, this decision was made because the Joint Chiefs of Staff had provided substantial input to, and had endorsed, the recently completed Defense Strategic Guidance.
According to Joint Staff officials, the Chairman had agreed with the approach proposed by the Under Secretary of Defense for Policy to rely on the review that resulted in the Defense Strategic Guidance as the primary basis for the Quadrennial Roles and Missions Review. The Joint Staff reviewed the submission prepared by the Office of the Under Secretary of Defense for Policy and the Chairman then cosigned the submission with the Secretary of Defense. The armed services had limited responsibility for participating in the preparation of the roles and missions submission, and were given a limited opportunity to review and provide comment on DOD’s draft submission before it was submitted to Congress. In addition, officials from the Office of the Director of Administration and Management—responsible for the biennial review of DOD agencies and field activities where additional efficiencies may be identified—told us they sought an opportunity to participate in the Quadrennial Roles and Missions Review process, but were not included in the review. According to an official from the Office of the Under Secretary of Defense for Policy, internal stakeholder involvement was incorporated from the prior, senior-level review that resulted in the Defense Strategic Guidance. However, the Office of the Director of Administration and Management was not involved in that prior review. By not considering ways to build more opportunity for stakeholder input, DOD was not well-positioned to obtain and incorporate input from across the armed services, agencies, offices, and commands within the department. Identification and involvement of appropriate external stakeholders: DOD had limited input from appropriate external stakeholders, such as Congress and federal agencies, with related national security goals. Input from Congress could have provided more specific guidance and direction for what it expected of the roles and missions assessment. According to DOD officials, they briefly discussed the assessment with some congressional staff early in the process. In addition, the 2012 Quadrennial Roles and Missions Review report did provide specific information about Information Operations as well as detention and interrogation, as requested by Congress. This information was collected in addition to information leveraged from the review for the Defense Strategic Guidance. However, DOD officials told us that they would benefit from additional clarification of Congress’s expectations when performing subsequent roles and missions assessments. For example, these officials noted that it would be helpful if Congress highlighted which specific roles and responsibilities areas were of concern so that more detailed information might be provided about these areas in the next report. According to a DOD official, the White House was involved with the review for the Defense Strategic Guidance, but consultation with interagency partners was limited and occurred late in the process. While other federal agency partners were not involved with the latest Quadrennial Roles and Missions Review assessment, the involvement of other federal agency partners—such as the Department of State, Department of Homeland Security, and Office of the Director of National Intelligence—provides an opportunity to enlist their ideas, expertise, and assistance related to strategic objectives that are not solely the responsibility of DOD—such as homeland security and homeland defense. 
Because DOD assessed the capabilities and competencies without obtaining input from appropriate external stakeholders, it did not have additional support and input for the assessment of its roles and missions, or input as to what these stakeholders expected as an outcome of the assessment. Time frames: DOD did not develop a schedule to gauge progress for conducting the assessment and completing the report. Developing a schedule with time frames is useful to keep the overall review on track to meet deadlines and to produce a final product. However, aside from tracking the final review of the report in tracking sheets used by the Office of the Under Secretary of Defense for Policy and Joint Staff, DOD did not have planning documents that outlined specific time frames with milestones associated with conducting the assessment—including time allotted for conducting the assessment itself, soliciting input from internal and external stakeholders, and drafting the report prior to circulation for final review. The lack of such a schedule may have been a contributing factor to the delay in DOD’s submission. The report was required to be submitted to the congressional defense committees no later than the date on which the President’s budget request for the next fiscal year was provided to Congress, which was February 13, 2012; however, the report was submitted on July 20, 2012. DOD’s approach for the latest Quadrennial Roles and Missions Review also differed from the department’s approach for preparing the 2009 Quadrennial Roles and Missions Review. For the 2009 effort, DOD developed and documented guidance in a “terms of reference” that included, among other things, a methodological approach, time frames with deliverables, and a list of offices within DOD responsible for conducting portions of the assessment. However, no similar document was developed for the 2012 roles and missions assessment. According to officials from the Office of the Under Secretary of Defense for Policy, the 2009 Quadrennial Roles and Missions Review occurred before DOD had to address the challenges of the current fiscal climate, and as a result there might have been more interest in conducting the review. In contrast, in preparing the 2012 roles and missions review, the officials told us that senior DOD leadership had recently considered these difficult issues in preparing the Defense Strategic Guidance, and so preferred to rely on those recent discussions rather than conduct a separate review. According to DOD officials, the primary reason that they did not perform a separate effort to examine roles and missions is that the statutory assessment and reporting requirements of the Quadrennial Roles and Missions Review are largely duplicative of the review conducted for the Defense Strategic Guidance, as well as other reviews and processes. DOD officials stated that identifying core missions as well as core competencies and capabilities is also mirrored in the requirements for the Quadrennial Defense Review. Additionally, according to DOD officials, the annual budget process is designed to identify and assign capabilities within each service’s budget request, eliminate capability and capacity gaps, and eliminate unnecessary duplication among DOD components. However, by not conducting a specific, comprehensive roles and missions assessment, DOD missed an opportunity to examine these issues through a broad, department-wide approach, rather than through processes established for other purposes.
Strategic assessments of the roles, missions, and needed competencies and capabilities within DOD—whether conducted through the Quadrennial Roles and Missions Review or some other strategic-level, department-wide assessment—can be used to inform the department and strengthen congressional oversight. Given the complex security challenges and increased fiscal pressures that the department faces, such assessments are important to help the department prioritize human capital and other investment needs across the many components within the department. Without a comprehensive roles and missions assessment, documented in a sufficiently detailed report, DOD missed an opportunity to lay the groundwork for the Quadrennial Defense Review and other department-wide reviews, allocate responsibilities among the many components within DOD, prioritize key capabilities and competencies, inform the department’s investments and budget requests, identify any unnecessary duplication resulting in cost savings through increased efficiency and effectiveness, and aid congressional oversight. A comprehensive process that outlined a planned approach for addressing all statutory requirements of the roles and missions assessment; involved key internal stakeholders; offered an opportunity for key external stakeholders, such as Congress, to provide input regarding the department’s approach; and set clear time frames to gauge progress for the assessment, would have helped provide DOD with reasonable assurance that its resulting assessment of roles and missions was comprehensive and that DOD was positioned to provide such a sufficiently detailed report to Congress. To assist DOD in conducting any future comprehensive assessments of roles and missions that reflect appropriate statutory requirements, we recommend that the Secretary of Defense develop a comprehensive process that includes a planned approach, including the principles or assumptions used to inform the assessment, that addresses all statutory requirements; the involvement of key DOD stakeholders, such as the armed services, Joint Staff, and other officials within the department; an opportunity to identify and involve appropriate external stakeholders, to provide input to inform the assessment; and time frames with milestones for conducting the assessment and for reporting on its results. In written comments on a draft of this report, DOD partially concurred with the report’s recommendation to develop a comprehensive process to assist in conducting future assessments of roles and missions. DOD’s comments are summarized below and reprinted in appendix II. In its comments, DOD agreed that it is important to make strategy-driven decisions regarding its missions and associated competencies and capabilities, and to assign and clarify to its components their roles and responsibilities. DOD noted that, in the context of dynamic strategic and budgetary circumstances and increasing fiscal uncertainty, the department leveraged its strategic planning and annual budget processes, which resulted in the release of the 2012 Defense Strategic Guidance and associated mission, capability, and force structure priorities to inform and address the 2012 Quadrennial Roles and Missions Review. 
Specifically, DOD commented on the four recommended principles of a comprehensive process: Regarding a planned approach, the department stated that it determined that using other, ongoing strategic planning efforts to complete the roles and missions assessment met the review’s statutory requirement. As noted in the report, there were some benefits to DOD’s taking advantage of other processes. However, DOD did not document its approach for identifying the components within the department responsible for providing the core competencies and capabilities, or identify any capability gaps or unnecessary duplication. A documented, planned approach provides a framework for understanding the strategic direction and the assumptions used to identify, analyze, assess, and address the statutory requirements of the assessment. Regarding DOD stakeholders, the department stated that the processes it used did include the involvement of key DOD stakeholders, but acknowledged that formally documenting the process for obtaining stakeholder input would have clarified the role of the Chairman of the Joint Chiefs of Staff. Documenting the decision regarding the Chairman’s role would have provided some clarification; however, as noted in the report, it is also important to obtain and document input from all key internal stakeholders—including the armed services, agencies, offices, and commands within the department. Regarding external stakeholders, the department stated that it did seek limited additional clarification from Congress prior to conducting the roles and missions assessment, but did not seek formal input to the assessment from other federal agencies because it relied on the external stakeholder input obtained during the development of the Defense Strategic Guidance. However, during the course of our review, a DOD official told us there was limited involvement from other federal agency partners during the review for the Defense Strategic Guidance. As noted in the report, not obtaining input from appropriate external stakeholders—such as the Department of State, Department of Homeland Security, and Office of the Director of National Intelligence—when assessing the capabilities and competencies hindered DOD from having the additional support for the assessment of its roles and missions. Regarding time frames and milestones, the department stated that the development of time frames just for the roles and missions assessment would have been largely duplicative of existing time frames for other efforts, including the development of the Defense Strategic Guidance and the annual budget process. However, developing a schedule with time frames would have been useful to keep the roles and missions assessment on track and aid the department in submitting its report by the statutory deadline. Developing a comprehensive process for its roles and missions assessment—a process that outlined the department’s planned approach for addressing all statutory requirements, involved key internal stakeholders, offered an opportunity for Congress and other key external stakeholders to provide input, and set clear time frames to gauge progress for the assessment—would have helped provide DOD with reasonable assurance that its resulting assessment was comprehensive. The department’s approach resulted in a report that was insufficiently detailed; therefore, we continue to believe the recommendation is valid to guide future roles and missions reviews.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Policy; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, of the Navy, and of the Air Force; the Commandant of the Marine Corps; DOD’s Director of Administration and Management; and the Director of the Office of Management and Budget. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or PendletonJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Department of Defense (DOD) is required to regularly assess and report on its roles and missions in the Quadrennial Roles and Missions Review. The most recently completed Quadrennial Roles and Missions Review occurred amid a series of strategy and policy reviews that DOD has undertaken over the past 6 years, including the first Quadrennial Roles and Missions Review conducted in 2009. Figure 1 provides a timeline of the issuance of select DOD strategic-level reports and other documents that contain roles and missions-related information. The National Defense Strategy provides the foundation and strategic framework for much of the department’s strategic guidance. Specifically, it addresses how the military services plan to fight and win America’s wars and describes how DOD plans to support the objectives outlined in the President’s National Security Strategy. It also provides a framework for other DOD strategic guidance related to deliberate planning, force development, and intelligence. Further, the National Defense Strategy informs the National Military Strategy and describes plans to support the objectives outlined in the President’s National Security Strategy. By law, DOD is required to conduct the Quadrennial Defense Review every 4 years to determine and express the nation’s defense strategy and establish a defense program for the next 20 years. The review is to comprise a comprehensive examination of the national defense strategy, force structure, force modernization plans, infrastructure, budget planning, and other elements of the defense program and policies of the United States. The Quadrennial Defense Review also includes an evaluation by the Secretary of Defense and the Chairman of the Joint Chiefs of Staff of the military’s ability to successfully execute its missions. The latest Quadrennial Defense Review was issued in March 2014. In addition to these strategic reviews conducted at DOD, both the Department of Homeland Security and the Department of State released strategic reviews that provide a strategic framework to guide the activities to secure the homeland and to provide a blueprint for diplomatic and development efforts. The Ballistic Missile Defense Review, released in February 2010, is a review conducted pursuant to guidance from the President and the Secretary of Defense, while also addressing the statutory requirement to assess U.S. ballistic missile defense policy and strategy. This review evaluated the threats posed by ballistic missiles and developed a missile defense posture to address current and future challenges. Specifically, this review sought to align U.S.
missile defense posture with near-term regional missile threats and sustain the ability to defend the homeland against limited long-range missile attack. The Nuclear Posture Review is a statutorily mandated review that establishes U.S. nuclear policy, strategy, capabilities and force posture for the next 5 to 10 years. The latest review was released by DOD in April 2010 and provided a roadmap for implementing the President’s policy for reducing nuclear risks to the United States and the international community. Specifically, the 2010 report identified long-term modernization goals and requirements, including sustaining a safe, secure, and effective nuclear arsenal through the life extension of existing nuclear weapons; increasing investments to rebuild and modernize the nation’s nuclear infrastructure; and strengthening the science, technology, and engineering base. The National Security Strategy describes and discusses the worldwide interests, goals, and objectives of the United States that are vital to its national security and calls for a range of actions to implement the strategy. The most recent National Security Strategy, released by the President in May 2010, addressed, among other things, how the United States would strengthen its global leadership position; disrupt, dismantle, and defeat al Qaeda; and achieve economic recovery at home and abroad. This strategy also emphasized the need for a whole-of-government approach with interagency engagement to ensure the security of the American people and the protection of American interests. The National Security Strategy is to be used to inform the National Defense Strategy and the National Military Strategy. DOD Directive 5100.01 established the functions of the department and its major components. DOD reissued the directive in 2010 after the first Quadrennial Roles and Missions Review included what DOD describes as a thorough review of the directive. DOD updated the prior directive to incorporate emerging responsibilities in areas such as special operations and cyberspace operations and reflect other changes in the department’s organization over the preceding decade. The Space Posture Review is a statutorily mandated review of U.S. national security space policy and objectives, conducted jointly by the Secretary of Defense and the Director of National Intelligence. Through coordination with the Office of the Director of National Intelligence, DOD released the National Security Space Strategy in January 2011. The strategy is derived from principles and goals found in the National Space Policy and builds on the strategic approach laid out in the National Security Strategy. Specifically, the strategy’s stated objectives for national space security include strengthening safety, stability, and security in space; maintaining and enhancing the strategic national security advantages afforded to the United States by space; and engaging the space industrial base that supports U.S. national security. National Military Strategy and the Joint Strategic Capabilities Plan The National Military Strategy and the Joint Strategic Capabilities Plan, along with other strategic documents, provide DOD with guidance and instruction on military policy, strategy, plans, forces and resource requirements and allocations essential to successful execution of the National Security Strategy and other Presidential Directives.
Specifically, the National Military Strategy, last issued in 2011, provides focus for military activities by defining a set of interrelated military objectives from which the service chiefs and combatant commanders identify desired capabilities and against which the Chairman of the Joint Chiefs of Staff assesses risk. This strategy defines the national military objectives, describes how to accomplish these objectives, and addresses the military capabilities required to execute the strategy. The Secretary of Defense’s National Defense Strategy informs the National Military Strategy, which is developed by the Chairman of the Joint Chiefs of Staff. In addition, the Joint Strategic Capabilities Plan is to provide guidance to the combatant commanders, the chiefs of the military services, and other DOD agencies to accomplish tasks and missions based on current capabilities. It also is to serve as the link between other strategic guidance and the joint operation planning activities. Biennial Review of DOD Agencies and Field Activities By law, DOD is required to conduct a review every 2 years of the services and supplies that each DOD agency and field activity provides. The Office of the Director of Administration and Management in the Office of the Secretary of Defense has led this biennial review. The goals are to determine whether DOD needs each of these agencies and activities, or whether it is more effective, economical, or efficient for the armed services to assume the responsibilities. However, unlike the Quadrennial Roles and Missions Review, which assesses the roles of all DOD components, the biennial review focuses on DOD agencies and field activities. The Secretary of Defense recently directed that the biennial review should also include an assessment of the offices within the Office of the Secretary of Defense. DOD issued the latest report on this biennial review in April 2013. The Unified Command Plan provides guidance to combatant commanders and establishes their missions, responsibilities, force structure, geographic area of responsibility, and other attributes. Section 161 of Title 10 of the U.S. Code tasks the Chairman of the Joint Chiefs of Staff to conduct a review of the plan not less often than every 2 years and submit recommended changes to the President through the Secretary of Defense. The Unified Command Plan was last updated in 2011. Sustaining U.S. Global Leadership: Priorities for 21st Century Defense The Sustaining U.S. Global Leadership: Priorities for 21st Century Defense report (also referred to as the Defense Strategic Guidance), released in January 2012, was directed by the President to identify the strategic interests of the United States. The document states that it was an assessment of the defense strategy prompted by the changing geopolitical environment and fiscal pressures. The Defense Strategic Guidance was developed by senior officials from DOD—including the Office of the Secretary of Defense, the Joint Staff, the armed services, and the combatant commands—and the White House. The document outlines security challenges the United States faces and is intended to guide the development of the Joint Force through 2020 and during a period of anticipated fiscal constraints.
The Defense Strategic Guidance identified 10 primary missions of the armed forces: counter terrorism and irregular warfare; deter and defeat aggression; project power despite anti-access / area denial challenges; counter weapons of mass destruction; operate effectively in cyberspace and space; maintain a safe, secure, and effective nuclear deterrent; defend the Homeland and provide support to civil authorities; provide a stabilizing presence; conduct stability and counterinsurgency operations; and conduct humanitarian, disaster relief, and other operations. It also identified several principles to guide the force and program development necessary to achieve these missions. For example, it noted the need for the department to continue to reduce costs through reducing the rate of growth of manpower costs and identifying additional efficiencies. In March 2013, the Secretary of Defense directed the completion of a Strategic Choices Management Review. The Strategic Choices Management Review was to examine the potential effect of additional, anticipated budget reductions on the department and develop options for performing the missions in the Defense Strategic Guidance. Specifically, the review was to inform how the department would allocate resources when executing its fiscal year 2014 budget and preparing its fiscal year 2015 through fiscal year 2019 budget plans. According to the Secretary of Defense, the purpose of the Strategic Choices Management Review was to understand the effect of further budget reductions on the department and develop options to deal with these additional reductions. The Secretary of Defense further emphasized that producing a detailed budget blueprint was not the purpose of this review. In addition to the contact named above, key contributors to this report were Margaret Morgan and Kevin L. O’Neill, Assistant Directors; Tracy Abdo; Darreisha M. Bates; Elizabeth Curda; Leia Dickerson; Gina Flacco; Brent Helt; Mae Jones; Amie Lesser; Travis Masters; Judy McCloskey; Terry Richardson; and Sabrina Streagle.
DOD is one of the largest organizations in the world, with its budget representing over half of the U.S. federal government’s discretionary spending. According to DOD, the global security environment presents an increasingly complex set of challenges. Congress requires DOD to assess and report on its roles and missions every 4 years. In July 2012, DOD submitted its most recent Quadrennial Roles and Missions Review report. In June 2013, GAO was mandated to review DOD’s process for conducting the latest Quadrennial Roles and Missions Review. GAO evaluated the extent to which DOD developed a sufficiently detailed report and conducted a comprehensive process for assessing roles and missions. GAO compared DOD’s July 2012 report with the statutory requirements for the assessment, and compared DOD’s assessment process with key principles derived from a broad selection of principles GAO and other federal agencies have identified. The Department of Defense’s (DOD) July 2012 submission to Congress following its most recent Quadrennial Roles and Missions Review did not provide sufficiently detailed information about most of the statutorily required elements of the assessment. Specifically, DOD’s July 2012 submission included the results of a 2011 review that led to the January 2012 release of a new strategic guidance document (hereinafter referred to as the Defense Strategic Guidance) as well as the Quadrennial Roles and Missions Review report. Although DOD is not statutorily required to report on all elements of the assessment, the submission that it provided to Congress was lacking key information. A key principle for information quality indicates that information presented to Congress should be clear and sufficiently detailed; however, neither the Defense Strategic Guidance nor the Quadrennial Roles and Missions Review included sufficiently detailed information about certain key elements of the roles and missions assessment. For example, while the submitted documents identify the core missions of the armed services and provide some information on capabilities associated with these missions, neither document provides other information required by the roles and missions assessment—including identifying the DOD components responsible for providing the identified core competencies and capabilities and identifying plans for addressing any unnecessary duplication or capability gaps. DOD’s process for assessing roles and missions missed key principles associated with effective and comprehensive assessments. Specifically, DOD limited its process to leveraging the prior review that resulted in the Defense Strategic Guidance; by doing so its process did not include the following: A planned approach: DOD did not develop or document a planned approach that included the principles or assumptions used to inform the assessment. Internal stakeholder involvement: DOD included limited internal stakeholder involvement. For example, DOD gave the armed services a limited opportunity to review the draft prior to its release. Identification and involvement of external stakeholders: DOD obtained limited input from relevant external stakeholders, such as Congress, on the specific guidance and direction they expected of the roles and missions assessment. Time frames: DOD did not develop a schedule to gauge progress for conducting the assessment and completing the report, which may have contributed to the report being provided to Congress over 5 months late.
DOD officials stated that the primary reason that they did not perform a separate roles and missions review is that the statutory requirements were duplicative of other reviews and processes, such as the Defense Strategic Guidance. However, by not conducting a comprehensive assessment, DOD missed an opportunity to conduct a department-wide examination of roles and missions. Instead, by relying on processes established for other purposes, DOD has limited assurance that it has fully identified all possible cost savings that can be achieved through the elimination of unnecessary duplication and that it has positioned itself to report clear and sufficient information about the statutorily required assessment to Congress. GAO recommends that, in conducting future assessments of roles and missions, DOD develop a comprehensive process that includes a planned approach, the involvement of key internal and external stakeholders, and time frames. DOD partially concurred, stating that it had leveraged other processes. GAO maintains that the roles and missions report was insufficiently detailed and continues to believe the recommendation is valid, as discussed in the report.
The national information and communications networks consist of a collection of mostly privately owned networks that are critical to the nation’s security, economy, and public safety. The communications sector operates these networks and is comprised of public- and private-sector entities that have a role in, among other things, the use, protection, or regulation of the communications networks and associated services (including Internet routing). For example, private companies, such as AT&T and Verizon, function as service providers, offering a variety of services to individual and enterprise end users or customers that are positioned at the ends of the network, or the “last mile,” as referred to by industry. The Internet is a vast network of interconnected networks. It is used by governments, businesses, research institutions, and individuals around the world to communicate, engage in commerce, do research, educate, and entertain. The core networks transport a high volume of aggregated traffic substantial distances or between different service providers or “carriers.” These networks connect regions within the United States as well as all continents except Antarctica, and use submarine fiber optic cable systems, land-based fiber and copper networks, and satellites. In order to transmit data, service providers manage and control core infrastructure elements with numerous components, including signaling systems, databases, switches, routers, and operations centers. Multiple service providers, such as AT&T and Verizon, operate distinct core networks traversing the nation that interconnect with each other at several points. End users generally do not connect directly with the core networks. Access networks are primarily local portions of the network that connect end users to the core networks or directly to each other and enable them to use services such as local and long distance phone calling, video conferencing, text messaging, e-mail, and various Internet-based services. These services are provided by various technologies such as satellites, including fixed and portable systems; wireless, including cellular base stations; cable, including video, data, and voice systems, and cable system end offices; and wireline, including voice and data systems and end offices. Communications traffic between two locations may originate and terminate within an access network without connecting to core networks (e.g., local phone calling within the wireline network). Communications traffic between different types of access networks (e.g., between the wireline and wireless networks) may use core networks to facilitate the transmission of traffic. Individual and enterprise users connect to access networks through various devices (e.g., wired phones, cell phones, and computers). Figure 1 depicts the interconnection of user devices and services, access networks, and core networks. Figure 2 depicts the path that a single communication can take to its final destination. Aggregate traffic is normally the multimedia (voice, data, video) traffic combined from different service providers, or carriers, to be transported at high speed through the core networks. The nation’s communications infrastructure also provides the networks that support the Internet.
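To make the relationship between access and core networks concrete, the following minimal Python sketch models the path a single communication might take, in the spirit of figure 2. The network names, interconnection points, and users are invented for illustration and do not describe any provider's actual topology; the only point is that traffic between users on the same access network can stay local, while traffic between users on different access networks may traverse one or more carriers' core networks.

```python
from collections import deque

# Illustrative topology only: which access network each user connects to,
# and which networks interconnect. Real provider topologies are far larger.
USER_ACCESS = {
    "alice_cell_phone": "wireless_access_A",
    "bob_home_phone": "wireline_access_B",
    "carol_cell_phone": "wireless_access_A",
}

# Interconnections between networks (access <-> core, core <-> core).
LINKS = {
    "wireless_access_A": {"core_carrier_1"},
    "wireline_access_B": {"core_carrier_2"},
    "core_carrier_1": {"wireless_access_A", "core_carrier_2"},
    "core_carrier_2": {"wireline_access_B", "core_carrier_1"},
}

def communication_path(src_user, dst_user):
    """Return the sequence of networks a communication traverses (shortest hop path)."""
    src, dst = USER_ACCESS[src_user], USER_ACCESS[dst_user]
    if src == dst:
        # Traffic can originate and terminate within a single access network.
        return [src]
    # Breadth-first search across the interconnected networks.
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        for nxt in LINKS.get(path[-1], ()):
            if nxt in seen:
                continue
            if nxt == dst:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no route found in this toy topology

if __name__ == "__main__":
    # Same access network: stays local, no core networks involved.
    print(communication_path("alice_cell_phone", "carol_cell_phone"))
    # Different access networks: traverses two carriers' core networks.
    print(communication_path("alice_cell_phone", "bob_home_phone"))
```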
In order for data to move freely across communications networks, Internet network operators employ voluntary, self-enforcing rules called protocols. Two sets of protocols—the Domain Name System (DNS) and the Border Gateway Protocol (BGP)—are essential for ensuring the uniqueness of each e-mail and website address and for facilitating the routing of data packets between autonomous systems, respectively. DNS provides a globally distributed hierarchical database for mapping unique names to network addresses. It links e-mail and website addresses with the underlying numerical addresses that computers use to communicate with each other. It translates names, such as http://www.house.gov, into numerical addresses, such as 208.47.254.18, that computers and other devices use to identify each other on the network, and back again, in a process invisible to the end user. This process relies on a hierarchical system of servers, called domain name servers, which store data linking address names with address numbers. These servers are owned and operated by many public and private sector organizations throughout the world. Each of these servers stores a limited set of names and numbers. They are linked by a series of root servers that coordinate the data and allow users’ computers to find the server that identifies the sites they want to reach. Domain name servers are organized into a hierarchy that parallels the organization of the domain names (such as “.gov”, “.com”, and “.org”). Figure 3 below provides an example of how a DNS query is turned into a number. BGP is used by routers located at network nodes to direct traffic across the Internet. Typically, routers that use this protocol maintain a routing table that lists all feasible paths to a particular network. They also determine metrics associated with each path (such as cost, stability, and speed) and follow a set of constraints (e.g., business relationships) to choose the best available path for forwarding data. This protocol is important because it binds together many autonomous networks that comprise the Internet (see fig. 4). Like those affecting other cyber-reliant critical infrastructure, threats to the communications infrastructure can come from a wide array of sources. These sources include corrupt employees, criminal groups, hackers, and foreign nations engaged in espionage and information warfare. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary gain or political advantage, among others. Table 1 describes the sources in more detail. These sources may make use of various cyber techniques, or exploits, to adversely affect communications networks, such as denial-of-service attacks, phishing, passive wiretapping, Trojan horses, viruses, worms, and attacks on the information technology supply chains that support the communications networks. Table 2 provides descriptions of these cyber exploits. In addition to cyber-based threats, the nation’s communications networks also face threats from physical sources. Examples of these threats include natural events (e.g., hurricanes or flooding) and man-made disasters (e.g., terrorist attacks), as well as unintentional man-made outages (e.g., a backhoe cutting a communication line).
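The two protocol roles described above, DNS translating human-readable names into numerical addresses and BGP selecting a best available path from a table of feasible routes, can be illustrated with a short Python sketch. The DNS portion simply asks the operating system's resolver for the addresses behind a name; the BGP portion is a toy selection over invented route entries, with the destination prefix, autonomous system (AS) numbers, and local-preference values assumed for the example rather than drawn from any real routing table.

```python
import socket

# --- DNS: translate a name into the numerical addresses devices actually use. ---
def resolve(name):
    """Return the IP addresses the local DNS resolver reports for a host name."""
    infos = socket.getaddrinfo(name, None)          # delegates to the system resolver
    return sorted({info[4][0] for info in infos})   # keep unique addresses only

# --- BGP-style selection: pick the "best" feasible path to a destination prefix. ---
# Each candidate route lists the AS numbers it traverses plus a local-preference
# value set by policy. All values here are invented for illustration.
CANDIDATE_ROUTES = [
    {"prefix": "203.0.113.0/24", "as_path": [65010, 65020, 65030], "local_pref": 100},
    {"prefix": "203.0.113.0/24", "as_path": [65040, 65030],        "local_pref": 100},
    {"prefix": "203.0.113.0/24", "as_path": [65050, 65060, 65030], "local_pref": 200},
]

def best_route(routes):
    """Prefer higher local preference (policy), then the shortest AS path."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

if __name__ == "__main__":
    print(resolve("www.house.gov"))       # a list of numerical addresses
    print(best_route(CANDIDATE_ROUTES))   # the policy-preferred route via AS 65050
```

The selection rule in the sketch (apply policy preference first, then compare path length) mirrors, in simplified form, the idea that BGP routers follow constraints such as business relationships before comparing metrics to choose a forwarding path.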
While the private sector owns and operates the nation’s communications networks and is primarily responsible for protecting these assets, federal law and policy establish regulatory and support roles for the federal government in regard to the communications networks. In this regard, federal law and policy call for critical infrastructure protection activities that are intended to enhance the cyber and physical security of both the public and private infrastructures that are essential to national security, national economic security, and public health and safety. The federal role is generally limited to sharing information, providing assistance when asked by private-sector entities, and exercising regulatory authority when applicable. As part of their efforts in support of the security of communications networks, FCC, DHS, DOD, and Commerce have taken a variety of actions, including ones related to developing cyber policy and standards, securing Internet infrastructure, sharing information, supporting national security and emergency preparedness (NS/EP), and promoting sector protection efforts. FCC is a U.S. government agency that regulates interstate and international communications by radio, television, wire, satellite, and cable throughout the United States. Its regulations include requirements for certain communications providers to report on the reliability and security of communications infrastructures. These include disruption-reporting requirements for outages that are defined as a significant degradation in the ability of an end user to establish and maintain a channel of communications as a result of failure or degradation in the performance of a communications provider’s network. The Commission’s Public Safety and Homeland Security Bureau has primary responsibility for assisting providers in ensuring the security and availability of the communications networks. The bureau also serves as a clearinghouse for public safety communications information and emergency response issues. In addition, its officials serve as Designated Federal Officers on the Communications Security, Reliability, and Interoperability Council. The Communications Security, Reliability, and Interoperability Council is a federal advisory committee whose mission is to provide recommendations to FCC to help ensure, among other things, secure and reliable communications systems, including telecommunications, media, and public safety systems. The council has provided recommendations in the form of voluntary best practices that provide companies with guidance aimed at improving the overall reliability, interoperability, and security of networks. Specifically, it is composed of 11 working groups that consist of experts from industry and other federal agencies. The working groups focus on various related topics, including those related to network security management, as well as the security of the Border Gateway Protocol and the Domain Name System. The working groups develop recommendations through industry cooperation and voluntary agreements. For example, in March 2012, the commission announced the voluntary commitments by the nation’s largest Internet service providers, including AT&T and Verizon, to adopt the council’s recommendations aimed at better securing their communications networks. The recommendations covered a variety of security practices, including those related to the security of the Domain Name System and BGP.
The key FCC and council efforts related to the security of the communications sector are detailed in table 3 below. DHS is the principal federal agency to lead, integrate, and coordinate the implementation of efforts to protect cyber-critical infrastructures. DHS’s role in critical infrastructure protection is established by law and policy. The Homeland Security Act of 2002, Homeland Security Presidential Directive 7, and the National Infrastructure Protection Plan establish a cyber protection approach for the nation’s critical infrastructure sectors— including communications—that focuses on the development of public- private partnerships and establishment of a risk management framework. These policies establish critical infrastructure sectors, including the communications sector; assign agencies to each sector (sector-specific agencies), including DHS as the sector lead for the communications and information technology sectors; and encourage private sector involvement through the development of sector coordinating councils, such as the Communications Sector Coordinating Council, and information-sharing mechanisms, such as the Communications Information Sharing and Analysis Center. Additionally, DHS has a role, along with agencies such as DOD, in regard to national security and emergency preparedness (NS/EP) communications that are intended to increase the likelihood that essential government and private-sector individuals can complete critical phone calls and organizations can quickly restore service during periods of disruption and congestion resulting from natural or man-made disasters. In particular, Executive Order No.13618 established an NS/EP Communications Executive Committee to serve as an interagency forum to address such communications matters for the nation. Among other things, the committee is to advise and make policy recommendations to the President on enhancing the survivability, resilience, and future architecture for NS/EP communications. The Executive Committee is composed of Assistant Secretary-level or equivalent representatives designated by the heads of the Departments of State, Defense, Justice, Commerce, and Homeland Security, the Office of the Director of National Intelligence, the General Services Administration, and the Federal Communications Commission, as well as such additional agencies as the Executive Committee may designate. The committee is chaired by the DHS Assistant Secretary for the Office of Cybersecurity and Communications and the DOD Chief Information Officer, with administrative support for the committee provided by DHS. To fulfill DHS’s cyber-critical infrastructure protection and NS/EP-related missions, the Office of Cybersecurity and Communications within the National Protection and Programs Directorate is responsible for, among other things, ensuring the security, resiliency, and reliability of the nation’s cyber and communications infrastructure, implementing a cyber-risk management program for protection of critical infrastructure, and planning for and providing national security and emergency preparedness communications to the federal government. The office is made up of the following five subcomponents that have various responsibilities related to DHS’s overarching cybersecurity mission: Stakeholder Engagement and Cyber Infrastructure Resilience division, among other things, is responsible for managing the agency’s role as the sector-specific agency for the communications sector. 
Office of Emergency Communications is responsible for leading NS/EP and emergency communications in coordination and cooperation with other DHS organizations. National Cybersecurity and Communications Integration Center is the national 24-hours-a-day, 7-days-a-week operations center that is to provide situational awareness, multiagency incident response, and strategic analysis for issues related to cybersecurity and NS/EP communications. The center is comprised of numerous co-located, integrated elements including the National Coordinating Center for Telecommunications, the U.S. Computer Emergency Readiness Team (US-CERT), and the Industrial Control Systems Cyber Emergency Response Team. Federal Network Resilience division is responsible for collaborating with departments and agencies across the federal government to strengthen the operational security of the “.gov” networks. As part of those efforts, the division leads the DHS initiative related to DNSSEC. Network Security Deployment division is responsible for designing, developing, acquiring, deploying, sustaining, and providing customer support for the National Cybersecurity Protection System. Four of these subcomponents have taken specific actions with respect to the communications networks, which are detailed in table 4 below. Under the National Infrastructure Protection Plan, DHS’s Office of Cybersecurity and Communications, as the sector-specific agency for the communications and information technology sectors, is responsible for leading federal efforts to support sector protection efforts. As part of the risk management process for protecting the nation’s critical infrastructure, including the protection of the cyber information infrastructure, the National Infrastructure Protection Plan recommends that outcome-oriented metrics be established that are specific and clear as to what they are measuring, practical or feasible in that needed data are available, built on objectively measurable data, and align to sector priorities. These metrics are to be used to determine the health and effectiveness of sector efforts and help drive future investment and resource decisions. DHS and its partners have previously identified the development of outcome-oriented metrics as part of the process to be used to manage risks to the nation’s critical communications infrastructure. For example, in 2010, DHS and its communications sector partners identified preserving the overall health of the core network as the sector’s first priority at the national level. They also defined a process for developing outcome-oriented sector metrics that would map to their identified goals and would yield quantifiable information (when available). Additionally, DHS and its information technology sector partners stated that they would measure their cyber protection efforts related to DNS and BGP in terms of activities identified in 2009 to assist sector partners in mitigating risks to key sector services, such as providing DNS functionality and Internet routing services. In 2010, they noted that implementation plans would be developed for each of the activities and outcome-based metrics would be used to monitor the status and effectiveness of the activities. However, DHS and its partners have not yet developed outcome-based metrics related to the cyber-protection activities for the core and access networks, DNS functionality, and Internet routing services.
For the communications sector, DHS officials stated that the sector had recently completed the first part of a multiphased risk assessment process that included identification of cyber risks. The officials further stated that efforts are under way to prioritize the identified risks and potentially develop actions to mitigate them. However, DHS officials stated that outcome-oriented metrics had not yet been established and acknowledged that time frames for developing such metrics had not been agreed to with their private sector partners. For the information technology sector, DHS officials noted that the information technology sector’s private sector partners had decided to focus on progress-related metrics (which report the status of mitigation development activities as well as implementation decisions and progress) to measure the effectiveness of sector activities to reduce risk across the entire sector and periodically re-examine their initial risk evaluation based on perceived threats facing the sector. While these progress-related metrics are part of the information technology sector’s planned measurement activities, the sector’s plans acknowledge that outcome-based metrics are preferable to demonstrate effectiveness of efforts. Until metrics related to efforts to protect core and access networks, DNS, and BGP are fully developed, implemented, and tracked by DHS, federal decision makers will have less insight into the effectiveness of sector protection efforts. Within DOD, the Office of the Chief Information Officer (CIO) has been assigned the responsibility for implementing Executive Order 13618 requirements related to NS/EP communication functions. As previously described, the CIO (along with the Assistant Secretary for Cybersecurity and Communications in DHS) co-chairs the NS/EP Communications Executive Committee established in Executive Order 13618. The CIO directs, manages, and provides policy guidance and oversight for DOD’s information and the information enterprise, including matters related to information technology, network defense, network operations, and cybersecurity. Table 5 describes the department’s efforts in relation to this executive order. Federal law and policy also establish a role for the Department of Commerce (Commerce) related to the protection of the nation’s communications networks. For example, Commerce conducts industry studies assessing the capabilities of the nation’s industrial base to support the national defense. In addition, the department’s National Telecommunications and Information Administration (NTIA) was established as the principal presidential adviser on telecommunications and information policies. Further, Commerce’s National Institute of Standards and Technology (NIST) is to, among other things, cooperate with other federal agencies, industry, and other private organizations in establishing standard practices, codes, specifications, and voluntary consensus standards. Commerce also has a role in ensuring the security and stability of DNS. Prompted by concerns regarding who has authority over DNS, along with the stability of the Internet as more commercial interests began to rely on it, the Clinton administration issued an electronic commerce report in July 1997 that identified the department as the lead agency to support private efforts to address Internet governance. 
In June 1998, NTIA issued a policy statement (known as the White Paper) that stated it would enter into an agreement with a not-for-profit corporation formed by private sector Internet stakeholders for the technical coordination of DNS. In addition, Commerce created the Internet Policy Task Force in August 2011 to, among other things, develop and maintain department-wide policy proposals on a range of global issues that affect the Internet, including cybersecurity. While NIST has been identified as the Commerce lead bureau for cybersecurity, the task force is to leverage the expertise of other Commerce bureaus, such as the Bureau of Industry and Security and NTIA. Commerce components also carry out functions related to the security of the nation’s communications networks. The Bureau of Industry and Security conducted an industrial study to examine the operational and security practices employed by network operators in the nation’s communications infrastructure. In addition, NTIA manages agreements with the Internet Corporation for Assigned Names and Numbers (ICANN) and VeriSign, Inc., through which changes are made to the authoritative root zone file. Also, NIST participates in open, voluntary, industry-led, consensus-based, standards-setting bodies that design and develop specifications for network security technologies, including those used in the nation’s communications networks (such as DNS and BGP) as well as in industry technical forums for the purpose of promulgating the deployment of such new technologies. Table 6 describes some of the key efforts of Commerce as they relate to the cybersecurity of the nation’s communications networks. No cyber incidents affecting the core and access networks have been reported by communications networks owners and operators through three established reporting mechanisms from January 2010 to October 2012. To report incidents involving the core and access communications networks to the federal government, communication networks operators can use reporting mechanisms established by FCC and DHS to share information on outages and incidents: FCC’s Network Outage Reporting System is a web-based filing system that communications providers use to submit detailed outage reports to FCC. In turn, FCC officials stated that the agency uses the reported outage data to develop situational awareness of commercial network performance as well as to aid the commission in influencing and developing best practices regarding incidents. DHS’s Network Security Information Exchange is an information- sharing forum comprised of representatives from the communications and information technology sectors that meet bimonthly to voluntarily share communications-related incidents, among other things. DHS’s National Cybersecurity and Communications Integration Center, which includes the National Coordinating Center, US-CERT, and the Industrial Control Systems Cyber Emergency Response Team, is used to share information about threats, vulnerabilities, and intrusions related to communications networks and the sector as a whole. Communications and information technology providers can voluntarily report threats, vulnerabilities, and intrusions to the center. Although these mechanisms for reporting exist, available information showed that no cyber-based incidents involving the core and access communication networks had been reported using these mechanisms to the federal government from January 2010 to October 2012. 
Specifically, of the over 35,000 outages reported to FCC during this time period, none were related to traditional cyber threats (e.g., botnets, spyware, viruses, and worms). FCC officials stated that there could be an increase in the presence of cyber-related outages reported in the future as the Voice- over-Internet-Protocol reporting requirements are enforced. Further, DHS Office of Cybersecurity and Communications officials stated that no cyber incidents related to the core and access networks were reported to them during January 2010 to October 2012. For example, although several incidents attributed to the communications sector were reported to DHS’s Industrial Control Systems Cyber Emergency Response Team in fiscal year 2012, none of these incidents involved core and access networks. Our review of reports published by information security firms and communication network companies also indicated that no cyber incidents related to the core and access networks were publicly reported from January 2010 to October 2012. Officials within FCC and the private sector attributed the lack of incidents to the fact that the communications networks provide the medium for direct attacks on consumer, business, and government systems—and thus these networks are less likely to be targeted by a cyber attack themselves. In addition, Communications Information Sharing and Analysis Center officials expressed greater concern about physical threats (such as natural and man-made disasters, as well as unintentional man-made outages) to communications infrastructure than cyber threats. DOD, in its role as the sector-specific agency for the defense industrial base critical infrastructure sector, established two pilot programs to enhance the cybersecurity of sector companies and better protect unclassified department data residing on those company networks. The Deputy Secretary of Defense established the Cyber Security/Information Assurance program under the department’s Office of the Chief Information Officer to address the risk posed by cyber attacks against sector companies. The Opt-In Pilot was designed to build upon the Cyber Security/Information Assurance Program and, according to department officials, established a voluntary information-sharing process for the department to provide classified network security indicators to Internet service providers. In August 2012, we reported on these pilot programs as part of our study to identify DOD and private sector efforts to protect the defense industrial base from cybersecurity threats. Our report described these programs in detail, including challenges to their success. For example, one challenge noted by defense industrial base company officials was that the quality of the threat indicators provided by the federal government as part of the Opt-In pilot had not met their needs. In addition, the quality of the pilot was affected by the lack of a mechanism for information sharing among government and private stakeholders. The report also made recommendations to DOD and DHS to better protect the defense industrial base from cyber threats. (The August 2012 report was designated as official use only and is not publicly available.) Using information in that report, we identified six attributes that were implemented to varying extents as part of the pilot programs (see table 7). 
These attributes were utilized by DOD and the defense industrial base companies to protect their sector from cyber threats and could inform the cyber protection efforts of the communications sector. Agreements: Eligible defense industrial base companies who wanted to participate in these pilots enter into an agreement with the federal government. This agreement establishes the bilateral cyber- information-sharing process that emphasizes the sensitive, nonpublic nature of the information shared which must be protected from unauthorized use. The agreement does not obligate the participating company to change its information system environment or otherwise alter its normal conduct of cyber activities. Government sharing of unclassified and classified cyber threat information: DOD provides participating defense industrial base companies with both unclassified and classified threat information, and in return, the companies acknowledge receipt of threat information products. For any intrusions reported to DOD by the participating companies under the program, the department can develop damage assessment products, such as incident-specific and trend reports, and provide them to participating companies and DOD leadership. Feedback mechanism on government services: When a participating company receives cyber threat information from DOD, it has the option of providing feedback to the department on, among other things, the quality of the products. Government cyber analysis, mitigation, and digital forensic support: A participating company can also optionally report intrusion events. When this occurs, DOD can conduct forensic cyber analysis and provide mitigation and digital forensic support. The department can also provide on-site support to the company that reported the intrusion. Government reporting of voluntarily reported incidents: In addition to providing cyber analysis, mitigation, and cyber forensic support, DOD can report the information to other federal stakeholders, law enforcement agencies, counterintelligence agencies, and the DOD program office that might have been affected. Internet service providers deploying countermeasures based on classified threat indicators for organizations: Each Cyber Security/Information Assurance program participating company can voluntarily allow its Internet service providers to deploy countermeasures on its behalf, provided the Internet service provider has been approved to receive classified network security indicators from the U.S. government. For those providers, US-CERT collects classified threat indicators from multiple sources and provides them to the companies’ participating Internet service providers. If the Internet service provider identifies a cyber intrusion, it will alert the company that was the target of the intrusion. Providers can also voluntarily notify US-CERT about the incident, and US-CERT will share the information with DOD. In May 2012, DOD issued an interim final rule to expand the Cyber Security/Information Assurance program to all eligible defense industrial base sector companies. Additionally, the Defense Industrial Base Opt-In Pilot became the Defense Industrial Base Enhanced Cybersecurity Service (DECS) Program, and is now jointly managed by DHS and DOD. 
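Taken together, the six attributes above describe a bilateral information-sharing workflow: a signed agreement, government sharing of threat indicators, optional feedback, voluntary incident reporting with follow-on analysis, and onward reporting. The sketch below is purely illustrative; the type names, function names, and sample data are hypothetical and do not come from any DOD or DHS system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatIndicator:
    description: str
    classified: bool  # classified indicators go only to approved Internet service providers

@dataclass
class Participant:
    company: str
    agreement_signed: bool = False  # attribute 1: the framework agreement
    received: List[ThreatIndicator] = field(default_factory=list)
    feedback: List[str] = field(default_factory=list)
    reported_incidents: List[str] = field(default_factory=list)

def share_indicator(p: Participant, ind: ThreatIndicator) -> None:
    """Attribute 2: the government shares threat information with a participating company."""
    if not p.agreement_signed:
        raise ValueError("company must sign the information-sharing agreement first")
    p.received.append(ind)  # company acknowledges receipt

def give_feedback(p: Participant, note: str) -> None:
    """Attribute 3: optional feedback on the quality of government products."""
    p.feedback.append(note)

def report_incident(p: Participant, summary: str) -> List[str]:
    """Attributes 4 and 5: a voluntary incident report triggers analysis and onward reporting."""
    p.reported_incidents.append(summary)
    return ["forensic analysis", "mitigation support", "notify other federal stakeholders"]

# Hypothetical walk-through of the flow.
acme = Participant(company="Example Defense Supplier", agreement_signed=True)
share_indicator(acme, ThreatIndicator("suspicious domain seen in phishing campaign", classified=False))
give_feedback(acme, "indicator was timely but lacked context")
print(report_incident(acme, "attempted intrusion against engineering network"))
```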
In addition, on February 12, 2013, the President signed Executive Order 13636, which requires the Secretary of Homeland Security to establish procedures to expand DECS (referred to as the Enhanced Cybersecurity Services program) to all critical infrastructure sectors, including the communications sector. Considering these attributes and challenges could inform DHS’s efforts as it develops these new procedures. Securing the nation’s networks is essential to ensuring reliable and effective communications within the United States. Within the roles prescribed for them by federal law and policy, the Federal Communications Commission and the Departments of Homeland Security, Defense, and Commerce have taken actions to support the communications and information technology sectors’ efforts to secure the nation’s communications networks from cyber attacks. However, until DHS and its sector partners develop appropriate outcome-oriented metrics, it will be difficult to gauge the effectiveness of efforts to protect the nation’s core and access communications networks and critical support components of the Internet from cyber incidents. While no cyber incidents have been reported affecting the nation’s core and access networks, communications networks operators can use reporting mechanisms established by FCC and DHS to share information on outages and incidents. The pilot programs undertaken by DOD with its defense industrial base partners exhibit several attributes that could apply to the communications sector and help private sector entities more effectively secure the communications infrastructure they own and operate. As DHS develops procedures for expanding this program, considering these attributes could inform DHS’s efforts. To help assess efforts to secure communications networks and inform future investment and resource decisions, we recommend that the Secretary of Homeland Security direct the appropriate officials within DHS to collaborate with its public and private sector partners to develop, implement, and track sector outcome-oriented performance measures for cyber protection activities related to the nation’s communications networks. We provided a draft of this report to the Departments of Commerce (including the Bureau of Industry and Security, NIST, and NTIA), Defense, and Homeland Security and FCC for their review and comment. DHS provided written comments on our report (see app. II), signed by DHS’s Director of Departmental GAO-OIG Liaison Office. In its comments, DHS concurred with our recommendation and stated that it is working with industry to develop plans for mitigating risks that will determine the path forward in developing outcome-oriented performance measures for cyber protection activities related to the nation’s core and access communications networks. Although the department did not specify an estimated completion date for developing and implementing these measures, we believe the prompt implementation of our recommendation will assist DHS in assessing efforts to secure communication networks and inform future investment and resource decisions. We also received technical comments via e-mail from officials responsible for cybersecurity efforts related to communication networks at Defense, DHS, FCC, and Commerce's Bureau of Industry and Security and NTIA. We incorporated these comments where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. 
At that time, we will send copies to interested congressional committees; the Secretaries of the Departments of Commerce, Defense, and Homeland Security; the Chairman of the Federal Communications Commission; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) identify the roles of and actions taken by key federal entities to help protect the communications networks from cyber- based threats, (2) assess what is known about the extent to which cyber- incidents affecting the communications networks have been reported to the Federal Communications Commission (FCC) and Department of Homeland Security (DHS), and (3) determine if the Department of Defense’s (DOD) pilot programs to promote cybersecurity in the defense industrial base can be used in the communications sector. Our audit focused on the core and access networks of the communication network. These networks include wireline, wireless, cable, and satellite. We did not address broadcast access networks because they are responsible for a smaller volume of traffic than other networks. Additionally, we focused on the Internet support components that are critical for delivering services: the Border Gateway Protocol (BGP) and Domain Name System (DNS). To identify the roles of federal entities, we collected, reviewed, and analyzed relevant federal law, policy, regulation, and critical infrastructure protection-related strategies. Sources consulted include statutes such as the Communications Act of 1934, Homeland Security Act of 2002, and the Defense Production Act of 1950, as well as other public laws; the Code of Federal Regulations; National Communication System Directive 3-10; the National Infrastructure Protection Plan; the Communications Sector- Specific Plan; the Information Technology Sector-Specific Plan; the Communications Sector Risk Assessment; the Information Technology Sector Risk Assessment; Homeland Security Presidential Directives; selected executive orders; and related GAO products. Using these materials, we selected the Departments of Commerce, Defense, and Homeland Security, and FCC to review their respective roles and actions related to the security of the privately owned communications network because they were identified as having the most significant roles and organizations for addressing communications cybersecurity. To identify the actions taken by federal entities we collected, reviewed, and analyzed relevant policies, plans, reports, and related performance metrics and interviewed officials at each of the four agencies. For example, we reviewed and analyzed Department of Commerce agreements detailing the process for how changes are to be made to the authoritative root zone file and Internet Policy Task Force reports on cybersecurity innovation and the Internet. In addition, we analyzed and identified current and planned actions outlined in DOD’s National Security/Emergency Preparedness Executive Committee Charter. 
Also, we analyzed reports issued by the Communications Security, Reliability, and Interoperability Council on a variety of issues, including the security of the Domain Name System and the Border Gateway Protocol. Further, we reviewed and analyzed the risk assessments and sector-specific plans for both the communications and information technology critical infrastructure sectors, as well as DHS’s plans for realignment in response to Executive Order 13618. In addition, we interviewed agency officials regarding the authority, roles, policies, and actions created by their department or agency and the actions taken by their departments and agencies to encourage or enhance the protection of communications networks, BGP, and DNS and to fulfill related roles. For Commerce, we interviewed officials from the Bureau of Industry and Security, National Telecommunications and Information Administration, and the National Institute of Standards and Technology. For DOD, we interviewed officials from the Office of the Chief Information Officer, including those from the National Leadership Command Capability Management Office and the Trusted Mission Systems and Networks Office. We also interviewed officials from the Office of the Under Secretary of Defense for Policy. For DHS, we interviewed officials from the National Protection and Programs Directorate’s Office of Cybersecurity and Communications. For FCC, we interviewed officials from the International, Media, Public Safety and Homeland Security, Wireless Telecommunications, and Wireline Competition Bureaus. Based on our analysis and the information gathered through interviews, we created a list of actions taken by each agency. Additionally, we reviewed documents (including the communications sector risk assessment) from the Communications Information Sharing and Analysis Center and interviewed its officials to assess federal efforts to fulfill roles and responsibilities. To assess what is known about the extent to which cyber incidents affecting the communications networks have been reported to FCC and DHS, we analyzed FCC policy and guidance related to its Network Outage Reporting System. Additionally, we conducted an analysis of outage reports submitted from January 2010 to October 2012 to determine the extent to which they were related to cybersecurity threats, such as botnets, spyware, viruses, and worms affecting the core and access networks. To assess the reliability of FCC outage reports, we (1) discussed data quality control procedures with agency officials, (2) reviewed relevant documentation, (3) performed testing for obvious problems with completeness or accuracy, and (4) reviewed related internal controls. We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed officials from FCC’s Public Safety and Homeland Security Bureau to understand incident reporting practices of its regulated entities, and how reported incident data were used by FCC to encourage improvement or initiate enforcement actions. Further, we interviewed officials from DHS’s United States Computer Emergency Readiness Team regarding the extent to which incidents were reported to it that affected core and access communications networks. We also conducted an analysis of information security reports from nonfederal entities to determine if cyber incidents on the core and access communications networks had been reported to nonfederal entities.
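As a rough illustration of the outage-report screening described above (checking report narratives for indications of traditional cyber threats such as botnets, spyware, viruses, and worms), the following is a minimal sketch. It is not the analysis tool actually used; the file name, column name, and keyword list are hypothetical assumptions.

```python
import csv

# Keywords suggestive of the traditional cyber threats named in the report.
CYBER_TERMS = ("botnet", "spyware", "virus", "worm", "malware", "denial of service")

def looks_cyber_related(narrative: str) -> bool:
    """Flag a free-text outage narrative that mentions a cyber-threat term."""
    text = narrative.lower()
    return any(term in text for term in CYBER_TERMS)

def screen_outage_reports(path: str):
    """Return outage reports whose narrative mentions a cyber-threat term.

    Assumes a CSV export with a free-text 'narrative' column (hypothetical layout).
    """
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if looks_cyber_related(row.get("narrative", "")):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    hits = screen_outage_reports("outage_reports.csv")  # hypothetical export of outage filings
    print(f"{len(hits)} of the reviewed reports mention a cyber-threat term")
```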
Additionally, we interviewed Communications Information Sharing and Analysis Center officials to identify the mechanisms and processes used to report cyber-related incidents in the communications sector to the center and then to the federal government. To determine if DOD’s pilot can be used to inform the communications sector, we reviewed our August 2012 report on DOD efforts to enhance the cybersecurity of the defense industrial base critical infrastructure sector. We then identified and summarized attributes of the program that could be publicly reported and that were potentially applicable to the communications sector. The information used to compile the attributes from the August 2012 report was determined by DOD at that time not to be considered official use only. We also interviewed officials from DHS’s Office of Cybersecurity and Communications to ascertain the current status of the pilot programs and efforts to determine the applicability of the pilots to all critical infrastructures, including the communications sector. We conducted this performance audit from April 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO staff who made significant contributions to this report include Michael W. Gilmore, Assistant Director; Thomas E. Baril, Jr; Bradley W. Becker; Cortland Bradford; Penney Harwell Caramia; Kush K. Malhotra; Lee A. McCracken; David Plocher; and Adam Vodraska.
Ensuring the effectiveness and reliability of communications networks is essential to national security, the economy, and public health and safety. The communications networks (including core and access networks) can be threatened by both natural and human-caused events, including increasingly sophisticated and prevalent cyber-based threats. GAO has identified the protection of systems supporting the nation's critical infrastructure--which includes the communications sector--as a government-wide high-risk area. GAO was asked to (1) identify the roles of and actions taken by key federal entities to help protect communications networks from cyber-based threats, (2) assess what is known about the extent to which cyber incidents affecting the communications networks have been reported to the FCC and DHS, and (3) determine if Defense's pilot programs to promote cybersecurity in the defense industrial base can be used in the communications sector. To do this, GAO focused on core and access networks that support communication services, as well as critical components supporting the Internet. GAO analyzed federal agency policies, plans, and other documents; interviewed officials; and reviewed relevant reports. While the primary responsibility for protecting the nation's communications networks belongs to private-sector owners and operators, federal agencies also play a role in support of their security, as well as that of critical components supporting the Internet. Specifically, private-sector entities are responsible for the operational security of the networks they own, but the Federal Communications Commission (FCC) and the Departments of Homeland Security (DHS), Defense, and Commerce have regulatory and support roles, as established in federal law and policy, and have taken a variety of related actions. For example, FCC has developed and maintained a system for reporting network outage information; DHS has multiple components focused on assessing risk and sharing threat information; Defense and DHS serve as co-chairs for a committee on national security and emergency preparedness for telecommunications functions; and Commerce has studied cyber risks facing the communications infrastructure and participates in standards development. However, DHS and its partners have not yet initiated the process for developing outcome-based performance measures related to the cyber protection of key parts of the communications infrastructure. Outcome-based metrics related to communications networks and critical components supporting the Internet would provide federal decision makers with additional insight into the effectiveness of sector protection efforts. No cyber-related incidents affecting core and access networks have been recently reported to FCC and DHS through established mechanisms. Specifically, both FCC and DHS have established reporting mechanisms to share information on outages and incidents, but of the outages reported to FCC between January 2010 and October 2012, none were related to common cyber threats. Officials within FCC and the private sector stated that communication networks are less likely to be targeted themselves because they provide the access and the means by which attacks on consumer, business, and government systems can be facilitated. Attributes of two pilot programs established by Defense to enhance the cybersecurity of firms in the defense industrial base (the industry associated with the production of defense capabilities) could be applied to the communications sector. 
The department's pilot programs involve partnering with firms to share information about cyber threats and responding accordingly. Considering these attributes can inform DHS as it develops procedures for expanding these pilot programs to all critical infrastructure sectors, including the communications sector. GAO recommends that DHS collaborate with its partners to develop outcome-oriented measures for the communications sector. DHS concurred with GAO's recommendation.
You are an expert at summarizing long articles. Proceed to summarize the following text: Some context for my remarks is appropriate. The threat of terrorism was significant throughout the 1990s; a plot to destroy 12 U.S. airliners was discovered and thwarted in 1995, for instance. Yet the task of providing security to the nation’s aviation system is unquestionably daunting, and we must reluctantly acknowledge that any form of travel can never be made totally secure. The enormous size of U.S. airspace alone defies easy protection. Furthermore, given this country’s hundreds of airports, thousands of planes, tens of thousands of daily flights, and the seemingly limitless ways terrorists or criminals can devise to attack the system, aviation security must be enforced on several fronts. Safeguarding airplanes and passengers requires, at the least, ensuring that perpetrators are kept from breaching security checkpoints and gaining access to secure airport areas or to aircraft. Additionally, vigilance is required to prevent attacks against the extensive computer networks that FAA uses to guide thousands of flights safely through U.S. airspace. FAA has developed several mechanisms to prevent criminal acts against aircraft, such as adopting technology to detect explosives and establishing procedures to ensure that passengers are positively identified before boarding a flight. Still, in recent years, we and others have often demonstrated that significant weaknesses continue to plague the nation’s aviation security. Our work has identified numerous problems with aspects of aviation security in recent years. One such problem is FAA’s computer-based air traffic control (ATC) system. The ATC system is an enormous, complex collection of interrelated systems, including navigation, surveillance, weather, and automated information processing and display systems that link hundreds of ATC facilities and provide information to air traffic controllers and pilots. Failure to adequately protect these systems could increase the risk of regional or nationwide disruption of air traffic—or even collisions. In five reports issued from 1998 through 2000, we pointed out numerous weaknesses in FAA’s computer security. FAA had not (1) completed background checks on thousands of contractor employees, (2) assessed and accredited as secure many of its ATC facilities, (3) performed appropriate risk assessments to determine the vulnerability of the majority of its ATC systems, (4) established a comprehensive security program, (5) developed service continuity controls to ensure that critical operations continue without undue interruption when unexpected events occur, and (6) fully implemented an intrusion detection capability to detect and respond to malicious intrusions. Some of these weaknesses could have led to serious problems. For example, as part of its Year 2000 readiness efforts, FAA allowed 36 mainland Chinese nationals who had not undergone required background checks to review the computer source code for eight mission-critical systems. To date, we have made nearly 22 recommendations to improve FAA’s computer security. FAA has worked to address these recommendations, but most of them have yet to be completed. For example, it is making progress in obtaining background checks on contractors and accrediting facilities and systems as secure. However, it will take time to complete these efforts. Control of access to aircraft, airfields, and certain airport facilities is another component of aviation security.
Among the access controls in place are requirements intended to prevent unauthorized individuals from using forged, stolen, or outdated identification or their familiarity with airport procedures to gain access to secured areas. In May 2000, we reported that our special agents, in an undercover capacity, obtained access to secure areas of two airports by using counterfeit law enforcement credentials and badges. At these airports, our agents declared themselves as armed law enforcement officers, displayed simulated badges and credentials created from commercially available software packages or downloaded from the Internet, and were issued “law enforcement” boarding passes. They were then waved around the screening checkpoints without being screened. Our agents could thus have carried weapons, explosives, chemical/biological agents, or other dangerous objects onto aircraft. In response to our findings, FAA now requires that each airport’s law enforcement officers examine the badges and credentials of any individual seeking to bypass passenger screening. FAA is also working on a “smart card” computer system that would verify law enforcement officers’ identity and authorization for bypassing passenger screening. The Department of Transportation’s Inspector General has also uncovered problems with access controls at airports. The Inspector General’s staff conducted testing in 1998 and 1999 of the access controls at eight major airports and succeeded in gaining access to secure areas in 68 percent of the tests; they were able to board aircraft 117 times. After the release of its report describing its successes in breaching security, the Inspector General conducted additional testing between December 1999 and March 2000 and found that, although improvements had been made, access to secure areas was still gained more than 30 percent of the time. Screening checkpoints and the screeners who operate them are a key line of defense against the introduction of dangerous objects into the aviation system. Over 2 million passengers and their baggage must be checked each day for articles that could pose threats to the safety of an aircraft and those aboard it. The air carriers are responsible for screening passengers and their baggage before they are permitted into the secure areas of an airport or onto an aircraft. Air carriers can use their own employees to conduct screening activities, but mostly air carriers hire security companies to do the screening. Currently, multiple carriers and screening companies are responsible for screening at some of the nation’s larger airports. Concerns have long existed over screeners’ ability to detect and prevent dangerous objects from entering secure areas. Each year, weapons were discovered to have passed through one checkpoint and have later been found during screening for a subsequent flight. FAA monitors the performance of screeners by periodically testing their ability to detect potentially dangerous objects carried by FAA special agents posing as passengers. In 1978, screeners failed to detect 13 percent of the objects during FAA tests. In 1987, screeners missed 20 percent of the objects during the same type of test. Test data for the 1991 to 1999 period show that the declining trend in detection rates continues. Furthermore, the recent tests show that as tests become more realistic and more closely approximate how a terrorist might attempt to penetrate a checkpoint, screeners’ ability to detect dangerous objects declines even further. 
As we reported last year, there is no single reason why screeners fail to identify dangerous objects. Two conditions—rapid screener turnover and inadequate attention to human factors—are believed to be important causes. Rapid turnover among screeners has been a long-standing problem, having been identified as a concern by FAA and by us in reports dating back to at least 1979. We reported in 1987 that turnover among screeners was about 100 percent a year at some airports, and according to our more recent work, the turnover is considerably higher. From May 1998 through April 1999, screener turnover averaged 126 percent at the nation’s 19 largest airports; 5 of these airports reported turnover of 200 percent or more, and one reported turnover of 416 percent. At one airport we visited, of the 993 screeners trained at that airport over about a 1-year period, only 142, or 14 percent, were still employed at the end of that year. Such rapid turnover can seriously limit the level of experience among screeners operating a checkpoint. Both FAA and the aviation industry attribute the rapid turnover to the low wages and minimal benefits screeners receive, along with the daily stress of the job. Generally, screeners are paid at or near the minimum wage. We reported last year that some of the screening companies at 14 of the nation’s 19 largest airports paid screeners a starting salary of $6.00 an hour or less and, at 5 of these airports, the starting salary was the then- minimum wage—$5.15 an hour. It is common for the starting wages at airport fast-food restaurants to be higher than the wages screeners receive. For instance, at one airport we visited, screeners’ wages started as low as $6.25 an hour, whereas the starting wage at one of the airport’s fast- food restaurants was $7 an hour. The demands of the job also affect performance. Screening duties require repetitive tasks as well as intense monitoring for the very rare event when a dangerous object might be observed. Too little attention has been given to factors such as (1) improving individuals’ aptitudes for effectively performing screener duties, (2) the sufficiency of the training provided to screeners and how well they comprehend it, and (3) the monotony of the job and the distractions that reduce screeners’ vigilance. As a result, screeners are being placed on the job who do not have the necessary aptitudes, nor the adequate knowledge to effectively perform the work, and who then find the duties tedious and dull. We reported in June 2000 that FAA was implementing a number of actions to improve screeners’ performance. However, FAA did not have an integrated management plan for these efforts that would identify and prioritize checkpoint and human factors problems that needed to be resolved, and identify measures—and related milestone and funding information—for addressing the performance problems. Additionally, FAA did not have adequate goals by which to measure and report its progress in improving screeners’ performance. FAA is implementing our recommendations. However, two key actions to improving screeners’ performance are still not complete. These actions are the deployment of threat image projection systems—which place images of dangerous objects on the monitors of X-ray machines to keep screeners alert and monitor their performance—and a certification program to make screening companies accountable for the training and performance of the screeners they employ. 
Threat image projection systems are expected to keep screeners alert by periodically imposing the image of a dangerous object on the X-ray screen. They also are used to measure how well screeners perform in detecting these objects. Additionally, the systems serve as a device to train screeners to become more adept at identifying harder-to-spot objects. FAA is currently deploying the threat image projection systems and expects to have them deployed at all airports by 2003. The screening company certification program, required by the Federal Aviation Reauthorization Act of 1996, will establish performance, training, and equipment standards that screening companies will have to meet to earn and retain certification. However, FAA has still not issued its final regulation establishing the certification program. This regulation is particularly significant because it is to include requirements mandated by the Airport Security Improvement Act of 2000 to increase screener training—from 12 hours to 40 hours—as well as expand background check requirements. FAA had been expecting to issue the final regulation this month, 2 ½ years later than it originally planned. To identify screening practices that differ from those in the United States, we visited five countries—Belgium, Canada, France, the Netherlands, and the United Kingdom—that are viewed by FAA and the civil aviation industry as having effective screening operations. We found that some significant differences exist in four areas: screening operations, screener qualifications, screener pay and benefits, and institutional responsibility for screening. First, screening operations in some of the countries we visited are more stringent. For example, Belgium, the Netherlands, and the United Kingdom routinely touch or “pat down” passengers in response to metal detector alarms. Additionally, all five countries allow only ticketed passengers through the screening checkpoints, thereby allowing the screeners to more thoroughly check fewer people. Some countries also have a greater police or military presence near checkpoints. In the United Kingdom, for example, security forces—often armed with automatic weapons—patrol at or near checkpoints. At Belgium’s main airport in Brussels, a constant police presence is maintained at one of two glass-enclosed rooms directly behind the checkpoints. Second, screeners’ qualifications are usually more extensive. In contrast to the United States, Belgium requires screeners to be citizens; France requires screeners to be citizens of a European Union country. In the Netherlands, screeners do not have to be citizens, but they must have been residents of the country for 5 years. Training requirements for screeners were also greater in four of the countries we visited than in the United States. While FAA requires that screeners in this country have 12 hours of classroom training before they can begin work, Belgium, Canada, France, and the Netherlands require more. For example, France requires 60 hours of training and Belgium requires at least 40 hours of training with an additional 16 to 24 hours for each activity, such as X-ray machine operations, that the screener will conduct. Third, screeners receive relatively better pay and benefits in most of these countries. Whereas screeners in the United States receive wages that are at or slightly above minimum wage, screeners in some countries receive wages that are viewed as being at the “middle income” level in those countries.
In the Netherlands, for example, screeners received at least the equivalent of about $7.50 per hour. This wage was about 30 percent higher than the wages at fast-food restaurants in that country. In Belgium, screeners received the equivalent of about $14 per hour. Not only is pay higher, but the screeners in some countries receive benefits, such as health care or vacations—in large part because these benefits are required under the laws of these countries. These countries also have significantly lower screener turnover than the United States: turnover rates were about 50 percent or lower in these countries. Finally, the responsibility for screening in most of these countries is placed with the airport authority or with the government, not with the air carriers as it is in the United States. In Belgium, France, and the United Kingdom, the responsibility for screening has been placed with the airports, which either hire screening companies to conduct the screening operations or, as at some airports in the United Kingdom, hire screeners and manage the checkpoints themselves. In the Netherlands, the government is responsible for passenger screening and hires a screening company to conduct checkpoint operations, which are overseen by a Dutch police force. We note that, worldwide, of 102 other countries with international airports, 100 have placed screening responsibility with the airports or the government; only 2 other countries—Canada and Bermuda—place screening responsibility with air carriers. Because each country follows its own unique set of screening practices, and because data on screeners’ performance in each country were not available to us, it is difficult to measure the impact of these different practices on improving screeners’ performance. Nevertheless, there are indications that for at least one country, practices may help to improve screeners’ performance. This country conducted a screener testing program jointly with FAA that showed that its screeners detected over twice as many test objects as did screeners in the United States. Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions that you or Members of the Committee may have. For more information, please contact Gerald L. Dillingham at (202) 512-2834. Individuals making key contributions to this testimony included Bonnie Beckett, J. Michael Bollinger, Colin J. Fallon, John R. Schulze, and Daniel J. Semick. Responses of Federal Agencies and Airports We Surveyed About Access Security Improvements (GAO-01-1069R, Aug. 31, 2001). Aviation Security: Additional Controls Needed to Address Weaknesses in Carriage of Weapons Regulations (GAO/RCED-00-181, Sept. 29, 2000). FAA Computer Security: Actions Needed to Address Critical Weaknesses That Jeopardize Aviation Operations (GAO/T-AIMD-00-330, Sept. 27, 2000). FAA Computer Security: Concerns Remain Due to Personnel and Other Continuing Weaknesses (GAO/AIMD-00-252, Aug. 16, 2000). Aviation Security: Long-Standing Problems Impair Airport Screeners’ Performance (GAO/RCED-00-75, June 28, 2000). Computer Security: FAA Is Addressing Personnel Weaknesses, But Further Action Is Required (GAO/AIMD-00-169, May 31, 2000). Security: Breaches at Federal Agencies and Airports (GAO-OSI-00-10, May 25, 2000). Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000). Aviation Security: Vulnerabilities Still Exist in the Aviation Security System (GAO/T-RCED/AIMD-00-142, Apr. 6, 2000). 
Aviation Security: Slow Progress in Addressing Long-Standing Screener Performance Problems (GAO/T-RCED-00-125, Mar. 16, 2000). Computer Security: FAA Needs to Improve Controls Over Use of Foreign Nationals to Remediate and Review Software (GAO/AIMD-00-55, Dec. 23, 1999). FBI: Delivery of ATF Report on TWA Flight 800 Crash (GAO/OSI-99-18R, Aug. 13, 1999). Aviation Security: FAA’s Actions to Study Responsibilities and Funding for Airport Security and to Certify Screening Companies (GAO/RCED-99-53, Feb. 25, 1999). Air Traffic Control: Weak Computer Security Practices Jeopardize Flight Safety (GAO/AIMD-98-155, May 18, 1998). Aviation Security: Progress Being Made, but Long-Term Attention Is Needed (GAO/T-RCED-98-190, May 14, 1998). Aviation Security: Implementation of Recommendations Is Under Way, but Completion Will Take Several Years (GAO/RCED-98-102, Apr. 24, 1998). Combating Terrorism: Observations on Crosscutting Issues (GAO/T-NSIAD-98-164, Apr. 23, 1998). Aviation Safety: Weaknesses in Inspection and Enforcement Limit FAA in Identifying and Responding to Risks (GAO/RCED-98-6, Feb. 27, 1998). Aviation Security: FAA’s Procurement of Explosives Detection Devices (GAO/RCED-97-111R, May 1, 1997). Aviation Security: Commercially Available Advanced Explosives Detection Devices (GAO/RCED-97-119R, Apr. 24, 1997). Aviation Security: Posting Notices at Domestic Airports (GAO/RCED-97-88R, Mar. 25, 1997). Aviation Safety and Security: Challenges to Implementing the Recommendations of the White House Commission on Aviation Safety and Security (GAO/T-RCED-97-90, Mar. 5, 1997). Aviation Security: Technology’s Role in Addressing Vulnerabilities (GAO/T-RCED/NSIAD-96-262, Sept. 19, 1996). Aviation Security: Urgent Issues Need to Be Addressed (GAO/T-RCED/NSIAD-96-251, Sept. 11, 1996). Terrorism and Drug Trafficking: Technologies for Detecting Explosives and Narcotics (GAO/NSIAD/RCED-96-252, Sept. 4, 1996). Aviation Security: Immediate Action Needed to Improve Security (GAO/T-RCED/NSIAD-96-237, Aug. 1, 1996).
A safe and secure civil aviation system is a critical component of the nation's overall security, physical infrastructure, and economic foundation. Billions of dollars and myriad programs and policies have been devoted to achieving such a system. Although it is not fully known at this time what actually occurred or what all the weaknesses in the nation's aviation security apparatus are that contributed to the horrendous events on September 11, 2001, it is clear that serious weaknesses exist in our aviation security system and that their impact can be far more devastating than previously imagined. As reported last year, GAO's review of the Federal Aviation Administration's (FAA) oversight of air traffic control (ATC) computer systems showed that FAA had not followed some critical aspects of its own security requirements. Specifically, FAA had not ensured that ATC buildings and facilities were secure, that the systems themselves were protected, and that the contractors who access these systems had undergone background checks. Controls for limiting access to secure areas, including aircraft, have not always worked as intended. GAO's special agents used fictitious law enforcement badges and credentials to gain access to secure areas, bypass security checkpoints at two airports, and walk unescorted to aircraft departure gates. Tests of screeners revealed significant weaknesses as measured in their ability to detect threat objects located on passengers or contained in their carry-on luggage. Screening operations in Belgium, Canada, France, the Netherlands, and the United Kingdom--countries whose systems GAO has examined--differ from this country's in some significant ways. Their screening operations require more extensive qualifications and training for screeners, include higher pay and better benefits, and often include different screening techniques, such as "pat-downs" of some passengers.
You are an expert at summarizing long articles. Proceed to summarize the following text: HAZMAT is any substance or material that the Secretary of Transportation has determined is capable of posing an unreasonable risk to health, safety, or property when transported in commerce. The Secretary of Transportation designates HAZMAT under the Hazardous Materials Transportation Act and its implementing regulations. Within the federal government, DOT has the primary responsibility to issue regulations for the safe transport of HAZMAT in intrastate, interstate, and foreign commerce. To accomplish this mission, DOT issues HAZMAT regulations and provides other services to the transportation community and emergency responders—such as training, enforcement, technical support, information, and policy guidance—to protect the public against the safety risks inherent in transporting HAZMAT. According to DOT’s Office of Hazardous Materials Safety, an estimated 1.4 million HAZMAT shipments are transported in the United States each day on average. These shipments amount to more than 3 billion tons of HAZMAT transported every year. While only about 43 percent of all HAZMAT tonnage is transported by highway, that tonnage accounts for approximately 94 percent of the individual shipments. Air, water, rail, and pipeline constitute the remaining HAZMAT transportation modes, with air generally being the most restrictive (due to aircraft cargo limitations and load restrictions). DOT uses a United Nations classification system to categorize all HAZMAT into nine classes and ensure its safe storage, handling, transportation, use, and disposal. Each of the nine HAZMAT classes is defined by a specific set of parameters—usually characterized by chemical properties (e.g., an oxidizer material) or inherent physical properties (e.g., a corrosive material) or as possibly posing a health hazard (e.g., a poisonous substance). Figure 1 shows examples of DOT’s labels, warning labels, and hazard warnings for the nine classes of HAZMAT, which can be further subdivided into divisions (e.g., class 5 is divided into 5.1 and 5.2). For example, HAZMAT in these classes could include explosives, which are class 1; gasoline, class 3; and lithium batteries, class 9. TRANSCOM is DOD’s single manager for transportation, other than service-unique or theater-assigned assets. The command is composed of three military service component commands that manage the movement of DOD shipments, including HAZMAT—the Army’s Military Surface Deployment and Distribution Command, the Navy’s Military Sealift Command, and the Air Force’s Air Mobility Command. The Army’s Military Surface Deployment and Distribution Command provides worldwide common-use ocean terminal services and traffic-management services, conducts port operations at sealift terminals, and plans, transports, and tracks shipments transported by surface and rail. The Navy’s Military Sealift Command transports shipments over water using a mixture of government-owned and commercial ships. The Air Force’s Air Mobility Command transports shipments to destinations anywhere around the world by military or commercial carriers by air and serves as the single port manager for common-user aerial ports. DOD installations, bases, or sites appoint personnel to assist with traffic-management functions. These traffic-management functions include, but are not limited to, providing efficient, responsive, and quality transportation services. 
These duties also include assisting with the handling, labeling, and packaging of HAZMAT shipments prior to offering them to carriers for transport. Shipments may originate at a vendor (e.g., a manufacturer) or DOD location (e.g., a Defense Logistics Agency depot). Moreover, a manufacturer might use a different commercial carrier than the one used by DOD to deliver its parts to the DOD installation that ordered them. Shippers, carriers, and receivers carry out the three basic roles of the transportation system. A shipper may be a DOD entity (e.g., the Defense Logistics Agency), or a contracted commercial vendor (e.g., a manufacturer) where a shipment originates. A carrier may be a DOD entity or a private-sector individual or company that transports shipments from the shipper to the receiver. A receiver is the DOD entity that is the final destination point for the shipment. In cases in which shipments are destined for overseas areas, the shipments may be transported through central receiving points, such as one of DOD’s five aerial ports or a Defense Logistics Agency depot, before being sent to the final destination point. In figure 2, we list the specific functions that a shipper, carrier, and receiver might perform to transport HAZMAT. A shipper is responsible for performing several functions prior to the movement of HAZMAT. These pretransportation functions include, but are not limited to, the following: Determining the hazard class and mode of transportation: The shipper reviews the Hazardous Materials Regulations to properly identify the hazardous materials in the shipment. This information is needed to determine the packaging, marking, and labeling requirements for the specific type of HAZMAT being transported. Packaging and labeling: Once the mode of transportation has been determined, the shipper selects the appropriate packaging and labeling, fills the package, secures a closure on the package, and affixes the appropriate labels. Documentation: HAZMAT shipments that require documentation (as shown in fig. 3 and fig. 4) must describe the HAZMAT and the total quantity of the HAZMAT being shipped, provide emergency response information, and provide certification by the shipper that the HAZMAT is in proper condition for transportation. Loading, blocking, and bracing: The HAZMAT package is to be appropriately loaded, blocked, and braced in a freight container or transport vehicle. During this process, the HAZMAT may be segregated in a transport vehicle to ensure that it is not transported with an incompatible shipment. Selecting, providing, or affixing placards: The container and the carrier’s vehicle are to be properly marked to identify that the vehicle contains HAZMAT. Once a shipper has completed the required pretransportation functions, a carrier takes possession of the HAZMAT shipment and performs the transportation functions. Transportation functions include the movement, loading incidental to movement, unloading incidental to movement, and storage incidental to movement. Transport may involve a single mode of transport (e.g., by highway or aircraft) or it can be multimodal—for example, moving first by surface (by highway or rail) and then through a central receiving point, such as an aerial port (for aircraft) or a depot. At the end of the transportation function, a carrier delivers the HAZMAT shipment to its final destination point, where a receiver takes possession of the HAZMAT shipment for immediate use or storage for later use. 
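To illustrate how the shipper’s pretransportation functions described above fit together, the following is a minimal sketch. The hazard-class examples mirror those given earlier (class 1 explosives, class 3 flammable liquids such as gasoline, and class 9 miscellaneous items such as lithium batteries), the checklist items paraphrase the functions listed above, and the rest is hypothetical rather than an actual DOD or DOT tool.

```python
# Subset of the nine DOT hazard classes mentioned in the text (illustrative only).
HAZARD_CLASSES = {
    "1": "Explosives",
    "3": "Flammable liquids (e.g., gasoline)",
    "9": "Miscellaneous (e.g., lithium batteries)",
}

# Pretransportation functions a shipper completes before offering HAZMAT to a carrier.
PRETRANSPORT_STEPS = [
    "Determine the hazard class and mode of transportation",
    "Select packaging, fill and close the package, and affix labels",
    "Prepare shipping documentation and emergency response information",
    "Load, block, and brace; segregate incompatible materials",
    "Select, provide, or affix placards on the container and vehicle",
]

def describe_shipment(hazard_class: str, mode: str) -> str:
    """Summarize the checklist for one shipment (a sketch, not regulatory guidance)."""
    label = HAZARD_CLASSES.get(hazard_class, "see 49 CFR for the full class list")
    steps = "\n".join(f"  {i}. {step}" for i, step in enumerate(PRETRANSPORT_STEPS, 1))
    return f"Class {hazard_class} ({label}) shipped by {mode}:\n{steps}"

print(describe_shipment("3", "highway"))
```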
When transporting HAZMAT, there is a complex framework of statutes and regulations prescribed by multiple civilian and military entities that must be considered and evaluated to ensure safe, secure, and efficient transport. The Hazardous Materials Transportation Act, enacted in 1975 and since amended, is the primary statutory regime governing the transport of HAZMAT in the United States. The purpose of the act is to protect against the risks to life, property, and the environment that are inherent in the transport of HAZMAT in intrastate, interstate, and foreign commerce. To implement the act, DOT issued the Hazardous Materials Regulations, located in Title 49 of the Code of Federal Regulations, which generally govern the handling, labeling, packaging, and transportation of HAZMAT shipments in commerce, among other activities. The regulations include specific guidance pertaining to each of the nine classes of HAZMAT on the basis of their composition, level of danger, and mode of transport. With regard to DOD, the Defense Transportation Regulation prescribes how DOD is to transport HAZMAT. Specifically, the Defense Transportation Regulation incorporates or references requirements from DOT’s Hazardous Materials Regulations, as well as various international- and country-specific regulations or standards for transporting HAZMAT shipments by air and water. In figure 5, we illustrate the statutory and regulatory elements that govern DOD’s handling, labeling, and packaging of HAZMAT shipments. Applying the different regulations governing the transportation of a specific HAZMAT class to the mode of transportation can be complex for service transportation officials and requires careful reading of all applicable regulations. As an example, when shipping acetyl chloride (flammable liquid, class 3 HAZMAT), the shipper must review several sources to determine how to properly ship the item. If the shipper intends to ship the HAZMAT on a commercial aircraft carrying passengers, the Hazardous Materials Regulations indicate that no more than 1 liter of this liquid per package can be shipped because of the risk posed by transporting HAZMAT of that classification. If the HAZMAT is being shipped by military aircraft or on an international flight, additional requirements and limitations may need to be applied. On the other hand, if carriers transport acetyl chloride by highway, they may be able to transport greater quantities. DOD may also ship DOD-unique HAZMAT items that are not addressed in DOT’s Hazardous Materials Regulations or for which DOD needs to seek a waiver or approval. In these cases, the Hazardous Materials Regulations and DOT guidance specify a process that any commercial or government entity can use to apply for a DOT waiver in the form of a special permit or approval providing relief from requirements in the Hazardous Materials Regulations. DOD shipments may also use certificates of equivalence, which are approvals issued by DOD itself in instances where a packaging design differs from the requirements of the Hazardous Materials Regulations. Certificates of equivalence certify that the packaging equals or exceeds the comparable requirements of the Hazardous Materials Regulations. DOD shippers can use special permits or approvals from DOT or certificates of equivalence from DOD to handle department-unique HAZMAT shipments in a way commensurate with the DOT requirements. 
For example, according to officials, missiles transported in certain configurations are not standard items covered by the Hazardous Materials Regulations. Therefore, DOD obtained an approval from DOT that indicates the type of packaging and container to be used to transport this item, which, according to officials, is equal to or exceeds the requirements of the Hazardous Materials Regulations. Similarly, for new items that are not mentioned in the Hazardous Materials Regulations, such as certain DOD-specific or emerging HAZMAT (e.g., lithium batteries with a metal casing), DOD might obtain a waiver that identifies how these items will be packaged and transported. In addition to the statutory and regulatory elements discussed above, DOD has developed additional guidance to address specific circumstances that are either not covered in the existing framework of statutes and regulations or are areas where the department believes additional or different requirements are needed. For example: The Defense Transportation Regulation addresses the policies, procedures, and responsibilities for transporting HAZMAT by military aircraft. Additionally, Air Force Manual 24-204 (Interservice) identifies procedural exceptions in the context of tactical, contingency, or emergency airlift. Because of the increased risk to the aircraft, aircrew, and participants, these procedural exceptions must only be used when there is a validated operational requirement. For example, in certain circumstances DOD may be able to use a single Shipper’s Declaration for Dangerous Goods to identify and certify more than one type of hazardous material when shipped under a single tracking number. For arms, ammunition, and explosives and certain classified shipments, the Defense Transportation Regulation contains requirements, procedures, and responsibilities related to the Transportation Protective Services program. Transportation Protective Services include a series of safeguards in addition to those found in the Hazardous Materials Regulations, such as, depending on the shipment, satellite tracking of the carrier vehicle using the Defense Transportation Tracking System and the use of two drivers with security clearances to provide constant surveillance. TRANSCOM’s Surface Deployment and Distribution Command provides approval for certain commercial carriers to offer these Transportation Protective Services after determining that they meet safety performance thresholds, among other requirements. According to Army health officials, the Defense Transportation Regulation also contains more-stringent training requirements for personnel shipping medical HAZMAT. As described by officials, the Defense Transportation Regulation and other guidance require that medical personnel who package and ship hazardous materials be trained and certified to do so at one of the DOD-approved hazardous-materials transportation courses. Specifically, the Defense Transportation Regulation indicates that anyone involved with the transportation of pathogens or etiologic agents who manages, packages, certifies, or prepares laboratory samples and specimens or regulated medical waste for transport by any mode may satisfy the training requirement through specific courses offered by the Army Public Health Command. According to officials, these training requirements extend to those personnel who package and ship biological select agents and toxins as well. 
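The acetyl chloride example above illustrates how the permissible quantity of a given HAZMAT depends on the mode of transportation, which is why shippers must consult several sources before offering a shipment. The following is a minimal sketch of that kind of lookup in Python; apart from the 1-liter-per-package limit for acetyl chloride on passenger aircraft cited above, every value and name is an illustrative placeholder rather than a figure from the Hazardous Materials Regulations or the Defense Transportation Regulation.

# Minimal sketch of the mode-dependent quantity check described above.
# Only the 1-liter-per-package limit for acetyl chloride on passenger aircraft
# comes from the text; every other entry and all names are illustrative
# placeholders, not values from the Hazardous Materials Regulations.
ILLUSTRATIVE_LIMITS_LITERS = {
    # (proper shipping name, mode): maximum quantity per package, in liters
    ("acetyl chloride", "passenger aircraft"): 1.0,
    ("acetyl chloride", "cargo aircraft"): None,   # placeholder: consult 49 CFR
    ("acetyl chloride", "highway"): None,          # placeholder: larger quantities may be allowed
}

def package_quantity_allowed(name: str, mode: str, liters: float) -> bool:
    """Return True if the per-package quantity appears allowable for the mode.

    This only illustrates the lookup a shipper performs; it does not replace a
    reading of the applicable regulations (49 CFR, the Defense Transportation
    Regulation, and any international standards).
    """
    limit = ILLUSTRATIVE_LIMITS_LITERS.get((name.lower(), mode.lower()))
    if limit is None:
        raise LookupError("limit not captured in this sketch; check the regulations")
    return liters <= limit

# Example: a half-liter package of acetyl chloride fits under the 1-liter
# passenger-aircraft limit cited above, but a 2-liter package would not.
assert package_quantity_allowed("Acetyl chloride", "Passenger aircraft", 0.5)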
According to TRANSCOM officials, in general DOD HAZMAT shipments arrive at their final destination without incident or delay; however, the department faces some challenges ensuring the safe, timely, and cost- effective transportation of some HAZMAT shipments. According to DOD data, for HAZMAT transported by surface and air, improper documentation and packaging have led to transportation delays. In a limited number of instances with a potential for public safety and national security consequences, DOD installations did not provide carriers transporting sensitive arms, ammunition and explosives HAZMAT with access to secure hold areas or assist them in locating the nearest alternate means to secure those shipments. In addition, the reliability of safety performance data calls into question DOD’s process for selecting eligible, and evaluating current, HAZMAT carriers to transport arms, ammunition, explosives, and sensitive and classified shipments. According to DOD information that we analyzed for air and surface HAZMAT shipments and according to officials, a substantial number of HAZMAT shipments were not documented and packaged in accordance with the Defense Transportation Regulation, which resulted in delays. To ensure the safe and secure transport of HAZMAT, DOD delayed the transport of these shipments until the documentation or packaging issues were resolved. According to agency officials we interviewed, these shipment delays can be as short as a few hours or last several weeks, depending on the nature of the issue. For example, regarding HAZMAT shipments to be transported by air, Global Air Transportation Execution System data show that, of the 246,747 total shipments of HAZMAT received at all five major domestic military aerial ports for fiscal years 2009 through 2013, 67,149 shipments (or 27 percent) were delayed— primarily because they were not in compliance with the Defense Transportation Regulation requirements for documentation and packaging. While the Surface Deployment and Distribution Command data are not aggregated in the same format as the air shipment data, according to agency officials, improper documentation and packaging cause some delays in transportation of surface HAZMAT shipments. According to DOD officials, improper documentation and packaging also result in delays for the sea transportation mode. Agency officials stated that the specific documentation and packaging issues that resulted in delays varied. Following are examples that illustrate the types of instances that we identified during the course of our review in which, according to officials, the documentation and packaging of HAZMAT shipments were not in compliance with the Defense Transportation Regulation requirements and resulted in delays: Improper documentation (air): At a major aerial port we visited, we identified examples of HAZMAT shipments containing acetaldehyde (flammable liquid, class 3 HAZMAT) that were delayed because, according to officials, they were missing the Shipper’s Declaration for Dangerous Goods. The Defense Transportation Regulation requires that when transporting HAZMAT by air, the shipper must complete a Shipper’s Declaration for Dangerous Goods for the shipment. In another example, at a depot we visited, we identified various HAZMAT shipments of lithium batteries that were also delayed because, according to officials, they had improper HAZMAT shipping names on the bill of lading. 
Improper documentation (surface): We found instances of similar documentation problems for surface transportation that led to HAZMAT transportation delays. Reviewing examples of Transportation Discrepancy Reports—which are used to document shipper-related discrepancies, among other things—we found that improper documentation is a major cause of delays. For example, in the reports, we found instances where the required Dangerous Goods Declaration forms were missing and where the proper HAZMAT material name was not included with the shipment as required by the Defense Transportation Regulation. Improper packaging (air): At a major aerial port we visited, we identified a shipment of rusted acetylene gas cylinders (flammable gas, class 2.1 HAZMAT) that was delayed in transport because, according to officials, the cylinders had arrived at the aerial port improperly packaged—stacked on top of one another in lidless wooden boxes (see fig. 6). According to military aerial port officials, to comply with the requirements of the Defense Transportation Regulation, the rusted cylinders need to be recertified to ensure the integrity of the packaging before they are safe for air transport and may not be stacked on top of one another. Officials told us that, as a result of improper packaging, the potentially dangerous acetylene cylinders were delayed at the aerial port until the packaging discrepancies could be resolved by the shipper. Improper packaging (air): We observed a dented drum of flammable, toxic liquid (class 3 HAZMAT) that aerial port officials identified as improperly packaged under the Defense Transportation Regulation because of its potential to leak (see fig. 7). The military aerial port officials pointed out that, when transported by air, leaking HAZMAT drums can cause serious harm to DOD personnel and damage to the aircraft transporting them. Improper packaging (surface): Similarly, reviewing samples of Transportation Discrepancy Reports, we found instances of improper packaging. For example, we found a report of a shipment of helium gas (nonflammable gas, class 2.2 HAZMAT) in a container that, according to the report, was not properly secured (blocked and braced) for shipment. Another report identified an instance where a grenade (explosive, class 1.1 HAZMAT) arrived inside an armored vehicle rather than in proper packaging for an explosive. Additionally, we found a report of a bottled oxygen (nonflammable gas, class 2.2 HAZMAT) shipment that arrived via highway improperly packaged for air shipment. According to the report, the material must be packed in a flame penetration and thermal-resistant package aboard an aircraft to prevent a potentially serious accident. During the course of our review, DOD officials pointed out several potential causes for these types of noncompliance. While the DOD officials we interviewed provided information about specific HAZMAT shipment delays, none expressed a holistic understanding of the root causes of HAZMAT transportation delays. Specifically, we found the following: DOD officials from a surface port we interviewed told us that some DOD personnel and commercial shippers lack experience and training on HAZMAT documentation and packaging. Additionally, at an aerial port, officials believed that some commercial shippers also lacked familiarity with DOD HAZMAT documentation and packaging requirements and that some shippers did not have HAZMAT-certified personnel to ensure proper HAZMAT documentation and packaging. 
Aerial port officials told us that normal personnel rotations exacerbated training problems because experienced personnel regularly moved to other positions, leaving less-experienced personnel behind to document and package HAZMAT shipments. Aerial port officials also told us that shippers have limited incentives to comply with DOD regulations because the aerial port itself has no way to discipline shippers that regularly transported improperly documented or packaged HAZMAT shipments knowing that in some cases the aerial port would fix the shipment. At another aerial port and DOD installation we visited, officials reported that miscommunication among the shipper, carrier, or receiver (incorrect shipping address or modes of transportation needed to transport the shipment) causes delays for HAZMAT shipments. Additionally, DOD officials at the surface and aerial ports we visited noted that they lacked the ability or authority to correct the root causes they suspected to be the source of delays. As a result, when a HAZMAT shipment arrives with improper packaging, DOD officials at these ports can either correct the problem themselves by repackaging the shipment or attempt to contact the shipper to correct the issue. In 2008, the Office of the Deputy Assistant Secretary of Defense for Transportation Policy commissioned a study to address DOD’s unacceptable level of delayed HAZMAT shipments. According to the Frustrated Cargo Analysis Final Project Report, the study’s scope was to reduce the effect of these delays by first defining the scope of the problem, developing a consensus on root causes, and providing a data- centric solution to the problems. While the 2008 report had several findings that identified causes for delays, including improper documentation and packaging of HAZMAT at an aerial port and a distribution depot, DOD has not resolved the problems addressed in that report or followed up on those findings with a more-recent analysis of the root causes for the delays that the department continues to experience. Moreover, we visited one of the sites identified in the report and observed that the same causes for delays with regard to documentation and packaging of HAZMAT persisted at that location. Additionally, we visited a different distribution depot and noted the same causes for delays. According to the Office of Management and Budget’s Management’s Responsibility for Internal Control, federal agencies should establish or maintain internal control to achieve effective and efficient operations and compliance with applicable regulations—in this case, the Defense Transportation Regulation and related guidance. Moreover, federal internal control standards call for agency management officials to promptly evaluate findings from audits and other reviews showing deficiencies, determine proper actions in response to those findings and recommendations, and complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. However, DOD has not conducted a recent, holistic analysis to determine the root causes for the significant number of delays in HAZMAT shipments resulting from improper documentation and packaging. DOD officials acknowledged that they do not have this current information and agreed that it would be helpful in identifying and addressing issues causing delayed HAZMAT shipments. Without such information, DOD cannot adequately ensure compliance with the Defense Transportation Regulation. 
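The aggregate cited earlier—67,149 of 246,747 air shipments delayed, or about 27 percent—is the kind of figure a root-cause analysis would start from and then break down by reported cause. The following is a minimal sketch of that arithmetic in Python; the record fields are hypothetical and do not reflect the actual Global Air Transportation Execution System schema.

# Minimal sketch of the delay-rate arithmetic cited above and of how delays
# might be tallied by reported cause as a starting point for root-cause
# analysis. The record fields ('delayed', 'cause') are hypothetical and do not
# reflect the actual Global Air Transportation Execution System data schema.
from collections import Counter

def delay_summary(records):
    """records: iterable of dicts with hypothetical keys 'delayed' and 'cause'."""
    total = delayed = 0
    causes = Counter()
    for rec in records:
        total += 1
        if rec.get("delayed"):
            delayed += 1
            causes[rec.get("cause", "unspecified")] += 1
    rate = delayed / total if total else 0.0
    return {"total": total, "delayed": delayed, "delay_rate": rate, "by_cause": causes}

# The reported aggregate for fiscal years 2009 through 2013: 67,149 delayed
# of 246,747 shipments received at the five major domestic aerial ports.
print(f"{67_149 / 246_747:.0%}")  # -> 27%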
These delays have resulted in DOD’s committing resources—personnel, storage space, and packing materials—to ensure that the delayed cargo can ultimately be transported safely. According to reports from the Surface Deployment and Distribution Command Operations Center’s Defense Transportation Tracking System, DOD installations did not provide commercial carriers access to a secure hold area for at least 44 out of 70,891 sensitive arms, ammunition, and explosives shipments or did not assist carriers in finding alternative means to secure those shipments in fiscal years 2012 and 2013. Although these instances represent a relatively small percentage of the overall number of sensitive arms, ammunition, and explosives shipments, not providing secure hold for even a small percentage of these sensitive shipments poses a risk to public safety and to national security. While we found no evidence of severe incidents resulting from these instances where commercial carriers transporting sensitive HAZMAT were not provided access to secure hold areas, the potential high-risk consequences to public safety and national security of this type of failure are significant. A secure hold area is a location designated for the temporary parking of carrier vehicles transporting DOD-owned arms, ammunition, and explosives and other sensitive material (see Department of Defense, Physical Security of Sensitive Conventional Arms, Ammunition, and Explosives (AA&E), Manual 5100.76, encl. 10, para. 7.a (Apr. 17, 2012)). Carriers transporting these sensitive shipments are to park at a DOD location (or other approved location, such as a carrier-owned facility) that can provide secure hold. To determine which DOD installations have the ability to provide secure hold, shippers and carriers use the Transportation Facilities Guide. The Transportation Facilities Guide lists DOD installation information, including which installations offer secure hold areas and their hours of operation, among other things. DOD installations are required to update their Transportation Facilities Guide records immediately whenever critical operational changes are made, such as changes in operating hours or installation closures. Otherwise, installations update the guide on a semiannual basis if the installation is participating in the secure hold area program and annually if it is not. However, even with the tools DOD makes available to DOD personnel, shippers, and carriers, some carriers of arms, ammunition, and explosives shipments are not able to gain access to secure hold areas. To find examples of instances where carriers were not provided secure hold or assisted in finding alternate locations, we examined descriptions reported in DOD’s Defense Transportation Tracking System Emergency Response Reports. Following are examples of instances in which DOD installations did not provide access to a secure hold area to carriers transporting sensitive and classified HAZMAT: A commercial carrier (i.e., a truck) transporting a shipment of ammunition, explosives, and fireworks arrived at its final destination point after normal working hours and was denied access to the installation. DOD personnel at the final destination point directed the carrier to another DOD installation for secure hold, but no one at that installation answered the carrier’s calls requesting secure hold for the night. 
Despite the carrier’s efforts to inform DOD personnel of the security requirements for arms, ammunition, and explosives shipments, the carrier was denied secure hold for the shipment and spent the night in his truck with the shipment in an empty parking lot near the highway. According to the Defense Transportation Regulation, temporary parking for certain arms, ammunition, and explosive shipments should be conducted at DOD-approved or commercially owned secure holding facilities. A DOD installation denied a commercial carrier access to a secure hold area for a shipment of thousands of pounds of aircraft flares and directed the carrier to park at a nearby major retail store parking lot. DOD personnel told the Surface Deployment and Distribution Command Operations Center that they regularly send carriers arriving after hours to this retail store or a nearby rest area. Parking arms, ammunition, and explosives at either location may be inconsistent with DOD guidance. DOD personnel at a secure hold installation instructed a commercial carrier of a small-arms and parts shipment to arrive before the installation closed because they would not accept after-hours shipments. Furthermore, DOD personnel explained that the installation was located in an unsafe neighborhood and advised the carrier to park in a safe public parking lot away from the installation if arriving after hours. Observing that the shipment would arrive after hours, the carrier decided to spend the night in his truck with the shipment parked at a public truck stop near the interstate highway. The Defense Transportation Regulation indicates that personnel at the DOD installation should assist the commercial carrier in identifying another location that could provide secure hold. According to the Defense Transportation Regulation, when commercial carriers experience challenges gaining access to a secure hold area, the carrier or DOD installation personnel can contact the Surface Deployment and Distribution Command for assistance. According to officials, Operations Center personnel are to attempt to resolve the issue by referring to relevant regulations such as the Defense Transportation Regulation, DOD Manual 5100.76-M (Physical Security of Sensitive Conventional Arms, Ammunition, and Explosives), or rerouting the shipment to another DOD location that can provide secure hold for the shipment. However, if the installation and Operations Center personnel cannot resolve the issue and grant carriers access to the secure hold area, Operations Center personnel are to generate an Emergency Response Report to document the actions taken by both parties and store those reports in the Defense Transportation Tracking System. According to the Surface Deployment and Distribution Command officials, Operations Center personnel forward the Emergency Response Reports to military-service representatives through an ad hoc process. However, according to the officials, no further corrective action is required by those representatives. As an example, during our visit the Operations Center personnel demonstrated several examples of recent incidents where they generated an Emergency Response Report as a result of DOD installation personnel denying a carrier access to a secure hold area or not assisting them in locating the nearest alternate means to secure those shipments. 
While the Surface Deployment and Distribution Command’s Emergency Response Reports document issues that carriers have had gaining access to secure hold areas at some DOD installations, according to Surface Deployment and Distribution Command officials, the command lacks the authority to change installation access policies or to require corrective actions. According to these officials, such actions would be taken by the military services or individual installation commanders. However, officials told us that there is no process within the installations’ commands—which set installation access policies—that requires them to follow up on Emergency Response Reports to identify needed improvements or recommend any corresponding corrective action at installations identified in the reports. However, the Office of Management and Budget’s Management’s Responsibility for Internal Control provides that management is responsible for establishing and maintaining internal control to achieve objectives such as compliance with applicable laws and regulations. It further notes that agencies and individual federal managers must take systematic and proactive measures to, among other things, assess the adequacy of internal control in federal programs and operations, identify needed improvements, and take corresponding corrective action. Without such a process, DOD may not be able to minimize the time that sensitive arms, ammunition, and explosives shipments spend in public areas. To ensure the safety and security of DOD’s sensitive arms, ammunition, and explosives and classified shipments, DOD established a Transportation Protective Services program as part of the Surface Deployment and Distribution Command. Among other requirements to participate in the program, commercial carriers providing certain services must meet safety performance thresholds defined in DOD guidance using scores from DOT’s Compliance, Safety, Accountability program, which is managed by DOT’s Federal Motor Carrier Safety Administration. Specifically, DOD uses the Federal Motor Carrier Safety Administration’s Safety Measurement System scores—one component of the Compliance, Safety, Accountability program—to determine which carriers are eligible to participate in the department’s Transportation Protective Services program. The Safety Measurement System within the Compliance, Safety, Accountability program is a data-driven approach for identifying carriers at risk of presenting a safety hazard or causing a crash. Safety Measurement System data comprise information collected during roadside inspections and from reported crashes to calculate scores across seven categories that quantify a carrier’s safety performance relative to that of other carriers. Specifically, the categories are: unsafe driving, crash indicator, hours of service compliance, driver fitness, controlled substances/alcohol, vehicle maintenance, and HAZMAT compliance. The Federal Motor Carrier Safety Administration calculates violation rates in each of these categories for each commercial carrier and then compares these rates to other carriers. DOT uses the Safety Measurement System scores to, among other things, establish safety performance thresholds for carriers. For example, for HAZMAT commercial carriers, DOT establishes a threshold score of 60 or lower (lower scores are better) in each of three categories: unsafe driving, crash indicator, and hours-of-service compliance. 
DOT’s Federal Motor Carrier Safety Administration identifies carriers that score above the threshold as those posing the greatest safety risk. The Federal Motor Carrier Safety Administration can then intervene to focus on specific safety behaviors. The Federal Motor Carrier Safety Administration’s intervention actions include sending warning letters, conducting on- and off-site investigations, imposing fines, or placing the carrier out of service. DOD uses DOT’s Safety Measurement System scores to determine whether commercial carriers are eligible for transporting HAZMAT under the department’s Transportation Protective Services program. For all but one of the seven Safety Measurement System categories, DOD requires that commercial carriers seeking to provide certain Transportation Protective Services meet DOT’s established HAZMAT carrier thresholds. For compliance with the Safety Measurement System HAZMAT regulations category, however, DOT establishes a threshold score no higher than 80 for commercial carriers. In contrast, DOD requires a more-stringent score of 75. At the time of our review, only 53 of the over 400,000 commercial carriers in the United States had been selected to participate in the Transportation Protective Services program. In February 2014, we found that the Federal Motor Carrier Safety Administration faces challenges in reliably assessing safety risk for the majority of carriers. Among other things, most carriers lack sufficient safety performance data to ensure that the Federal Motor Carrier Safety Administration can reliably compare them with other carriers using Safety Measurement System scores. Basing an assessment of a carrier’s safety performance on limited data may misrepresent the safety status of carriers, particularly those without sufficient data from which to reliably draw such a conclusion. In addition, previous evaluations of the Safety Measurement System have focused on estimating the correlations between crash risk and regulatory violation rates and Safety Measurement System scores. These evaluations have found mixed evidence that Safety Measurement System scores predict crash risk with a high degree of precision for specific carriers or groups of carriers. As we found, according to the Federal Motor Carrier Safety Administration’s own methodology, the Safety Measurement System is intended to prioritize intervention resources, identify and monitor carrier safety problems, and support the safety fitness determination process. The Federal Motor Carrier Safety Administration also includes a disclaimer with the publicly released Safety Measurement System scores stating that the data are intended for agency and law-enforcement purposes and that readers should not draw conclusions about a carrier’s safety condition based on the Safety Measurement System score, but rather on the carrier’s official safety rating. Due to ongoing litigation related to the Compliance, Safety, Accountability program and the publication of Safety Measurement System scores, we did not assess the potential effects or tradeoffs resulting from the display or any public use of these scores. 
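The eligibility rules described above reduce to comparing a carrier's Safety Measurement System scores against category thresholds. The following is a minimal sketch of that comparison in Python; the 60-or-lower thresholds for unsafe driving, crash indicator, and hours-of-service compliance and DOD's more-stringent 75 for HAZMAT compliance come from the text, while the at-or-below comparison rule, the omission of the remaining categories (whose threshold values are not given here), and all names are assumptions rather than DOD's actual evaluation procedure.

# Minimal sketch of the threshold comparison described above. The 60 thresholds
# for unsafe driving, crash indicator, and hours-of-service compliance and
# DOD's more-stringent 75 for HAZMAT compliance come from the text; the other
# three Safety Measurement System categories are omitted because their values
# are not given. The at-or-below rule and all names are illustrative
# assumptions, not DOD's actual evaluation procedure.
DOD_TPS_THRESHOLDS = {
    "unsafe_driving": 60,
    "crash_indicator": 60,
    "hours_of_service_compliance": 60,
    "hazmat_compliance": 75,   # DOT's own threshold for this category is 80
}

def meets_tps_thresholds(sms_scores: dict) -> bool:
    """Return True if every applicable category score is at or below its threshold.

    Lower Safety Measurement System percentile scores indicate better relative
    safety performance.
    """
    return all(
        sms_scores.get(category, 0) <= limit
        for category, limit in DOD_TPS_THRESHOLDS.items()
    )

# Illustrative carrier: acceptable in three categories but scoring 78 in HAZMAT
# compliance—below DOT's 80 threshold yet above DOD's 75—so it would not qualify.
example_scores = {
    "unsafe_driving": 40,
    "crash_indicator": 55,
    "hours_of_service_compliance": 30,
    "hazmat_compliance": 78,
}
print(meets_tps_thresholds(example_scores))  # False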
We recommended that DOT improve the Compliance, Safety, Accountability program by revising the Safety Measurement System methodology to better account for limitations in drawing comparisons of safety performance information across carriers; and in doing so, conduct a formal analysis that specifically identifies, among other things, the limitations in the data used to calculate Safety Measurement System scores including variability in the carrier population and the quality and quantity of data available for carrier safety performance assessments. According to federal internal control standards, for an entity to run and control its operations, it must have relevant, reliable, and timely communications, including operational data. Program managers need operational data to determine whether they are meeting their agencies’ goals for accountability, and effective and efficient use of resources. DOT agreed to consider our February 2014 recommendations, but expressed what it described as significant and substantive disagreements with some aspects of our analysis and conclusions. To the extent that DOT makes changes to the Safety Measurement System methodology, this could also affect how DOD uses the information to evaluate Transportation Protective Services carriers’ safety performance because of the underlying data reliability concerns with the Compliance, Safety, Accountability program’s Safety Measurement System data. The complex framework governing the handling, labeling, and packaging of DOD’s HAZMAT shipments exists to ensure the safe, timely, and cost- effective handling of these materials, but its complexity creates challenges as shippers, carriers, and receivers implement the various statutes and regulations of that framework. The distributed nature of this framework—policies and procedures drawn from various organizations, varying by class of HAZMAT and mode of transportation, and executed by multiple players—creates conditions under which mistakes can be made, particularly regarding pre-transportation functions (e.g., packaging and labeling). While DOD has issued regulations and other guidance regarding the handling of HAZMAT, it lacks adequate internal controls to help ensure that its actions are consistent with those regulations and guidance. Furthermore, the department lacks a clear understanding of why these pre-transportation mistakes keep occurring. Absent a better understanding of the root causes for these mistakes, the department will be unable to identify corrective actions to better ensure the safe, timely, and cost-effective transportation of its HAZMAT shipments. Moreover, absent a department-wide process to identify necessary corrective action to ensure that DOD installations provide access to secure hold, there is no assurance that the installations will not repeatedly deny access to the secure hold area for HAZMAT shipments or fail to assist carriers in finding alternative means to secure those shipments. While the relative scale of these incidents is fairly small, the nature of HAZMAT shipments is such that the risk to public safety as well as the potential impact of cost and timeliness should be minimized to the greatest extent practicable. DOD uses Safety Measurement System scores to determine safety performance of its Transportation Protective Services carriers. However, both our February 2014 report and the Federal Motor Carrier Safety Administration state that these scores should not be used to draw safety conclusions about a carrier’s safety condition. 
As a result, DOD may be determining which carriers should be eligible for the Transportation Protective Services program using the Compliance, Safety, Accountability’s Safety Measurement System that, for many carriers, lacks sufficient information to reliably assess carriers’ safety performance. To improve DOD’s compliance with HAZMAT regulations and other guidance and potentially reduce shipment delays, we recommend that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in collaboration with the military departments and TRANSCOM, to identify the root causes of improper documentation and packaging of HAZMAT throughout the DOD transportation system, identify any needed corrective actions, and develop an action plan with associated milestones to implement those corrective actions. To minimize the time sensitive arms, ammunition, and explosives shipments spend in public areas, we recommend that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct the Secretaries of the military departments, in collaboration with TRANSCOM, to establish a process to identify and implement the necessary corrective actions to ensure that DOD installations identified by Surface Deployment and Distribution Command’s Emergency Response Reports provide secure hold for sensitive shipments or assist them in locating the nearest alternate means to secure those shipments. To better ensure the safety and security of DOD’s shipments of sensitive arms, ammunition, and explosives, we recommend that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct TRANSCOM to examine the data limitations of the DOT Federal Motor Carrier Safety Administration’s Safety Measurement System raised in our February 2014 report on modifying DOT’s Compliance, Safety, and Accountability program and determine what changes, if any, should be made to the process used by DOD to decide HAZMAT carrier eligibility and evaluate performance for the Transportation Protective Services program. We provided a draft of our report to DOD and DOT for review. In written comments, DOD partially concurred with our first recommendation and fully concurred with our second and third recommendations. DOD’s written comments are reprinted in their entirety in appendix II. DOD and DOT provided technical comments, which we have incorporated throughout our report as appropriate. DOD partially concurred with our first recommendation that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct TRANSCOM, in collaboration with the Secretaries of the military departments, to identify the root causes of improper documentation and packaging of HAZMAT throughout the DOD transportation system, identify any needed corrective actions, and develop an action plan with associated milestones to implement those corrective actions. DOD agreed to conduct the analysis we recommended, but stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics rather than TRANSCOM would lead this analysis. DOD explained that, because of the myriad of issues to be addressed, including training and Transportation and Supply Discrepancy Reports, the Under Secretary of Defense for Acquisition, Technology, and Logistics, not TRANSCOM, is the proper organization to lead this analysis. 
We agree and have amended our first recommendation accordingly. DOD concurred with our second recommendation that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct the Secretaries of the military departments, in collaboration with TRANSCOM, to establish a process to identify and implement the necessary corrective actions to ensure that DOD installations identified by Surface Deployment and Distribution Command’s Emergency Response Reports provide secure hold for sensitive shipments or assist them in locating the nearest alternate means to secure those shipments. Specifically, DOD stated that the Under Secretary of Defense for Intelligence reissued Department of Defense Instruction 5100.76. DOD also noted that the Under Secretary of Defense for Acquisition, Technology and Logistics will work with the Under Secretary of Defense (Intelligence), TRANSCOM, and the military departments to develop corrective actions and operating methods geared toward eliminating secure-hold denials. We agree that these actions, if fully implemented, would address our recommendation. DOD concurred with our third recommendation that the Secretary of Defense, in coordination with the Chairman of the Joint Chiefs of Staff, direct TRANSCOM to examine the data limitations of the DOT Federal Motor Carrier Safety Administration’s Safety Measurement System raised in our February 2014 report on modifying DOT’s Compliance, Safety, and Accountability program and determine what changes, if any, should be made to the process used by DOD to decide HAZMAT carrier eligibility and evaluate performance for the Transportation Protective Services program. However, DOD did not identify any specific steps it planned to take to address our recommendation. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense and the Secretary of Transportation. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To examine the statutes, regulations, guidance, policies, and procedures that govern the Department of Defense’s (DOD) handling, labeling, and packaging of hazardous material (HAZMAT) shipments to support military operations, we reviewed the Hazardous Materials Transportation Act, as amended; regulations issued by the Department of Transportation (DOT), including the Hazardous Materials Regulation in Title 49 of the Code of Federal Regulations; relevant sections of DOD’s Defense Transportation Regulation, Joint Staff guidance, including Joint Publication 4-01, The Defense Transportation System; and international standards for the transport of HAZMAT. To better understand the statutes, regulations, guidance, policies, and procedures that govern DOD’s handling, labeling, and packaging of HAZMAT shipments to support military operations, several members of the audit team completed a 2-day DOD-sponsored course on HAZMAT that covered the above- mentioned areas and passed a comprehensive exam at the end of the course demonstrating their understanding and knowledge of those topics. 
To corroborate our understanding of this framework, we interviewed officials from the Office of the Deputy Assistant Secretary of Defense for Transportation Policy, Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration, the U.S. Transportation Command (TRANSCOM), and DOT’s Pipeline and Hazardous Materials Safety Administration. To understand DOD-specific requirements for transporting HAZMAT shipments, we reviewed DOT’s Hazardous Materials Regulations and DOD’s Defense Transportation Regulation and interviewed officials from TRANSCOM—specifically, officials from the Surface Deployment and Distribution Command, which is the Army service component command of TRANSCOM that plans, transports, and tracks DOD shipments. To examine the extent to which DOD faces any challenges in implementing its policies and procedures for transporting HAZMAT in a safe, timely, and cost-effective manner, we selected and visited several DOD locations involved in the transport of HAZMAT, including the U.S. Defense Supply and Distribution Center (Richmond, Virginia); TRANSCOM’s Surface Deployment and Distribution Command; and two of DOD’s five major aerial ports: Dover Air Force Base (Dover, Delaware) and Norfolk Naval Base (Norfolk, Virginia). Additionally, we interviewed officials from another major aerial port—Travis Air Force Base (Fairfield, California). We also visited Aberdeen Proving Ground (Aberdeen, Maryland) to review the transport of arms, ammunitions, and explosives, and to visit a secure hold location for that type of HAZMAT. We selected these locations because they provided a cross section of the various modes of transportation and hazardous-material classes. We also selected these locations because they provided us with information on the various central locations through which DOD transports these materials. We focused our review on surface and air modes of transport—on surface transport because most individual shipments are transported by highway (94 percent of all shipments in fiscal year 2013) and on air transport because that mode generally has more-restrictive requirements, for example the quantity of certain HAZMAT (like acetic acid, class 8 HAZMAT) allowed on passenger air transport is lower than in other modes. We reviewed data from TRANSCOM’s Global Air Transportation Execution System for 5 fiscal years (fiscal years 2009 through 2013) and the Defense Transportation Tracking System for fiscal years 2012 and 2013 to analyze records related to pretransportation and transportation functions (e.g., handling, labeling, and packaging activities and the transport of HAZMAT in commerce to the final destination point), respectively. We compared data from those records with requirements in the Defense Transportation Regulation. We found that the data we examined were sufficiently reliable for identifying challenges and the extent to which they affect the transport of HAZMAT. We examined sections of the Hazardous Materials Regulations and the Defense Transportation Regulation related to documenting and packaging HAZMAT and to ensuring the secure hold of sensitive and high-risk HAZMAT at DOD installations. We compared those sections with records we examined related to pretransportation and transportation functions (e.g., handling, labeling, and packaging activities and the transport of HAZMAT in commerce to the final destination point). 
Specifically, we reviewed fiscal years 2009 to 2013 records from TRANSCOM’s Air Mobility Command’s Global Air Transportation Execution System database, which is the aerial port operations and management information system designed to support automated shipments and passenger processing. We also reviewed data from fiscal year 2012 and 2013 from the Emergency Response Reports provided from the Defense Transportation Tracking System, part of TRANSCOM’s Surface Deployment and Distribution Command, which catalogs information involving shipments of arms, ammunition and explosives, and other sensitive cargo. We found that the data we examined were sufficiently reliable to identify secure-hold access issues that had been reported. We reviewed records from DOT’s Federal Motor Carrier Safety Administration to understand the Compliance Safety and Accountability systems and how the safety scores generated by the system are used to evaluate the Transportation Protective Services’ 53 carriers that transport DOD’s sensitive arms, ammunition, and explosives shipments. To corroborate our understanding of the documents and data we analyzed, we interviewed officials from the Office of the Secretary of Defense; the Office of the Deputy Assistant Secretary of Defense for Transportation Policy; the Defense Logistics Agency; the Army, the Air Force, the Navy, the Marine Corps; and DOT’s Pipeline and Hazardous Materials Safety Administration. We visited or contacted officials from the following DOD and DOT organizations during our review: Defense Logistics Agency, Aviation Branch, Richmond, Virginia; Defense Logistics Agency, U.S. Defense Supply and Distribution Center, Richmond, Virginia; Headquarters, Defense Logistics Agency, Fort Belvoir, Virginia; Headquarters, Department of the Army, Pentagon, Arlington, Virginia; Headquarters, U.S. Marine Corps, Naval Annex, Arlington, Virginia; Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), Office of the Deputy Assistant Secretary of Defense (Transportation Policy), Mark Center, Alexandria, Virginia; Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), Office of the Deputy Assistant Secretary of Defense (Supply Chain Integration), Mark Center, Alexandria, Virginia; U.S. Air Force, Dover Air Force Base, Dover, Delaware; U.S. Air Force, Travis Air Force Base, Fairfield, California; U.S. Air Force, Headquarters Air Materiel Command, Wright- Patterson Air Force Base, Dayton, Ohio; U.S. Army Operations Center, Pentagon, Arlington, Virginia; U.S. Army, Public Health Command, Army Institute of Public Health, Aberdeen Proving Ground, Maryland; U.S. Army, Surface Deployment and Distribution Command, 596th Transportation Brigade, Sunny Point, North Carolina; U.S. Army, Surface Deployment and Distribution Command, 597th Transportation Brigade, Fort Eustis, Virginia; U.S. Army Sustainment Command, Rock Island Arsenal, Rock Island, Illinois; U.S. Army Medical Command, Fort Sam Houston, San Antonio, U.S. Army, Medical Research Institute of Chemical Defense, Aberdeen Proving Ground, Aberdeen, Maryland; U.S. Navy, Naval Station Norfolk, Norfolk, Virginia; U.S. Transportation Command, Scott Air Force Base, Illinois; Air Mobility Command; Military Sealift Command; Surface Deployment and Distribution Command; o Defense Transportation Tracking System Office; and U.S. Department of Transportation, Washington, D.C. 
We conducted this performance audit from April 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, James A. Reynolds, Assistant Director; Adam Anguiano; Alfonso Garcia; Brandon Jones; Mae Jones; Oscar W. Mardis; Terry Richardson; and Michael Shaughnessy made key contributions to this report.
Over 3 billion tons of HAZMAT are transported by commercial carriers in the United States each year. DOD accounted for about 1.6 million HAZMAT shipments in fiscal year 2013, using commercial and military carriers. These shipments can be high risk and highly sensitive and if improperly handled, labeled, or packaged could result in the loss of life, property damage, and harm to national security interests. The National Defense Authorization Act for Fiscal Year 2013 mandates GAO to review DOD's guidance, policies, and procedures regarding HAZMAT shipments. GAO examined the (1) statutes, regulations, guidance, policies, and procedures that govern DOD's handling, labeling, and packaging of HAZMAT shipments to support military operations and (2) extent to which DOD faces any challenges in implementing its policies and procedures for transporting HAZMAT in a safe, timely, and cost-effective manner. GAO examined DOD's and DOT's regulations and related DOD documentation for the transport of HAZMAT and found the 2009-13 data it examined sufficiently reliable for the purposes of the review. The handling, labeling, and packaging of hazardous materials (HAZMAT) shipments are governed by a complex framework of statutes and regulations prescribed by multiple civilian and military entities (see figure below). The Hazardous Materials Transportation Act is the primary statutory regime governing the transport of HAZMAT in the United States. To implement the act, the Department of Transportation (DOT) issued the Hazardous Materials Regulations. The Defense Transportation Regulation prescribes how the Department of Defense (DOD) is to transport HAZMAT. DOD has experienced some challenges in implementing HAZMAT regulations and other guidance, which can adversely affect the safe, timely, and cost-effective transportation of HAZMAT. For example, GAO found the following: Improper documentation and packaging of HAZMAT led to delays at DOD transportation aerial ports. DOD data show that about 27 percent of HAZMAT received at all five major domestic military aerial ports over the past 5 fiscal years were delayed, primarily due to noncompliant documentation and packaging. At least 44 times during fiscal years 2012 and 2013, DOD installations did not provide commercial carriers with access to secure hold areas for arms, ammunition, and explosives shipments or assist them in finding alternatives, as required by DOD regulations. Although there were about 70,891 of these types of arms, ammunition, and explosives shipments in fiscal years 2012 and 2013, not providing secure hold for even a small percentage of these sensitive shipments poses a risk to public safety and national security. DOD may determine which carriers should be eligible to transport its most-sensitive HAZMAT shipments using a safety score that lacks sufficient information to reliably assess safety performance for many carriers. DOD uses DOT's Safety Measurement System scores to determine which carriers are eligible to participate in its Transportation Protective Services program. However, in February 2014 GAO found that scores from many carriers lack sufficient safety performance data to reliably compare them with other commercial carriers' scores. GAO recommends that DOD improve the documentation and secure hold of HAZMAT shipments and examine limitations on data used to select certain HAZMAT carriers. DOD generally agreed with the recommendations but requested one be directed to a different office. GAO agreed and made the associated change.
You are an expert at summarizing long articles. Proceed to summarize the following text: Since the 1960s, geostationary and polar-orbiting environmental satellites have been used by the United States to provide meteorological data for weather observation, research, and forecasting. NOAA’s National Environmental Satellite Data and Information Service (NESDIS) is responsible for managing the civilian geostationary and polar-orbiting satellite systems as two separate programs, called GOES and the Polar Operational Environmental Satellites, respectively. Unlike polar-orbiting satellites, which constantly circle the earth in a relatively low polar orbit, geostationary satellites can maintain a constant view of the earth from a high orbit of about 22,300 miles in space. NOAA operates GOES as a two-satellite system that is primarily focused on the United States (see fig. 1). These satellites are uniquely positioned to provide timely environmental data to meteorologists and their audiences on the earth’s atmosphere, its surface, cloud cover, and the space environment. They also observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track their movement and intensity to reduce or avoid major losses of property and life. Furthermore, the satellites’ ability to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. To provide continuous satellite coverage, NOAA acquires several satellites at a time as part of a series and launches new satellites every few years. Three satellites—GOES-11, GOES-12, and GOES-13—are currently in orbit. Both GOES-11 and GOES-12 are operational satellites, while GOES-13 is in an on-orbit storage mode. It is a backup for the other two satellites should they experience any degradation in service. The others in the series, GOES-O and GOES-P, are planned for launch over the next few years. NOAA is also planning a future generation of satellites, known as the GOES-R series, which are planned for launch beginning in 2012. Each of the operational geostationary satellites continuously transmits raw environmental data to NOAA ground stations. The data are processed at these ground stations and transmitted back to the satellite for broadcast to primary weather services both in the United States and around the world, including the global research community. Raw and processed data are also distributed to users via ground stations through other communication channels, such as dedicated private communication lines and the Internet. Figure 2 depicts a generic data relay pattern from the geostationary satellites to the ground stations and commercial terminals. To date, NOAA has procured three series of GOES satellites and is in the planning stages to acquire a fourth one (see table 1). In 1970, NOAA initiated its original GOES program based on experimental geostationary satellites developed by NASA. While these satellites operated effectively for many years, they had technical limitations. For example, this series of satellites was “spin-stabilized,” meaning that the satellites slowly spun while in orbit to maintain a stable position with respect to the earth. As a result, the satellite viewed the earth only about 5 percent of the time and had to collect data very slowly, capturing one narrow band of data each time its field-of-view swung past the earth. A complete set of sounding data took 2 to 3 hours to collect. 
In 1985, NOAA and NASA began to procure a new generation of GOES, called the GOES I-M series, based on a set of requirements developed by NOAA’s National Weather Service, NESDIS, and NASA, among others. GOES I-M consisted of five satellites, GOES-8 through GOES-12, and was a significant improvement in technology from the original GOES satellites. For example, GOES I-M was “body-stabilized,” meaning that the satellite held a fixed position in orbit relative to the earth, thereby allowing for continuous meteorological observations. Instead of maintaining stability by spinning, the satellite would preserve its fixed position by continuously making small adjustments in the rotation of internal momentum wheels or by firing small thrusters to compensate for drift. These and other enhancements meant that the GOES I-M satellites would be able to collect significantly better quality data more quickly than the older series of satellites. In 1998, NOAA began the procurement of satellites to follow GOES I-M, called the GOES-N series. This series used existing technologies for the instruments and added system upgrades, including an improved power subsystem and enhanced satellite pointing accuracy. Furthermore, the GOES-N satellites were designed to operate longer than their predecessors. This series originally consisted of four satellites, GOES-N through GOES-Q. However, the option for the GOES-Q satellite was cancelled based on NOAA’s assessment that it would not need the final satellite to continue weather coverage. In particular, the agency found that the GOES satellites already in operation were lasting longer than expected and that the first satellite in the next series could be available to back up the last of the GOES-N satellites. As noted earlier, the first GOES-N series satellite—GOES-13—was launched in May 2006. The GOES-O and GOES-P satellites are currently in production and are expected to be launched in July 2008 and July 2011, respectively. NOAA is currently planning to procure the next series of GOES satellites, called the GOES-R series. This series will consist of four satellites, GOES-R through GOES-U, and is intended to provide the first major technological advance in instrumentation since the first satellite of the GOES I-M series was launched in 1994. NOAA is planning for the GOES-R program to improve on the technology of prior GOES series, in terms of both system and instrument improvements. The system improvements are expected to fulfill more demanding user requirements and to provide more rapid information updates. Table 2 highlights key system-related improvements GOES-R is expected to make to the geostationary satellite program. The instruments on the GOES-R series are expected to significantly increase the clarity and precision of the observed environmental data. NOAA plans to acquire five different types of instruments. The program office considers two of the instruments—the Advanced Baseline Imager and the Hyperspectral Environmental Suite—to be most critical because they will provide data for key weather products. Table 3 summarizes the planned instruments and their expected capabilities. The program management structure for the GOES-R program differs from past GOES programs. Prior to the GOES-R series, NOAA was responsible for program funding, procurement of the ground elements, and on-orbit operation of the satellites, while NASA was responsible for the procurement of the spacecraft, instruments, and launch services. 
NOAA officials stated that this approach limited the agency’s insight and management involvement in the procurement of major elements of the system. In contrast, under the GOES-R management structure, NOAA has responsibility for the procurement and operation of the overall system—including spacecraft, instruments, and launch services. NASA is responsible for the procurement of the individual instruments until they are transferred to the overall GOES-R system contractor for completion and integration onto the spacecraft. Additionally, to take advantage of NASA’s acquisition experience and technical expertise, NOAA located the GOES-R program office at NASA’s Goddard Space Flight Center. It also designated key program management positions to be filled with NASA personnel (see fig. 3). These positions include the deputy system program director role for advanced instrument and technology infusion, the project manager for the flight portion of the system, and the deputy project manager for the ground and operations portion of the system. NOAA officials explained that they changed the management structure for the GOES-R program in order to streamline oversight and fiduciary responsibilities, but that they still plan to rely on NASA’s expertise in space system acquisitions. Satellite programs are often technically complex and risky undertakings, and as a result, they often experience technical problems, cost overruns, and schedule delays. We and others have reported on a historical pattern of repeated missteps in the procurement of major satellite systems, including the National Polar-orbiting Operational Environmental Satellite System (NPOESS), the GOES I-M series, the Space Based Infrared System High Program (SBIRS-High), and the Advanced Extremely High Frequency Satellite System (AEHF). Table 4 lists key problems experienced with these programs and is followed by a summary of each program. NPOESS is being developed to combine two separate polar-orbiting environmental satellite systems currently operated by NOAA and the Department of Defense (DOD) into a single state-of-the-art environmental monitoring system. A tri-agency program office—comprised of officials from DOD, NOAA, and NASA—is responsible for managing this program. Within the program office, each agency has the lead on certain activities. NOAA has overall program management responsibility for the converged system and for satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. Since its inception, the NPOESS program has encountered cost overruns and schedule delays. Specifically, within a year of the contract award, the program cost estimate increased by $1.2 billion, from $6.9 billion to $8.1 billion, and the expected availability of the first satellite was delayed by 20 months. We reported in September 2004 that these cost increases and schedule delays were caused, in part, by changes in the NPOESS funding stream. Subsequently, in November 2005, we reported that problems in the development of a critical sensor would likely cause program costs to grow by at least another $3 billion and the schedule for the first launch would likely be delayed by almost 3 years. The senior executive oversight committee for NPOESS was expected to make a decision in December 2005 on the direction of the program—which involved increased costs, delayed schedules, and reduced functionality.
We urged this committee to make a decision quickly so that the program could proceed. However, in late November 2005, the NPOESS program’s anticipated cost growth triggered a legislative requirement forcing DOD to reassess its options and to recertify the program. In June 2006, DOD decided to reduce the system’s capabilities and number of satellites from six to four, and announced that the newly restructured program was estimated to cost $11.5 billion and the launch of the first satellite had been delayed by at least 4 years from the time the contract was awarded. NPOESS’ problems involved a number of factors, including unrealistic cost and schedule estimates, insufficient technical maturity of critical sensors at a key development milestone, poor performance at multiple levels of contractor and government management, insufficient executive oversight, and excessive award fee payments to the contractor. Specifically, in 2003, an Air Force cost group performed an independent cost estimate for NPOESS and found that, based on actual outcomes from historical programs similar to NPOESS, the program office underestimated contract costs by almost $1 billion. This group also concluded that the program office underestimated the time needed to integrate the sensors onto the spacecraft by almost 80 percent. Despite the differences in planned cost and schedule, the program office moved forward with its own estimates—and, in turn, established unrealistic budgets that led, in part, to the eventual restructuring of the program. Further, an independent review team charged with assessing the NPOESS program found that the program management office did not sufficiently validate the subcontractors’ design work on various sensors. As a result, the sensors were approved to move into production before they reached a sufficient level of technical maturity. This resulted in unexpected technical problems during sensor production. We also reported that the development issues on one critical sensor were attributed, in part, to the subcontractor’s inadequate project management. Specifically, after a series of technical problems, internal review teams sent by the prime contractor and the program office found that the sensor’s subcontractor had deviated from a number of contract, management, and policy directives set out by the main office and that both management and process engineering were inadequate. Neither the contractor nor the program office recognized the underlying problems in time to fix them. Further, an independent review team reported that the program management office did not have the technical system engineering support it needed to effectively manage the contractor. In addition, the program office and contractor set aside less than 10 percent of their budgets in management reserve—an amount that was insufficient to effectively deal with these technical problems. Just 2 years into the contract, the prime contractor had spent or allocated over 90 percent of its reserves. The involvement of the NPOESS executive leadership committee was also inconsistent and indecisive—it wavered from frequent heavy involvement to occasional meetings with few resulting decisions. In the 32-month period from May 2003 through December 2005, the committee met formally six times.
Despite mounting evidence of the seriousness of the critical sensor problems, the committee did not effectively challenge the program manager’s optimistic assessments, and from May 2003 through December 2004, convened only twice to consider the program’s status. In May 2006, the Department of Commerce’s (Commerce) Inspector General reported that the NPOESS award fee structure was not an effective system for promoting high-quality performance by the contractor. Despite the significant delays and cost overruns on the program, the contractor received about 84 percent of the available fee pool for the first six billing periods. In its development of the GOES I-M series, NOAA experienced severe technical challenges, massive cost overruns, and risky schedule delays. The overall development cost of the program was over three times greater than planned, escalating from $640 million to approximately $2 billion. Additionally, the launch of the first satellite of this series, which had been planned for July 1989, did not occur until April 1994. This nearly 5-year schedule delay left NOAA in danger of losing geostationary satellite coverage, although no gap in coverage occurred. We reported that these problems were caused by a number of factors, including insufficient technical readiness of the satellite design prior to contract award, unrealistic cost and schedule estimates, and inadequate management by NOAA and NASA. Specifically, NOAA and NASA did not require any engineering analyses to be completed prior to the award of the GOES I-M contract. As a result, both agencies were unable to anticipate the level of complexity of NOAA’s requirements (related to the satellite’s pointing accuracy) or the contractor’s approach to meeting those requirements. This unanticipated design complexity led to additional analyses, redesigns, and remanufacture of parts, which resulted in increased costs and schedule delays. Additionally, the lack of adequate understanding of the system prior to contract award also prevented program officials from establishing realistic cost and schedule estimates for the program. The inadequate management of the GOES I-M program—by both the government and contractor—played a significant part in its cost increases and program delays. Specifically, NASA and NOAA made the decision to forgo preliminary studies of the system because of fiscal constraints and pressure to launch the first satellite as quickly as possible. This decision was compounded by NASA’s limited technical support in the areas of optics, satellite control systems, and thermal engineering. Additionally, both the prime contractor and major subcontractor had little experience in directing the design of complex weather instruments. The subcontractor had also noted that it was not prepared for GOES I-M. For example, the instruments were expected to meet manufacturing and testing standards that the subcontractor had never experienced before. We recommended Congress consider directing NASA and NOAA to report on their progress in resolving these problems and the timeframe and cost for achieving proposed solutions. Further, we recommended that funds for the production and testing of the satellites be withheld until a favorable solution was identified and reported to Congress. SBIRS-High satellites are being developed to replace DOD’s older missile warning satellites. 
In addition to missile warning and missile defense missions, the satellites are also expected to perform technical intelligence and battlespace characterization missions. After the program was initiated in 1994, it faced cost, scheduling, and technology problems. SBIRS-High had experienced schedule slips of at least 6 years and cost increases that have triggered legislative requirements to reassess and recertify the program several times—most recently in 2005. While DOD’s total program cost estimate was initially about $3.9 billion, it is now $9.9 billion—nearly a 150 percent unit cost increase. DOD is currently reexamining this program, potential alternatives, and cost estimates. Our reviews have attributed past problems on the SBIRS-High program to serious hardware and software design problems, insufficient oversight of contractors, and technology challenges. Further, an independent review team chartered by DOD reported that a root cause of these problems was that system requirements were not well understood by DOD when the program began. Specifically, the requirements-setting process was often ad hoc, many decisions on requirements were deferred to the contractor, and the program was too immature to enter system design and development. As a result, there was too much instability on the program after the contract award—leading DOD to undertake four major replanning efforts. We made multiple recommendations to improve this program, including commissioning an independent task force to assess the development schedule, the stability of the program design, and software development practices, and to provide guidance for addressing the program’s underlying problems. In addition, we recommended that DOD establish a mechanism for ensuring that the knowledge gained from the assessment was used to determine whether further programmatic changes were needed to strengthen oversight, adjust cost and schedule estimates, and address requirements changes. AEHF is a satellite system intended to be DOD’s next generation of high-speed, protected communication satellites and to replace the existing Milstar system. In 2003, we reported that cost estimates developed by the Air Force for this program increased from $4.4 billion in January 1999 to $5.6 billion in June 2001 for five satellites. Moreover, DOD would not meet its accelerated target date for launching the first satellite in December 2004. To minimize costs, DOD then decided to purchase three satellites with options to purchase the fourth and fifth—which brought the program cost to $4.7 billion. Despite this action, AEHF costs grew to about $6.1 billion—an increase of more than 15 percent over the baseline estimate, which triggered legislative requirements to assess and certify the program. Schedule slippages for launching this communication system have now stretched to over 3 years. A number of factors contributed to cost and schedule overruns and performance shortfalls. First, in the early phases of the AEHF program, DOD substantially and frequently altered requirements—resulting in major design modifications that increased costs by millions of dollars. For instance, a new requirement for additional anti-jamming protection led to a cost increase of $100 million and an added set of requirements for training, support, and maintainability that cost an additional $90 million.
Second, based on a satellite constellation gap caused by the failure of a Milstar satellite, DOD accepted a high-risk schedule that turned out to be overly optimistic and highly compressed—leaving little room for error and depending on a chain of events taking place at certain times. Third, AEHF allocated 4 percent of its budget to management reserve—which was an inadequate amount to cover unforeseen problems for the duration of the program. Between December 2002 and June 2005, the contractor had depleted about 86 percent of its reserves with 5 years remaining on the contract. Lastly, at the time DOD decided to accelerate the program, it did not have the funding needed to support the activities or the manpower needed to design and build the satellites more quickly. The lack of funding also contributed to schedule delays, which in turn, caused more cost increases. We made a number of recommendations to improve this program and others, including implementing processes and policies that stabilize requirements and addressing shortfalls in staff with science and engineering backgrounds. These recommendations were made to assure that DOD had an investment strategy in place that would better match resources to requirements. NOAA is nearing the end of the preliminary design phase on its GOES-R program and plans to award a contract for the system’s development in August 2007; however, because of concerns with potential cost growth, NOAA’s plans for the GOES-R procurement could change in the near future. To date, NOAA has issued contracts for the preliminary design of the overall GOES-R system to three vendors and expects to award a contract to one of these vendors to develop the system in August 2007. In addition, to reduce the risks associated with developing new instruments, NASA has issued contracts for the early development of one critical instrument and for the preliminary designs of four other instruments. The agency plans to award these contracts and then turn them over to the contractor responsible for the overall GOES-R program. However, this approach is under review and NOAA may wait until the instruments are fully developed before turning them over to the system contractor. Table 5 provides a summary of the status of contracts for the GOES-R program. According to program documentation provided to the Office of Management and Budget in 2005, the current life cycle cost estimate for GOES-R is approximately $6.2 billion (see table 6). However, program officials reported that this estimate is over 2 years old and is under review. NOAA is tentatively planning to launch the first GOES-R series satellite in September 2012. The development of the schedule for launching the satellites was driven by a requirement that the satellites be available to back up the last remaining GOES satellites (GOES-O and GOES-P) should anything go wrong during the planned launches of these satellites. Table 7 provides a summary of the planned launch schedule for the GOES-R series. Commerce is scheduled to make a major acquisition decision before the end of this year. Commerce will decide whether or not the GOES-R series should proceed into the development and production phase in December 2006. Program officials reported that the final request for proposal on the GOES-R contract would be released upon completion of this decision milestone. However, NOAA’s plans for the GOES-R procurement could change in the near future because of concerns with potential cost growth. 
Given its experiences with cost growth on the NPOESS acquisition, NOAA recently asked program officials to recalculate the total cost of the estimated $6.2 billion GOES-R program. In May 2006, program officials estimated that the life cycle cost could reach $11.4 billion. The agency then requested that the program identify options for reducing the scope of requirements for the satellite series. Program officials reported that there are over 10 viable options under consideration, including options for removing one or more of the planned instruments. The program office is also reevaluating its planned acquisition schedule based on the potential program options. Specifically, program officials stated that if there is a decision to make a major change in system requirements, they will likely extend the preliminary design phase, delay the decision to proceed into the development and production phase, and delay the contract award date. NOAA officials estimated that a decision on the future scope and direction of the program could be made by the end of September 2006. NOAA has taken steps to apply lessons learned from problems encountered on other satellite programs to the GOES-R procurement. Key lessons include (1) establishing realistic cost and schedule estimates, (2) ensuring sufficient technical readiness of the system’s components prior to key decisions, (3) providing sufficient management at government and contractor levels, and (4) performing adequate senior executive oversight to ensure mission success. NOAA has established plans designed to mitigate the problems faced in past acquisitions; however, many activities remain to fully address these lessons. Until it completes these activities, NOAA faces an increased risk that the GOES-R program will repeat the increased cost, schedule delays, and performance shortfalls that have plagued past procurements. We and others have reported that space system acquisitions are strongly biased to produce unrealistically low cost and schedule estimates in the acquisition process. For example, we testified last July on the continued large cost increases and schedule delays being encountered on military space acquisition programs—including NPOESS, SBIRS-High, and AEHF. We noted that during program formulation, the competition to win funding is intense and has led program sponsors to minimize their program cost estimates. Furthermore, a task force chartered by DOD to review the acquisition of military space programs found that independent cost estimates and government program assessments have proven ineffective in countering this tendency. NOAA programs face similar unrealistic estimates. For example, the total development cost of the GOES I-M acquisition was over three times greater than planned, escalating from $640 million to $2 billion. The delivery of the first satellite was delayed by 5 years. NOAA has several efforts under way to improve the reliability of its cost and schedule estimates for the GOES-R program. NOAA’s Chief Financial Officer has contracted with a cost-estimating firm to complete an independent cost estimate, while the GOES-R program office has hired a support contractor to assist with its internal program cost estimating. The program office is re-assessing its estimates based on preliminary information from the three vendors contracted to develop preliminary designs for the overall GOES-R system. 
Once the program office and independent cost estimates are completed, program officials intend to compare them and to develop a revised programmatic cost estimate that will be used in its decision on whether to proceed into system development and production. In addition, NOAA has planned for an independent review team—consisting of former senior industry and government space acquisition experts—to provide an assessment of the program office and independent cost estimates for this decision milestone. To improve its schedule reliability, the program office is currently conducting a schedule risk analysis in order to estimate the amount of adequate reserve funds and schedule margin needed to deal with unexpected problems and setbacks. Finally, the NOAA Observing System Council submitted a prioritized list of GOES-R system requirements to the Commerce Undersecretary for approval. This list is expected to allow the program office to act quickly in deleting lower priority requirements in the event of severe technical challenges or shifting funding streams. While NOAA acknowledges the need to establish realistic cost and schedule estimates, several hurdles remain. As discussed earlier, the agency is considering reducing the requirements for the GOES-R program to mitigate the increased cost estimates for the program. Therefore, the agency’s efforts to date to establish realistic cost estimates cannot be fully effective in addressing this lesson until this uncertainty is resolved. NOAA suspended the work being performed by its independent cost estimator until a decision is made on the scope of the program. Further, the agency has not yet developed a process to evaluate and reconcile the independent and program office cost estimates once final program decisions are made. Without this process, the agency may lack the objectivity necessary to counter the optimism of program sponsors and is more likely to move forward with an unreliable estimate. Until it completes this activity, NOAA faces an increased risk that the GOES-R program will repeat the cost increases and schedule delays that have plagued past procurements. Space programs often experience unforeseen technical problems in the development of critical components as a result of having insufficient knowledge of the components and their supporting technologies prior to key decision points. One key decision point is when an agency decides on whether the component is sufficiently ready to proceed from a preliminary study phase into a development phase; this decision point results in the award of the development contract. Another key decision point occurs during the development phase when an agency decides whether the component is ready to proceed from design into production (also called the critical design review). Without sufficient technical readiness at these milestones, agencies could proceed into development contracts on components that are not well understood and enter into the production phase of development with technologies that are not yet mature. For example: On the GOES I-M series, NOAA and NASA did not require engineering analyses prior to awarding the development contracts in order to accelerate the schedule and launch the first satellite. The lack of these studies resulted in unexpected technical issues in later acquisition phases—including the inability of the original instrument designs to withstand the temperature variations in the geostationary orbit. 
Both the NPOESS and SBIRS-High programs committed funds for system development before the design was proven and before the technologies had properly matured. For instance, at the critical design review milestone for a key NPOESS sensor, the program office decided that the sensor was ready to proceed into production even though an engineering model had not been constructed. This sensor has since faced severe technical challenges that directly led to program-wide cost and schedule overruns. To address the lesson learned from the GOES I-M experience, in 1997, NOAA began preliminary studies on technologies that could be used on the GOES-R instruments. These studies targeted existing technologies and assessed how they could be expanded for GOES-R. The program office is also conducting detailed trade-off studies on the integrated system to improve its ability to make decisions that balance performance, affordability, risk, and schedule. For instance, the program office is analyzing the potential architectures for the GOES-R constellation of satellites—the quantity and configuration of satellites, including how the instruments will be distributed over these satellites. These studies are expected to allow for a more mature definition of the system specifications. NOAA has also developed plans to have an independent review team assess project status on an annual basis once the overall system contract has been awarded. In particular, this team will review technical, programmatic, and management areas; identify any outstanding risks; and recommend corrective actions. This measure is designed to ensure that sufficient technical readiness has been reached prior to the critical design review milestone. The program office’s ongoing studies and plans are expected to provide greater insight into the technical requirements for key system components and to mitigate the risk of unforeseen problems in later acquisition phases. However, the only instrument currently under development—the Advanced Baseline Imager—has experienced technical problems, which could be an indication of more problems to come. These problems relate to, among other things, the design complexity of the instrument’s detectors and electronics. As a result, the contractor is experiencing negative cost and schedule performance trends. As of May 2006, the contractor had incurred a total cost overrun of almost $6 million, with the instrument’s development only 28 percent complete. In addition, from June 2005 to May 2006, it was unable to complete approximately $3.3 million worth of work. Unless risk mitigation actions are aggressively pursued to reverse these trends, we project the cost overrun at completion to be about $23 million. (See app. II for further detail on the Advanced Baseline Imager’s cost and schedule performance.) While NOAA expects to make a decision on whether to move the instrument into production (a milestone called the critical design review) in January 2007, the contractor’s current performance raises questions as to whether the instrument designs will be sufficiently mature by that time. Further, the agency does not have a process to validate the level of technical maturity achieved on this instrument or to determine whether the contractor has implemented sound management and process engineering to ensure that the appropriate level of technical readiness can be achieved prior to the decision milestone.
Until it does so, NOAA risks making a poor decision based on inaccurate or insufficient information—which could lead to unforeseen technical problems in the development of this instrument. In the past, we have reported on poor performance in the management of satellite acquisitions. The key drivers of poor management included inadequate systems engineering and earned value management capabilities, unsuitable allocation of contract award fees, inadequate levels of management reserve, and an inefficient decision-making and reporting structure within the program office. The NPOESS program office lacked adequate program control capabilities in systems engineering and earned value management to effectively manage the contractor’s cost, schedule, and technical performance. Furthermore, Commerce’s Inspector General reported that NOAA awarded the NPOESS contractor excessive award fees for a program plagued with severe technical problems and a consistent failure to meet cost and schedule targets. Additionally, on SBIRS-High, the program management office had fewer systems engineers than other historical space programs. As a result, the program did not have enough engineers to handle the workload of ensuring that system requirements properly flowed down into the designs of the system’s components. Further, the NPOESS and AEHF programs had less than 5 percent of funds allocated to management reserve at the start of the system’s development and spent or allocated over 85 percent of that reserve within 3 years of beginning development. On GOES I-M, NOAA found that it did not have the ability to make quick decisions on problems because the program office was managed by another agency. NOAA has taken numerous steps to restructure its management approach on the GOES-R procurement in an effort to improve performance and to avoid past mistakes. These steps include the following: The program office revised its staffing profile to provide for government staff to be located on-site at prime contractor and key subcontractor locations. The program office plans to increase the number of resident systems engineers from 31 to 54 to provide adequate government oversight of the contractor’s system engineering, including verification and validation of engineering designs at key decision points (such as the critical design review milestone). The program office has better defined the role and responsibilities of the program scientist, the individual who is expected to maintain an independent voice with regard to scientific matters and advise the program manager on related technical issues and risks. The program office also intends to add three resident specialists in earned value management to monitor contractor cost and schedule performance. NOAA has work under way to develop the GOES-R contract award fee structure and the award fee review board that are consistent with our recent findings, the Commerce Inspector General’s findings, and other best practices, such as designating a non-program executive as the fee-determining official to ensure objectivity in the allocation of award fees. NOAA and NASA have implemented a more integrated management approach that is designed to draw on NASA’s expertise in satellite acquisitions and increase NOAA’s involvement on all major components of the acquisition. The program office reported that it intended to establish a management reserve of 25 percent consistent with the recommendations of the Defense Science Board Report on Acquisition of National Security Space Programs.
While these steps should provide more robust government oversight and independent analysis capabilities, more work remains to be done to fully address this lesson. Specifically, the program office has not determined the appropriate level of resources it needs to adequately track and oversee the program, and the planned addition of three earned value management specialists may not be enough as acquisition activities increase. By contrast, after its recent problems and in response to the independent review team findings, NPOESS program officials plan to add 10 program staff dedicated to earned value, cost, and schedule analysis. An insufficient level of established capabilities in earned value management places the GOES-R program office at risk of making poor decisions based on inaccurate and potentially misleading information. Finally, while NOAA officials believe that assuming sole responsibility for the acquisition of GOES-R will improve their ability to manage the program effectively, this change also elevates the risk to mission success. Specifically, NOAA is taking on its first major system acquisition and, with it, an increased risk due to its lack of experience. Until it fully addresses the lesson of ensuring an appropriate level of resources to oversee its contractor, NOAA faces an increased risk that the GOES-R program will repeat the management and contractor performance shortfalls that have plagued past procurements. We and others have reported on NOAA’s significant deficiencies in its senior executive oversight of NPOESS. The lack of timely decisions and regular involvement of senior executive management was a critical factor in the program’s rapid cost and schedule growth. The senior executive committee was provided with monthly status reports that consistently described in explicit detail the growing costs and delays attributable to the development of a key instrument. Despite mounting evidence of the seriousness of the instrument’s problems, this committee convened only twice between May 2003 and December 2004 to consider the program’s status. NOAA formed its program management council in response to the lack of adequate senior executive oversight on NPOESS. In particular, this council is expected to provide regular reviews and assessments of selected NOAA programs and projects—the first of which is the GOES-R program. The council is headed by the NOAA Deputy Undersecretary and includes senior officials from Commerce and NASA. The council is expected to hold meetings to discuss GOES-R program status on a monthly basis and to approve the program’s entry into subsequent acquisition phases at key decision milestones—including contract award and critical design reviews, among others. Since its establishment in January 2006, the council has met regularly and has established a mechanism for tracking action items to closure. The establishment of the NOAA Program Management Council is a positive action that should support the agency’s senior-level governance of the GOES-R program. In moving forward, it is important that this council continue to meet on a regular basis and exercise diligence in questioning the data presented to it and making difficult decisions.
In particular, it will be essential that the results of all preliminary studies and independent assessments on technical maturity of the system and its components be reviewed by this council so that an informed decision can be made about the level of technical complexity it is taking on when proceeding past these key decision milestones. In light of the recent uncertainty regarding the future scope and cost of the GOES-R program, the council’s governance will be critical in making those difficult decisions in a timely manner. Procurement activities are under way for the next series of geostationary environmental satellites, called the GOES-R series—which is scheduled to launch its first satellite in September 2012. With the GOES-R system development contract planned for award in August 2007, NOAA is positioning itself to improve the acquisition of this system by incorporating the lessons learned from other satellite procurements, including the need to establish realistic cost estimates, ensure sufficient government and contractor management, and obtain effective executive oversight. However, further steps remain to fully address selected lessons. Specifically, NOAA has not yet developed a process to evaluate and reconcile the independent and government cost estimates. In addition, NOAA has not yet determined how it will ensure that a sufficient level of technical maturity will be achieved in time for an upcoming decision milestone, or determined the appropriate level of resources it needs to adequately track and oversee the program using earned value management. Until it completes these activities, NOAA faces an increased risk that the GOES-R program will repeat the increased cost, schedule delays, and performance shortfalls that have plagued past procurements. Recent concerns about the potential for cost growth on the GOES-R procurement have led the agency to consider reducing the scope of requirements for the satellite series. A decision on the future scope and direction of the program could be made by the end of September 2006. Once the decision is made, it will be important to move quickly to implement the decision in the agency budgets and contracts. To improve NOAA’s ability to effectively manage the procurement of the GOES-R system, we recommend that the Secretary of Commerce direct the NOAA Program Management Council to take the following three actions: Once the scope of the program has been finalized, establish a process for objectively evaluating and reconciling the government and independent life cycle cost estimates. Perform a comprehensive review of the Advanced Baseline Imager, using system engineering experts, to determine the level of technical maturity achieved on the instrument, to assess whether the contractor has implemented sound management and process engineering, and to assert that the technology is sufficiently mature before moving the instrument into production. Seek assistance from an independent review team to determine the appropriate level of resources needed at the program office to adequately track and oversee the contractor’s earned value management. Among other things, the program office should be able to perform a comprehensive integrated baseline review after system development contract award, provide surveillance of contractor earned value management systems, and perform project scheduling analyses and cost estimates. We received written comments on a draft of this report from the Department of Commerce (see app. III).
In the department’s response, the Deputy Secretary of Commerce agreed with our recommendations and identified plans for implementing them. Specifically, the department noted that it plans to establish a process for reconciling government and independent cost estimates and to evaluate the process and results with an independent team of recognized senior experts in the satellite acquisition field. The department also noted that an independent review team is planning to perform assessments of the technical maturity of the Advanced Baseline Imager and the extent to which the program management structure and reporting process will provide adequate oversight of the GOES-R system acquisition. Additionally, the department expressed concern regarding our use of a cost estimate that they considered to be premature and misleading. During the course of our review, NOAA provided us with a cost estimate that was later determined by agency officials to be inaccurate and was subsequently corrected. We have incorporated the revised cost estimate of $11.4 billion for the overall GOES-R program to ensure that all cost estimates reported at this time are accurate. The department provided additional technical corrections, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Commerce, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) determine the status of and plans for the Geostationary Operational Environmental Satellites-R series (GOES-R) procurement and (2) identify and evaluate the actions that the project management team is taking to ensure that past problems experienced in procuring other satellite programs are not repeated. To accomplish these objectives, we focused our review on the National Oceanic and Atmospheric Administration’s (NOAA) GOES-R program office, the organization responsible for the overall GOES-R program. To determine the status of and plans for the GOES-R series procurement, we reviewed various program office plans and management reports such as acquisition schedules, cost estimates, and planned system requirements. Furthermore, we conducted interviews with NOAA and National Aeronautics and Space Administration (NASA) officials to determine key dates for future GOES-R acquisitions efforts and milestones, and potential changes in program scope, cost, and schedule. 
To identify the steps the GOES-R project management team is taking to ensure that past problems experienced in procuring other satellite series are not repeated, we analyzed our past body of work on major space system acquisitions, including the Advanced Extremely High Frequency satellites, the GOES I-M satellites, the National Polar-orbiting Operational Environmental Satellite System, and the Space Based Infrared System High program in order to identify key lessons. We also analyzed findings from other government reports on satellite procurements, such as those by the Defense Science Board–Air Force Scientific Advisory Board Joint Task Force and the Department of Commerce’s Office of Inspector General. We assessed relevant management documents, such as cost reports and program risk plans. Our evaluation included the application of earned value analysis techniques to data from contractor cost performance reports over an 11-month period (from June 2005 to May 2006). We also conducted interviews with agency officials to identify and to evaluate the adequacy of the actions taken to address these lessons. We obtained comments on a draft of this report from officials at the Department of Commerce and incorporated these comments as appropriate. We performed our work at NOAA and NASA offices in the Washington, D.C., metropolitan area between December 2005 and August 2006 in accordance with generally accepted government auditing standards. The development of one of the critical GOES-R instruments, the Advanced Baseline Imager (ABI), is experiencing technical challenges and, as a result, the contractor is missing cost and schedule targets. Despite the uncertainty regarding the future scope of the GOES-R program, it is expected that the requirements for this instrument will not change. Contractor-provided data from June 2005 to May 2006 indicates that ABI’s cost performance is experiencing a trend of negative variances. Figure 4 shows the 11-month cumulative cost variance for the ABI contract. As of May 2006, the contractor has incurred a total cost overrun of almost $6 million with ABI development only 28 percent complete. This information is useful because trends tend to continue and can be difficult to reverse unless management attention is focused on key risk areas and risk mitigation actions are aggressively pursued. Studies have shown that, once work is 15 percent complete, the performance indicators are indicative of the final outcome. Based on contractor performance from June 2005 to May 2006, we estimated that the current ABI instrument contract—which is worth approximately $360 million—will overrun its budget by between $17 million and $47 million. We project the most likely cost overrun to be about $23 million. The contractor, in contrast, estimates about a $7 million overrun at completion of the ABI contract. Given that the contractor has 72 percent of work remaining and has already accumulated a cost overrun of $5.9 million, the likelihood that the contractor will meet its estimated projection is small. Our analysis also indicates that the contractor has been unable to meet its planned schedule targets. Figure 5 shows the 11-month cumulative schedule variance for the ABI contract. From June 2005 to May 2006, the contractor was unable to complete approximately $3.3 million worth of scheduled work. The contractor reported that its incorporation of revised subcontractor budgets resulted in the fluctuations in schedule performance data prior to March 2006.
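The overrun projections discussed above follow from standard earned value relationships. The Python sketch below is illustrative only: it applies the common CPI-based estimate-at-completion formula to the figures quoted in this appendix (a contract value of roughly $360 million, 28 percent of work complete, and a cumulative cost overrun of about $5.9 million). GAO's range of $17 million to $47 million and its most likely projection of about $23 million were derived with the agency's own estimating methods, so this sketch approximates rather than reproduces that analysis.

```python
# Illustrative earned value management (EVM) calculation for the ABI contract,
# using only figures quoted in this appendix. This is a sketch of the standard
# CPI-based forecast, not GAO's actual estimating methodology.

bac = 360.0              # budget at completion, in millions of dollars (approximate contract value)
percent_complete = 0.28  # share of budgeted work completed as of May 2006
cost_variance = -5.9     # cumulative cost variance in millions (negative means overrun)

earned_value = bac * percent_complete       # budgeted cost of work performed to date
actual_cost = earned_value - cost_variance  # actual cost of work performed (CV = EV - AC)
cpi = earned_value / actual_cost            # cost performance index (below 1.0 means over cost)

eac = bac / cpi                             # estimate at completion, assuming current efficiency continues
vac = bac - eac                             # variance at completion (negative means projected overrun)

print(f"CPI: {cpi:.3f}")                               # about 0.94
print(f"Estimate at completion: ${eac:.0f} million")
print(f"Projected overrun: ${-vac:.0f} million")       # roughly $21 million with these inputs
```

A CPI-only forecast of this kind lands near the low end of the reported range; methods that also weight schedule performance, such as EAC = AC + (BAC - EV) / (CPI x SPI), generally project larger overruns when a contract is behind schedule, as ABI was during this period.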
The current inability to meet contract schedule performance could be a predictor of future rising costs, as more spending is often necessary to resolve schedule overruns. According to contractor-provided documents, the cost and schedule overruns were primarily caused by design complexity issues experienced in the development of the instrument’s detectors and the electronics design for the cryocooler, and by the unplanned time and manpower expended to resolve these issues. Other significant cost and schedule drivers include software issues on the scanner and supplier quality issues on some parts. David A. Powner, (202) 512-9286 or pownerd@gao.gov. In addition to the contact named above, Carol Cha, Neil Doherty, Nancy Glover, Kush Malhotra, Colleen Phillips, and Karen Richey made key contributions to this report.
The National Oceanic and Atmospheric Administration (NOAA) plans to procure the next generation of geostationary operational environmental satellites, called the Geostationary Operational Environmental Satellites-R series (GOES-R). This new series is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting through the year 2028. GAO was asked to (1) determine the status of and plans for the GOES-R series procurement, and (2) identify and evaluate the actions that the program management team is taking to ensure that past problems experienced in procuring other satellite programs are not repeated. NOAA is nearing the end of the preliminary design phase of its GOES-R system--which was estimated to cost $6.2 billion and scheduled to have the first satellite ready for launch in 2012. It expects to award a contract in August 2007 to develop this system. However, according to program officials, NOAA's plans for the GOES-R procurement could change in the near future. Recent analyses of the GOES-R program cost--which in May 2006 the program office estimated could reach $11.4 billion--have led the agency to consider reducing the scope of requirements for the satellite series. NOAA officials estimated that a decision on the future scope and direction of the program could be made by the end of September 2006. NOAA has taken steps to implement lessons learned from past satellite programs, but more remains to be done. Prior satellite programs--including a prior GOES series, a polar-orbiting environmental satellite series, and various military satellite programs--often experienced technical challenges, cost overruns, and schedule delays. Key lessons from these programs include the need to (1) establish realistic cost and schedule estimates, (2) ensure sufficient technical readiness of the system's components prior to key decisions, (3) provide sufficient management at government and contractor levels, and (4) perform adequate senior executive oversight to ensure mission success. NOAA has established plans to address these lessons by conducting independent cost estimates, performing preliminary studies of key technologies, placing resident government offices at key contractor locations, and establishing a senior executive oversight committee. However, many steps remain to fully address these lessons. Until it completes these activities, NOAA faces an increased risk that the GOES-R program will repeat the increased cost, schedule delays, and performance shortfalls that have plagued past procurements.
You are an expert at summarizing long articles. Proceed to summarize the following text: The DOD Joint Ethics Regulation defines ethics as standards that guide someone’s behavior based on their values—which the regulation defines as core beliefs that motivate someone’s attitudes and actions. The Joint Ethics Regulation identifies 10 primary ethical values that DOD personnel should consider when making decisions as part of their official duties. These values are: honesty, integrity, loyalty, accountability, fairness, caring, respect, promise-keeping, responsible citizenship, and pursuit of excellence. In addition to DOD’s ethical values, each of the military services has established its own core values. For example, the core values of the Navy and the Marine Corps are honor, courage, and commitment. The Air Force’s core values include integrity and service before self, and the Army’s include loyalty, honor, duty, integrity, respect, and selfless service. For the purposes of this report, we distinguish between compliance-based ethics programs and values-based ethics programs. We refer to compliance-based ethics programs as those that focus primarily on ensuring adherence to rules and regulations related to financial disclosure, gift receipt, outside employment activities, and conflicts of interest, among other things. In contrast, we use values-based ethics programs to refer to ethics programs that focus on upholding a set of ethical principles in order to achieve high standards of conduct. Values-based ethics programs can build on compliance to incorporate guiding principles such as values to help foster an ethical culture and inform decision-making where rules are not clear. Professionalism relates to the military profession, which DOD defines as the values, ethics, standards, code of conduct, skills, and attributes of its workforce. One of the military profession’s distinguishing characteristics is its expertise in the ethical application of lethal military force and the willingness of those who serve to die for our nation. While DOD’s leaders serve as the foundation and driving force for the military profession, DOD considers it the duty of each military professional to set the example of virtuous character and exceptional competence at every unit, base, and agency. There are numerous laws and regulations governing the conduct of federal personnel. The Compilation of Federal Ethics Laws prepared by the United States Office of Government Ethics includes nearly 100 pages of ethics-related statutes to assist ethics officials in advising agency employees. For the purposes of this report, we note some key laws and regulations relevant to military ethics and professionalism. The laws and regulations are complex and the brief summaries here are intended only to provide context for the issues discussed in this report. The Ethics in Government Act of 1978 as amended established the Office of Government Ethics, an executive agency responsible for providing overall leadership and oversight of executive branch agencies’ ethics programs to prevent and resolve conflicts of interest.
To carry out these responsibilities, the Office of Government Ethics ensures that executive branch ethics programs are in compliance with applicable ethics laws and regulations through inspection and reporting requirements; disseminates and maintains enforceable standards of ethical conduct; oversees a financial disclosure system for public and confidential financial disclosure report filers; and provides education and training to ethics officials. The Ethics in Government Act of 1978 also requires certain senior officials in the executive, legislative, and judicial branches to file public reports of their finances and interests outside the government, and places certain limitations on outside employment. The main criminal conflict of interest statute, Section 208 of Title 18 of the U.S. Code, prohibits certain federal employees from personally and substantially participating in a particular government matter that will affect their financial interests or the financial interests of their spouse, minor child, or general partner, among others. The Office of Government Ethics implemented this statute in Title 5 of the Code of Federal Regulations (C.F.R.) Part 2640, which further defines financial interests and contains provisions for granting exemptions and individual waivers, among other things. The Uniform Code of Military Justice establishes the military justice system and provides court-martial jurisdiction over servicemembers and other categories of personnel. Among other things, it defines criminal offenses under military law; and it authorizes commanding officers to enforce good order and discipline through the exercise of non-judicial punishment. The Office of Government Ethics issued 5 C.F.R. Part 2635, which contains standards that govern the conduct of all executive branch employees. To supplement Title 5, some agencies have issued additional employee conduct regulations, as authorized by 5 C.F.R. § 2635.105. The Office of Government Ethics also issued Part 2638, which contains the Office of Government Ethics and executive branch agency ethics program responsibilities. For example, 5 C.F.R. § 2638.602 requires an agency to file a report annually with the Office of Government Ethics covering information on each official who performs the duties of a designated agency ethics official; statistics on financial disclosure report filings; and an evaluation of its ethics education, training and counseling programs. Additionally, 5 C.F.R. § 2638.701 requires that an agency establish an ethics training program that includes an initial orientation for all employees, and annual ethics training for employees who are required to file public financial disclosure reports and other covered employees. The Joint Ethics Regulation is DOD’s comprehensive ethics policy and guidance related to the standards of ethical conduct. The regulation incorporates standards and restrictions from federal statutes, Office of Government Ethics regulations, DOD’s supplemental regulation in 5 C.F.R. Part 3601, and Executive Order 12674 to provide a single source of guidance for the department’s employees on a wide range of rules and restrictions, including issues such as post-government employment, gifts, financial disclosure, and political activities. The Joint Ethics Regulation establishes DOD’s ethics program and defines the general roles and responsibilities of the officials who manage the ethics program at the departmental and subordinate organizational levels. 
For example, the Joint Ethics Regulation requires that the head of each DOD agency assign a designated agency ethics official to implement and administer all aspects of the agency’s ethics program. This regulation also defines the roles and responsibilities of ethics counselors related to ethics program implementation and administration. The Panel on Contracting Integrity was established by DOD in 2007 pursuant to Section 813 of the John Warner National Defense Authorization Act for Fiscal Year 2007. Chaired by the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Panel consists of a cross-section of senior-level DOD officials who review the department’s progress in eliminating areas of vulnerability in the defense contracting system that allow fraud, waste, and abuse to occur, and it recommends changes in law, regulations, and policy. The Panel was due to terminate on December 31, 2009, but Congress extended the Panel’s existence until otherwise directed by the Secretary of Defense, and at a minimum through December 31, 2011. As directed, in 2007, the Panel began submitting annual reports to Congress containing a summary of the Panel’s findings and recommendations. Several of the Panel’s findings and recommendations pertain to DOD ethics. DOD has a management framework to help oversee its required ethics program, and it has initiated steps to establish a management framework to oversee its professionalism-related programs and initiatives. However, DOD has not fully addressed an internal recommendation to develop a department-wide values-based ethics program, and it does not have performance information to assess the Senior Advisor for Military Professionalism’s (SAMP) progress and to inform its decision on whether the office should be retained beyond March 2016. DOD has a decentralized structure to administer and oversee its required ethics program and to ensure compliance with departmental standards of conduct. This structure consists of 17 Designated Agency Ethics Officials positioned across the department. Each Designated Agency Ethics Official, typically the General Counsel, is appointed by the head of his or her organization, and is responsible for administering all aspects of the ethics program within his or her defense organization. This includes managing the financial disclosure reporting process, conducting annual ethics training, and providing ethics advice to employees. To assist in implementing and administering the organization’s ethics program, each Designated Agency Ethics Official appoints ethics counselors. Attorneys designated as ethics counselors support ethics programs by providing ethics advice to the organization’s employees, among other things. Within the military departments, the Judge Advocate Generals provide ethics counselors under their supervision with legal guidance and assistance and support all aspects of the departments’ ethics programs. The DOD Standards of Conduct Office (SOCO), on behalf of the DOD General Counsel, administers the ethics program for the Office of the Secretary of Defense and coordinates component organization ethics programs. SOCO is responsible for developing and establishing DOD-wide ethics rules and procedures and for promoting consistency among the component organizations’ ethics programs by providing information, uniform guidance, ethics counselor training, and sample employee training materials.
According to the Joint Ethics Regulation, the DOD General Counsel is responsible for providing SOCO with sufficient resources to oversee and coordinate DOD component organization ethics programs. The DOD General Counsel also represents DOD on matters relating to ethics policy. DOD has taken steps toward developing a values-based ethics program but has not fully addressed the recommendation of the Panel on Contracting Integrity to develop a department-wide values-based ethics program. For instance, DOD has taken steps such as conducting a department-wide survey of its ethical culture and a study of the design and implementation of such a program. DOD also began delivering values-based ethics training annually in 2013 to select personnel. In 2008, the Panel on Contracting Integrity recommended in its report to Congress that DOD develop a department-wide values-based ethics program to complement its existing rules-based compliance program managed by SOCO. The report noted that while SOCO had been effective in demanding compliance with set rules, the ethics program may have provided the false impression that promoting an ethical culture was principally the concern of the Office of General Counsel, when integrity is a leadership issue, and therefore everyone's concern. In 2010, the Panel also noted that an effective values-based ethics program, as evidenced by the many robust programs employed by DOD contractors, cannot be limited to educating DOD leadership; rather, it must be aimed at promoting an ethical culture among all DOD employees. The Panel's recommendation was based in part on the Defense Science Board's 2005 finding that while DOD had in place a number of pieces for an ethically grounded organization, it lagged behind best-in-class programs in creating a systematic, integrated approach and in demonstrating the leadership necessary to drive ethics to the forefront of organizational behavior. The Panel reiterated its recommendation for a department-wide values-based program in its 2009 and 2010 reports to Congress. In response to the Panel's recommendation, DOD contracted for a 2010 survey and a 2012 study to assess DOD's ethical culture and to design and implement a values-based ethics program, respectively. The 2010 survey assessed various dimensions of ethical behavior, including the level of leadership involvement in the ethics program and the extent to which employees perceive a culture of values-based ethics and are recognized and rewarded for ethics excellence. The survey report found that DOD's overall ethics score was comparable to that of other large federal government organizations, but it advocated a values-based approach to address ethical culture weaknesses. For example, the survey report stated that employees believe that DOD rewards unethical behavior to an extent that is well above average; employees fear retribution for reporting managerial/commander misconduct to an extent that is well above average; and the number of employees who acknowledge regularly receiving ethics information and training is comparatively low. The 2012 study reinforced the need for a department-wide values-based ethics program—noting that DOD lagged behind common practices, among other things—and made 14 recommendations related to establishing such a program. 
Notably, these recommendations included developing an independent Office of Integrity and Standards of Conduct; adopting a set of core values representing all of DOD; conducting annual core values training for all DOD employees; and periodically measuring program effectiveness. In 2013, the Panel on Contracting Integrity issued a memorandum to SOCO stating that, after reviewing the 2012 study's recommendations, SOCO was better positioned than the Panel to implement the study's recommendations. In 2013, SOCO partially implemented 1 of the study's 14 recommendations by annually delivering values-based ethics training to DOD financial disclosure filers—who are required to receive annual ethics training—as well as other select military and civilian personnel. This training emphasizes DOD and military service core values such as honor, courage, and integrity; highlights cases of misconduct; discusses ethical decision-making; and features senior-leader involvement in presentations to emphasize its importance. In 2014, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed that all acquisition workforce personnel also complete this training annually to reinforce the importance of ethical decision-making. SOCO officials stated that they encourage all DOD organizations to administer this values-based annual ethics training and to extend this training to other personnel not required to receive mandatory annual ethics training. In 2014, DOD reported that about 146,000 department personnel received annual ethics training. We estimate that this represents about 5 percent of DOD's total workforce. The Federal Sentencing Guidelines, a key source of guidance often used in developing effective ethics programs, encourage organizations to train all employees periodically on ethics. Similarly, DOD's 2012 study recommended mandatory annual training on integrity and ethics for all DOD employees, and the 2008 Panel report stated that an effective values-based ethics program must be aimed at promoting an ethical culture among all DOD employees. Several of the DOD, foreign military, and industry organizations we spoke with cited the importance of training to convey information about ethics. For example, SOCO officials stated that positive feedback from the initial values-based training rollout in 2013 influenced their decision to continue with this format in 2014, while officials from the SAMP office stated that employees need to be reminded of ethics periodically, and that senior leadership should be retrained continuously on ethics rules. Additionally, officials from each of the four industry and foreign military organizations we contacted stated that ethics training within their organizations was either mandatory for all employees on a periodic basis or available to all employees in one or more formats. As noted above, SOCO encourages DOD organizations to administer values-based annual training to non-mandatory personnel, but neither SOCO nor the military departments have assessed the feasibility of expanding this training to additional personnel. 
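For context, a rough check of that estimate (the workforce denominator below is an assumption based on publicly reported DOD personnel levels of roughly 2.9 million active duty, reserve, and civilian personnel in fiscal year 2014; it is not a figure reported in this document):

\[
\frac{146{,}000\ \text{personnel trained}}{\approx 2{,}900{,}000\ \text{total personnel}} \approx 0.05, \ \text{or about 5 percent.}
\]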
A SOCO official stated that annual training could be expanded to a larger group of employees, potentially on a periodic instead of an annual basis, but that any decision to appreciably expand ethics training would have to consider factors such as associated costs related to the time and effort for leaders and ethics counselors to conduct training, employee hours to take training, and administrative support time to track compliance with the training requirement. This SOCO official also noted that the Army required face-to-face annual ethics training for all employees from approximately 2002 through 2006 but subsequently eliminated the requirement because of the resource burden and the concern that training was not needed for most enlisted personnel and junior officers. Our work on human capital states that agencies should strategically target training to optimize employee and organizational performance by considering whether expected costs associated with proposed training are worth the anticipated benefits over the short and long terms. Without considering such factors in an assessment of the feasibility of expanding mandatory annual values-based ethics training to a greater number of DOD employees, the department may be limited in its ability to properly target this training, and therefore may be missing opportunities to promote and enhance DOD employees' familiarity with values-based ethical decision-making. With respect to the other 13 recommendations from the 2012 study, SOCO officials stated that they do not plan to take further action. These officials also stated that they have not formally responded to the Panel's original recommendation to develop a values-based ethics program or its subsequent memorandum. SOCO officials expressed support for developing a values-based ethics program provided that such a program were properly resourced and focused on substantive issues instead of process. Similarly, officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics stated that the department would benefit from the creation of such a program, and stressed the need for senior leaders to be involved in promoting awareness of ethical issues. However, SOCO officials stated that the Panel and 2012 study recommendations were not binding, and that SOCO—which is staffed by five attorneys—would not be optimally positioned to develop a department-wide program. These officials also stated that implementing all of the study's other 13 recommendations was neither feasible nor advisable, and they cited existing practices as being consistent with some of the study's recommendations. For example: The study's recommendation to move SOCO from under the Office of General Counsel and rebrand it as an independent Office of Integrity and Standards of Conduct was not possible because ethics counselors are required to be attorneys, according to the Joint Ethics Regulation, and must therefore remain under the supervision of the DOD General Counsel in order to provide the legal advice that the department and its personnel require. The study's recommendation to create a direct link between senior leadership and the Secretary of Defense on ethics and professionalism matters is addressed, in part, by the SAMP position that was created in March 2014. 
However, as discussed later in this report, if DOD decides not to renew this position or retain its functions beyond March 2016, DOD will lose its direct link between senior leadership and the Secretary of Defense on ethics and professionalism matters. Both SAMP and SOCO officials stated that there is an enduring need for such a link or the functions performed by the SAMP office, and officials from three of the four industry and foreign military organizations we contacted stated that their organization had in place a direct link to senior leadership on ethics-related matters. The study's recommendation to assess and mitigate ethical culture and compliance risk is consistent with SOCO's current practice of informally reviewing misconduct reports and survey results, conducting ethics program reviews, consulting ethics officials, and factoring perceived trends into training plans and appropriate ethics guidance and policy. Federal internal control standards emphasize the need for managers to respond to findings and recommendations from audits and reviews and to complete all actions that correct or otherwise resolve the matters brought to management's attention within established timeframes, or alternatively to demonstrate that actions are unwarranted. However, DOD has not identified actions or established timeframes for fully responding to the Panel's recommendation or the 2012 study's other 13 recommendations; nor has it informed the Panel that it plans to take no further action. While not binding, the Panel's recommendation to establish a department-wide values-based ethics program represents a need identified by senior leaders from across the department. Without identifying actions DOD intends to take, with timeframes, to address the Panel's recommendation, including the study's other 13 recommendations, or demonstrating that further action is unwarranted, the department does not have assurance that the identified need for a values-based ethics program has been addressed. In March 2014, the Secretary of Defense reaffirmed the previous Secretary's prioritization of professionalism as a top concern for DOD's senior leadership by establishing the office of the SAMP, headed by a Navy Rear Admiral (Upper Half), which reports directly to the Secretary of Defense. The SAMP position was established for a 2-year term, with an option to renew, and it is supported by an independent office consisting of six permanent staff members composed variously of Air Force, Army, Navy, Marine Corps, and Army National Guard Lieutenant Colonels, Colonels, Commanders, and Captains, and one contract employee who provides administrative support. SAMP officials stated that they were unclear about the rationale behind the initial 2-year term. The office is embedded in the Office of the Under Secretary of Defense for Personnel and Readiness, and it has been fully staffed since July 2014. The purpose of the SAMP office is to coordinate and ensure the integration of the department's ongoing efforts to improve professionalism, and to make recommendations to senior DOD leadership that complement and enhance such efforts. The office primarily interacts with senior DOD leadership through the Senior Leadership Forum on Military Professionalism, which meets every 5 weeks and is composed of the Secretary of Defense, military service secretaries and chiefs, and the DOD General Counsel, among others. 
The office supports this forum by promulgating an agenda, raising issues for discussion and decision, and briefing leadership on relevant department-wide activities. Recent department-wide activities have been wide-ranging, and include (1) 13 character development initiatives for general and flag officers; (2) a review of ethics content in professional military education; and (3) the development of tools, such as command climate and 360-degree assessments, that can be used to identify and assess ethics-related issues. These and various other initiatives and senior-level communications directed by the President, the Secretary of Defense, and Congress are intended to enhance DOD’s ethical culture and to emphasize the importance of ethics and professionalism to departmental personnel. A timeline of key ethics and professionalism events and communications since 2007 is shown in appendix II. In September 2014, the SAMP office developed a plan outlining its major tasks across three phases: (1) assess the state of the profession, (2) strengthen and sustain professional development, and (3) foster trust through transparent accounting of efforts. Tasks across each respective phase include conducting a survey to assess DOD’s ethical culture; identifying tools for individual professional development and evaluation; and developing an annual report card that highlights trends, best practices, and underperforming professionalism-related programs. DOD does not have timelines or performance measures to assess SAMP’s progress and to inform its decision on whether the SAMP position should be retained. Our work on strategic planning has found that leading practices, such as developing detailed plans outlining major implementation tasks and defining measures and timelines to assess progress, contribute to effective and efficient operations. Additionally, leading organizations that have progressed toward results-oriented management use performance information as a basis for making resource allocation decisions, planning, budgeting, and identifying priorities. The SAMP office has taken steps toward implementing its major tasks, but DOD does not have key performance information to help inform the decision as to whether the SAMP position should be retained beyond its initial 2-year term—which is set to expire in March 2016. The SAMP office has drafted a white paper exploring the relationship between the military profession and the military professional, developed a catalogue documenting tools that can be used to assess ethics-related issues, and initiated steps to update the 2010 department-wide survey of DOD’s ethical culture. In addition, the SAMP office has canvassed the military services to identify service-level initiatives for civilian personnel that are similar to the 13 general and flag officer initiatives, conducted sessions with senior officers to identify areas of interest to senior leadership, and begun to partner with academic institutions to pursue research related to utilizing behavioral science and neuroscience to address issues of ethics, character, and competence in the military. While the SAMP office has taken steps toward completing its major tasks, it has not defined timelines or measures to (1) assess its progress or impact; (2) determine whether it has completed its major tasks; or (3) help inform the decision on whether its initial 2-year term should be renewed. 
SAMP officials stated that while the office has not defined timelines or measures, they believe that the office’s activities should help to establish self-perpetuating professionalism efforts within the military services. SAMP officials stated that such efforts within the services may somewhat diminish the need for SAMP, but these same officials also noted that the work of the office will remain necessary and that its function should exist beyond the initial 2-year term because building and sustaining an ethical culture and professionalism capacity constitute a continuous effort at every grade level. They added that the Secretary of Defense will also continue to need a mechanism for looking across the services, working with other countries, and influencing departmental policies. The need for senior-level oversight of professionalism or ethics issues also was cited by other DOD, industry, and foreign military organizations we contacted. For example, SOCO officials expressed support for maintaining the SAMP position or function beyond the initial 2-year period, stating that there is enduring value in having an office like SAMP because it provides a sense of permanence to ethics and professionalism and will help institutionalize related improvement efforts. Similarly, as previously stated, officials from three of the four industry and foreign military organizations we contacted stated that their organization had in place a direct link to senior leadership on ethics-related matters. Without timelines or measures to assess the office’s progress, DOD does not have performance information for determining whether SAMP’s efforts are on track to achieve desired outcomes, and the department may find it difficult to determine the future of the office and its function. Further, DOD will not be positioned to assess whether SAMP is the appropriate vehicle to achieve these outcomes or how best to allocate resources within the department to achieve them. DOD has identified a number of mandatory and optional tools that defense organizations can use to identify and assess individual and organizational ethics and professionalism issues. However, two key tools—command climate and 360-degree assessments—have not been fully implemented in accordance with statutory requirements and departmental guidance, and DOD has not yet developed performance metrics to measure its progress in addressing ethics-related issues. DOD has identified several climate, professional development, and psychometric tools that can be used to identify and assess individual and organizational ethics-related issues. Climate tools are designed to assess opinions and perceptions of individuals within an organization, and they include instruments such as surveys. Professional development tools include a range of self-and-peer assessment instruments that are designed to provide individuals with feedback on their development. Psychometric tools include instruments such as the Navy’s Hogan Insights, which are designed to provide a holistic behavioral review of an individual, and are generally used to assess and identify individual behavior and personality traits. The SAMP office is completing an inventory of climate, professional development, and psychometric tools that are used across the department to enhance interdepartmental visibility of these tools and to promote best practices. SAMP officials stated that while these tools could be used to assess ethics-related issues, none of the tools were designed exclusively for that purpose. 
Figure 1 shows some of the tools identified by the SAMP office that could be used to identify and assess individual and organizational ethics-related issues. Officials from the SAMP office and from each of the military services have cited command climate assessments and 360-degree assessments as the department's primary tools that could be used for identifying ethics-related issues. Command climate assessments are designed to assess elements that can impact an organization's effectiveness such as trust in leadership, equal opportunity, and organizational commitment. These assessments can include surveys, focus groups, interviews, records of analyses, and physical observations. The command climate assessment's main component is a survey administered online by the Defense Equal Opportunity Management Institute. Survey results, which are provided to the unit commander, include a detailed analysis of unit results in comparison to other units within the organization. In addition, 360-degree assessments are a professional developmental tool that allows individuals to gain insights on their character traits by soliciting feedback about their work performance from superiors, subordinates, and peers. A variety of 360-degree assessments are used across the department to enable different levels of personnel to obtain such feedback. For example, the Army conducts three different 360-degree assessments under the Multi-Source Assessment Feedback Program, which are targeted toward officers (Brigadier General and below), non-commissioned officers, and civilian leaders. SAMP officials stated that while none of these tools is specifically designed to assess ethics issues, the office is investigating whether a combination of them can be used to provide a more holistic picture of ethical behavior, and exploring what might be gained by sharing data captured by these tools across the department. The military services have issued guidance to implement command climate assessments, but the Army, the Air Force, and the Marine Corps do not have assurance that they are in compliance with all statutory requirements because their guidance does not fully address implementing and tracking requirements. In addition, the Army's and the Navy's guidance do not fully address DOD guidance related to the size of the units required to complete command climate assessments. The National Defense Authorization Act for Fiscal Year 2014 contains requirements related to (1) tracking and verifying that commanders are conducting command climate assessments, (2) disseminating results to the next higher level command, and (3) recording the completion of command climate assessments in commanders' performance evaluations. As shown in table 1, the Navy has developed guidance that addresses all four of the requirements in the Fiscal Year 2014 National Defense Authorization Act, but the Army's, the Air Force's, and the Marine Corps' guidance do not fully address two of the four requirements that relate to recording in the performance evaluations of a commander whether the commander has conducted a command climate assessment. As table 1 shows, all of the military services' guidance addresses section 587(a) of the authorization act, which requires that the results of command climate assessments be provided to the commander and to the next higher level command, as well as section 1721(d), which requires that the military departments track and verify whether commanding officers have conducted a command climate assessment. 
In addition to complying with these requirements, the Army, the Air Force, and the Navy also have command climate assessments reviewed above the next highest level. For example, Navy officials stated that their command climate assessment results are aggregated, analyzed, and reported to Navy leadership annually to inform service policy and training. With respect to sections 587(b) and 587(c) of the authorization act, the Navy’s guidance addresses these sections, but the Army’s, the Air Force’s, and the Marine Corps’ respective guidance do not. For example, the Army’s performance evaluation process requires that raters assess a commander’s performance in fostering a climate of dignity and respect, and in adhering to the requirements of the Army’s Sexual Harassment/Assault Response and Prevention Program, which requires that command climate assessments be conducted. However, this program does not specifically require that commanders include a statement in their performance evaluations as to whether they conducted an assessment, or that failure to do so be recorded in their performance evaluation. In addition, not all of the military services’ guidance fully meets DOD guidance. Specifically, in July 2013, the Acting Under Secretary of Defense for Personnel and Readiness issued a memorandum requiring the secretaries of the military departments to establish procedures in their respective operating instruction and regulations related to the implementation of command climate assessments. Among other things, the guidance addresses the size of units for conducting command climate assessments and the dissemination of assessment results. In response to this guidance, each of the military services has developed written guidance. As shown in table 2, the Air Force’s and the Marine Corps’ guidance address all command climate guidance in the Under Secretary’s memorandum, while the Army’s and the Navy’s guidance do not require that units of fewer than 50 servicemembers shall be surveyed with a larger unit in the command to ensure anonymity and to provide the opportunity for all military personnel to participate in the process, as laid out in the memorandum. Without requiring that commanders include a statement in their performance evaluations about whether they have conducted a command climate assessment, and requiring that the failure of a commander to conduct a command climate assessment be noted in the commander’s performance evaluation, the Army, the Air Force, and the Marine Corps will not be complying with the mandated level of accountability Congress intended during the performance evaluation process. Additionally, without requiring organizations of fewer than 50 servicemembers to be surveyed with a larger unit, the Army and the Navy may be unable to ensure that all unit members are able to participate anonymously in command climate surveys as intended by DOD guidance. The development and use of 360-degree assessments for general and flag officers vary across the military services and the Joint Staff, and they do not cover all intended military personnel. Specifically, the 2013 General and Flag Officer Character Implementation Plan memorandum states that 360-degree assessments would be developed and used for all military service and Joint Staff general and flag officers, and a November 2013 memorandum issued by the Chairman of the Joint Chiefs of Staff to the President reiterates the department’s commitment to developing and implementing 360-degree assessments for all general and flag officers. 
The Air Force and the Army have developed and implemented 360-degree assessments for all of their general officers, but the Navy, the Marine Corps, and the Joint Staff have developed and implemented 360-degree assessments only for certain general and flag officers. Table 3 shows the extent to which the military services and the Joint Staff have developed and implemented 360-degree assessments for their general and flag officers. The Navy, the Marine Corps, and the Joint Staff cited different reasons for developing and implementing 360-degree assessments only for certain general and flag officers. For example, in 2013, the Navy required new flag officers promoted to the Rear Admiral (Lower Half) rank, as well as Rear Admiral (Lower Half) selects, to complete 360-degree assessments. A Navy official stated that expanding 360-degree assessments to include all Navy flag officers would incur significant costs, particularly with regard to the cost of specially trained personnel to coach individuals on how to respond to the results of their 360-degree assessments. Similarly, officials from the SAMP office and Joint Staff cited coaching as a driver of costs for 360-degree assessments. A RAND study released on behalf of DOD in April 2015 also noted that 360-degree assessments are resource-intensive to design, implement, and maintain. Due to the costs associated with expanding 360-degree assessments and other concerns, such as the value of the feedback elicited by the tool, the Navy is investigating other tools and techniques that can provide critical self-assessment for its personnel. For example, Navy officials stated they are using a similar tool—the Hogan Assessment—as part of a Command Leadership Course for some prospective commanding officers. According to Marine Corps officials, in 2014, two general officers from the Marine Corps participated in a Joint Staff 360-degree assessment pilot program. These officials stated that there are no plans to expand the program to include Marine Corps general officers not assigned to the Joint Staff because Marine Corps senior officials are satisfied with the flexibility and feedback that the Joint Staff pilot provides, and because the Marine Corps also uses the Commandant's Command Survey, which similarly focuses on the climate and conduct of leaders and commanders. In October 2014, following its pilot, the Joint Staff initiated 360-degree assessments for one and two star general and flag officers to occur at 6 months and 2 years after assignment to the Joint Staff. In July 2015, the Joint Staff issued guidance requiring that Joint Staff three star general and flag officers, civilian senior executives, and one, two, and three star general and flag officers at the combatant commands complete 360-degree assessments. Joint Staff officials stated that 360-degree assessments are not used at the four star rank because at that level the peer and superior populations are significantly smaller, creating a greater possibility of assessor survey fatigue and concerns about anonymity. Further, Joint Staff officials stated that four star level officers already conduct command climate surveys that allow everyone within their unit or organization to assess the leader and organization. 
While the Navy, the Marine Corps, and the Joint Staff cited varying reasons for implementing 360-degree assessments only for certain general and flag officers, the inconsistent implementation of this tool across the department denies a number of senior military leaders valuable feedback on their leadership skills and an opportunity for developing an understanding of personal strengths and areas for improvement. Taking into account the military services' and the Joint Staff's differing reasons, including costs, for implementing 360-degree assessments only for certain general and flag officers, DOD may benefit from reassessing the need and feasibility of developing and implementing 360-degree assessments for all general and flag officers. Federal internal control standards emphasize the importance of assessing performance over time, but DOD is unable to determine whether its ethics and professionalism initiatives are achieving their intended effect because it has not yet developed metrics to measure the department's progress in addressing ethics and professionalism issues. In 2012, we reported that federal agencies engaging in large projects can use performance metrics to determine how well they are achieving their goals and to identify any areas for improvement. By using performance metrics, decision makers can obtain feedback for improving both policy and operational effectiveness. Additionally, by tracking and developing a baseline for all measures, agencies can better evaluate progress made and whether or not goals are being achieved—thus providing valuable information for oversight by identifying areas of program risk and their causes to decision makers. Through our body of work on leading performance management practices, we have identified several attributes of effective performance metrics (see table 4). SAMP officials stated that they recognize the need to continually measure the department's progress in addressing ethics and professionalism, and are considering ways to do so; however, challenges exist. For example, the SAMP office plans to update the 2010 ethics survey by administering a department-wide ethics survey in 2015 to reassess DOD's ethical culture. SAMP officials stated that they expect the new survey to yield valuable information on DOD's ethical culture, but they have not identified metrics to assess DOD's ethical culture. Additionally, SAMP officials stated that they plan to modify questions from the 2010 survey to lessen its focus on acquisition-related matters, and to collect new information. While modifying the questions from the 2010 survey may improve DOD's understanding of its ethical climate, doing so could limit DOD's ability to assess trends against baseline (2010) data. Moreover, DOD's ability to assess trends in the future may also be affected by uncertainty as to whether the survey will be administered beyond 2015. SAMP officials attributed this uncertainty, in part, to survey fatigue within the department—a factor cited by SAMP officials that could also affect the response rate for the 2015 survey, and therefore limit the utility of the survey data. To combat this challenge, the SAMP office is considering merging the ethics survey with another related survey, such as the sexual assault prevention and response survey. According to SAMP officials, the Under Secretary of Defense for Personnel and Readiness has established a working group to address survey fatigue within the department. 
SAMP officials stated that they have also considered using misconduct report data to assess the department's ethical culture, but that interpreting such data can be challenging. For example, a reduction in reports of misconduct could indicate either fewer occurrences or a decrease in reporting—the latter of which could be induced by concerns over retribution for reporting, officials stated. Additionally, our review found that the department's ability to assess department-wide trends in ethical behavior is limited because misconduct report data are not collected in a consistent manner across DOD. Specifically, DOD organizations define categories of misconduct differently, thereby precluding comparisons of misconduct data across different organizations, as well as aggregate-level analysis of department-wide data. To address this challenge, the DOD Office of Inspector General is developing common definitions to standardize the collection of misconduct report data across the department. DOD Office of Inspector General officials estimated that the definitions will be finalized in 2016. Because of such challenges, SAMP officials are considering certain activities, such as increased focus on ethics-related matters by DOD senior leadership, to be indicators of progress. Our work on performance management has found that intermediate goals and measures such as outputs or intermediate outcomes can be used to show progress or contribution to intended results. For instance, when it may take years before an agency sees the results of its programs, intermediate goals and measures can provide information on interim results to allow for course corrections. Also, when program results could be influenced by external factors beyond agencies' control, they can use intermediate goals and measures to identify the program's discrete contribution to a specific result. Our review found that various mechanisms were used by the industry and foreign military organizations we contacted to assess ethical culture, with officials from all four industry and foreign military organizations stating that their organization had used one or more tools to assess the ethical culture of their organizations. For example, one of the foreign military organizations we contacted administers a survey periodically to both civilian and military personnel to measure the organization's ethical culture against a baseline that was established in 2003. SAMP officials similarly stated that a variety of data sources—including organizational, survey, attitudinal, behavioral, and perception of trust data—should be used to assess DOD's ethical culture. However, without identifying specific sources, DOD will not have the information necessary to assess its progress. Moreover, without establishing clear, quantifiable, and objective metrics that include a baseline assessment of current performance to measure progress, or intermediate or short-term goals and measures, decision makers in DOD and Congress will find it difficult to determine whether the department's ethics and professionalism initiatives are on track to achieve desired outcomes. Maintaining a workforce characterized by professionalism and commitment to ethical values is key to executing DOD's mission to protect the security of the nation; limiting conduct that can result in misuse of government resources; and maintaining servicemember, congressional, and public confidence in senior military leadership. 
As recent cases of misconduct demonstrate, ethical and professional lapses can carry significant operational consequences, waste taxpayer resources, and erode public confidence. Since 2007, DOD has taken significant steps to improve its ethical culture, for instance by conducting a department-wide ethics survey and follow-on study. The department has also acted to enhance oversight of its professionalism-related initiatives and issues, for example through creating the SAMP office. However, its overall effort could be strengthened by taking a number of additional steps. In particular, without fully considering the Panel on Contracting Integrity’s recommendation to create a values-based ethics program and the subsequent 2012 study recommendations, as well as assessing the feasibility of expanding annual values-based ethics training beyond the current mandated personnel, DOD will not have assurance that it is doing enough to promote an ethical culture, and it may face challenges in identifying areas for future action. Similarly, without performance information, including timelines and measures, DOD will not be optimally positioned to determine whether the SAMP—a key oversight position—should be renewed after its initial 2-year term, or to assess the SAMP office’s progress. At the military service level, further actions also could improve oversight of ethics and professionalism-related issues for senior leaders. For instance, without revising current guidance to comply with statutory requirements and departmental guidance and assure that commanders are conducting command climate assessments, the Army, the Air Force, the Navy, and the Marine Corps will be unable to discern whether commanders are obtaining feedback on their performance and promoting an effective culture. Furthermore, without examining the need for and feasibility of implementing 360-degree assessments for all general and flag officers, the Navy, the Marine Corps, and the Joint Staff will not have information that could enhance individual ethics and professional values. Finally, given the initiatives that DOD is planning and has under way, it is important that there be reliable means by which to gauge progress. Without identifying information sources and developing intermediate goals and performance metrics that are clear, quantifiable, and objective—and that are linked to an identified baseline assessment of current performance—decision makers in DOD and Congress will not have full visibility into the department’s progress on professionalism-related issues. As the department realigns itself to address new challenges, a sustained focus on ethics and professionalism issues will contribute to fostering the ethical culture necessary for DOD to carry out its mission. We recommend that the Secretary of Defense take the following six actions: 1. To promote and enhance familiarity with values-based ethical decision-making across the department, direct appropriate departmental organization(s), in consultation with the Office of General Counsel and the SAMP or its successor organization(s), to assess the feasibility of expanding annual values-based ethics training to include currently non-mandatory recipients. 2. 
To ensure that the need for a department-wide values-based ethics program has been addressed, direct appropriate departmental organization(s), in consultation with the Office of General Counsel, to identify actions and timeframes for responding to the Panel on Contracting Integrity recommendation, including the 14 related 2012 study recommendations, or alternatively demonstrate why additional actions are unwarranted. 3. To help inform decision makers on the SAMP's progress as well as the decision regarding the extension of the SAMP's term, direct the SAMP to define timelines and measures to assess its progress in completing its major tasks. 4. To increase assurance that commanders are conducting command climate assessments in accordance with statutory requirements and departmental guidance, direct the Secretaries of the Air Force, the Army, and the Navy, and the Commandant of the Marine Corps to modify existing guidance or develop new guidance to comply with requirements set forth in the Fiscal Year 2014 National Defense Authorization Act and internal DOD guidance. 5. To better inform the department's approach to senior officers' professional development, direct the Secretary of the Navy, the Commandant of the Marine Corps, and the Chairman of the Joint Chiefs of Staff to assess the need for and feasibility of implementing 360-degree assessments for all general and flag officers. 6. To improve DOD's ability to assess its progress in addressing ethics and professionalism issues, direct the SAMP, through the Under Secretary of Defense for Personnel and Readiness, or SAMP's successor organization(s), to identify information sources and develop intermediate goals and performance metrics. At a minimum, these performance metrics should be clear, quantifiable, and objective, and they should include a baseline assessment of current performance. We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with comments on three of our six recommendations, partially concurred with two recommendations, and did not concur with one recommendation. DOD's comments are summarized below and reprinted in appendix III. DOD also provided technical comments on the draft report, which we incorporated as appropriate. DOD concurred with comment on our first, second, and sixth recommendations, which relate to annual values-based ethics training, a department-wide values-based ethics program, and performance metrics, respectively. With regard to the first and sixth recommendations, DOD stated that the SAMP is a temporary office established by Secretary Hagel with a term ending no later than March 2016. As noted in our report, the SAMP office was established in March 2014 for an initial 2-year term, with an option to renew. Because the future of the SAMP office had not been determined at the time of this review, we directed these recommendations toward the SAMP or its successor organization(s). In its comments on our second recommendation, for DOD to respond to the Panel on Contracting Integrity recommendation, including the 14 related 2012 study recommendations, or alternatively to demonstrate why actions are unwarranted, DOD raised concerns regarding whether we are endorsing the 2012 study's recommendations. We are not endorsing them. Our recommendation is for DOD to fully consider the Panel on Contracting Integrity's recommendation and the subsequent 2012 study recommendations. 
If DOD does not believe such a program or the actions recommended by the 2012 study are warranted, then it should demonstrate why additional actions are unwarranted. Without fully considering the Panel’s recommendation, including the 2012 study recommendations, DOD will not have assurance that it is doing enough to promote an ethical culture. In addition, DOD voiced concern that the statement in the draft report that SOCO officials “do not plan to take any further action” with respect to the remaining 13 recommendations from the Phase II study could be misunderstood to imply that SOCO is unwilling to consider additional values-based ethics program initiatives. DOD elaborated that SOCO embraces values-based ethics training and other initiatives. DOD added that, as noted elsewhere in the report, DOD has practices in place that are consistent with a number of the recommendations in the Phase II study, and that SOCO is most receptive to assessing and recommending implementation of additional measures where appropriate and feasible. As noted in our report, in 2013, SOCO partially implemented 1 of the study’s 14 recommendations by annually delivering values-based ethics training to select military and civilian personnel. In addition, SOCO cited existing practices as being consistent with some of the study’s remaining 13 recommendations. However, SOCO officials told us that they do not plan to take further action, and that the Panel and 2012 study recommendations were not binding. These officials also stated that implementing all of the study’s remaining 13 recommendations was neither feasible nor advisable. We continue to believe that without identifying actions DOD intends to take, with timeframes, to address the Panel’s recommendation, including the study’s other 13 recommendations, or demonstrating that further action is unwarranted, the department does not have assurance that the identified need for a values-based ethics program has been addressed. DOD partially concurred with our fourth recommendation, that the Air Force, the Army, the Navy, and the Marine Corps modify existing guidance or develop new guidance to comply with requirements set forth in the National Defense Authorization Act for Fiscal Year 2014 and internal DOD guidance, to increase assurance that commanders are conducting command climate assessments in accordance with these statutory requirements and departmental guidance. In its comments, DOD stated that the Army’s performance evaluation process requires that raters assess a commander’s performance in fostering a climate of dignity and respect, thereby in DOD’s view satisfying the National Defense Authorization Act’s requirement that commanders must include a statement in their performance evaluations as to whether or not they conducted an assessment. In addition, DOD commented that although DOD guidance calls for organizations of fewer than 50 servicemembers to be surveyed with a larger unit, Army guidance calls for command climate surveys to be conducted at the company level and states that units of between 30 and 50 personnel may conduct their surveys separately or together with another unit, at the commander’s discretion; and that, since the survey response rate is sufficiently high (58 percent), the Army can survey organizations with fewer than 50 servicemembers. Therefore, DOD believes that the Army meets the intent of departmental guidance for command climate survey utilization. 
As noted in our report, the Army’s Sexual Harassment/Assault Response and Prevention Program requires that command climate assessments be conducted. However, this program does not specifically require that commanders include a statement in their performance evaluations as to whether they conducted an assessment, or that failure to do so be recorded in their performance evaluation, as required by the National Defense Authorization Act for Fiscal Year 2014. Therefore, we continue to believe that without requiring that commanders include a statement in their performance evaluations about whether they have conducted a command climate assessment, and requiring that the failure of a commander to conduct a command climate assessment be noted in the commander’s performance evaluation, the Air Force, the Army, and the Marine Corps will not be complying with the mandated level of accountability that Congress intended during the performance evaluation process. In addition, as noted in the report DOD guidance requires that organizations of fewer than 50 servicemembers shall be surveyed with a larger unit in the command to ensure anonymity and provide the opportunity for all military personnel to participate in the process. We continue to maintain that, regardless of the survey response rate, without requiring organizations of fewer than 50 servicemembers to be surveyed with a larger unit, the Army may be unable to ensure that all unit members are able to participate in command climate surveys, and to do so anonymously, as intended by DOD guidance. DOD partially concurred with our fifth recommendation, that the Navy, the Marine Corps, and the Joint Chiefs of Staff assess the need and feasibility of implementing 360-degree assessments for all general and flag officers, to better inform the department’s approach to senior officers’ professional development. In its comments, DOD stated that it concurs with the recommendation to assess the need for and feasibility of implementing 360-degree assessments, or 360-degree-like feedback assessments, where they are not already being performed. However, DOD stated that it does not believe it should assess the need and feasibility of implementing this tool for all general and flag officers, but rather only for three star ranks and below. As noted in our report, the 2013 General and Flag Officer Character Implementation Plan memorandum states that 360-degree assessments would be developed and used for all military service and Joint Staff general and flag officers, and a November 2013 memorandum issued by the Chairman of the Joint Chiefs of Staff to the President reiterates the department’s commitment to developing and implementing 360-degree assessments for all general and flag officers. The Air Force and the Army have developed and implemented 360-degree assessments for all of their general officers. However, as noted in the report, the Navy, the Marine Corps, and the Joint Staff have developed and implemented 360-degree assessments only for certain general and flag officers, citing varying reasons, including costs, for doing so. We continue to believe that, given the inconsistency of the implementation of this tool across the department, DOD may benefit from reassessing the need for and feasibility of developing and implementing 360-degree assessments for all general and flag officers. 
Further, we continue to maintain that such a reassessment would support the department's approach to senior officers' professional development by increasing and improving the consistency of the information provided to leadership. DOD did not concur with our third recommendation, that the SAMP define timelines and measures to assess its progress in completing its major tasks, in order to help inform decision makers on the SAMP's progress as well as the decision regarding the extension of the SAMP's term. In its written comments, DOD stated that the department will submit its Fiscal Year 2015 National Defense Authorization Act report on military programs and controls regarding professionalism to Congress on September 1, 2015, thereby satisfying the requirements of this recommendation. Although DOD states that the intent of our recommendation will be satisfied by the September 1, 2015, report to Congress, we have not been provided a copy of the draft report and cannot determine whether the report will include timelines and measures. Further, while DOD stated that SAMP's dissolution will occur in March 2016, a formal decision has not yet been made. As we discussed in our report, DOD officials stated that there is an enduring need for the work and functions of the SAMP office because, among other things, building and sustaining an ethical culture and professionalism capacity constitute a continuous effort at every grade level, and because of the importance of having a direct link between senior leadership and the Secretary of Defense on ethics and professionalism matters. The intent of our recommendation is to help equip decision makers with the information necessary to assess SAMP's progress and thereby determine next steps regarding its future. We continue to believe that without timelines or measures to assess the office's progress, DOD will not be positioned to assess whether SAMP is the appropriate vehicle to achieve these outcomes, or how best to allocate resources within the department to achieve them. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Secretaries of the Military Departments; and the Commandant of the Marine Corps. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To evaluate the extent to which the Department of Defense (DOD) has developed and implemented a management framework to oversee its programs and initiatives on professionalism and ethics for active duty officers and enlisted servicemembers, we assessed—against leading practices for strategic planning and performance management, and federal internal control standards—guidance, plans, and work products to determine the extent to which DOD has defined roles, responsibilities, measures, and timelines for managing its existing ethics program and professionalism oversight framework. For example, we reviewed the Code of Federal Regulations and DOD guidance such as the Joint Ethics Regulation, which governs DOD's ethics program and the management of related activities including training, financial disclosure reporting, and gift receipt. 
We also reviewed work plans and timelines that define the Senior Advisor for Military Professionalism (SAMP) position and the scope of its activities. We compared, against federal internal control standards and practices for effective ethics programs and strategic training, actions and work products related to the department's ongoing and planned initiatives to establish a values-based ethics program and to develop an ethical and professional culture. These documents included studies commissioned by DOD to assess its ethical culture and to design and implement a values-based program; memorandums and work products related to the 13 general and flag officer character initiatives; and Secretary of Defense memorandums requiring actions including ethics training and professional military education reviews. We also interviewed officials responsible for ethics and professionalism from the Office of the Secretary of Defense, the military services, and the Joint Staff to identify additional actions and determine progress in these areas. We assessed these documents by comparing them against leading practices for strategic planning and performance measurement that relate to the need for detailed plans outlining major implementation tasks and defined measures and timelines to measure progress; and federal internal control standards related to the need for performance measures and indicators, and the importance of managers determining proper actions in response to findings and recommendations from audits and reviews and completing such actions within established timeframes. We obtained and analyzed Fiscal Year 2012 to 2014 misconduct data from the DOD Office of Inspector General to identify discernible trends in reported misconduct, as well as data regarding the number of DOD personnel receiving annual ethics training. Specifically, we obtained calendar year 2014 DOD annual ethics training data that included active duty, reserve, and civilian personnel reported to the Office of Government Ethics by the 17 DOD Designated Agency Ethics Officials, excluding the National Security Agency. These are the most current data available on annual ethics training, and they are the data used by the Office of Government Ethics to determine DOD's compliance with the annual training requirement for financial disclosure filers. We did not assess the reliability of these data, but we have included them in the report to provide context. We did not use these data to support our findings, conclusions, or recommendations. To determine the percentage of DOD personnel who have completed annual ethics training, we obtained Fiscal Year 2014 data from the Office of the Under Secretary of Defense (Comptroller) on the number of DOD personnel, including active duty and reserve component military personnel and civilian full-time equivalents. We also reviewed relevant literature to identify ethics-related issues and best practices within DOD, and we met with foreign military officials, defense industry organizations, and commercial firms that we identified during our preliminary research and in discussion with DOD officials as having experience in implementing and evaluating compliance-based or values-based ethics programs in the public and private sectors, both domestically and internationally, to define the concept of values-based ethics and to gather lessons learned from values-based ethics program implementation. A full listing of these organizations can be found in table 5. 
To evaluate DOD’s tools and performance metrics for identifying, assessing, and measuring its progress in addressing ethics and professionalism issues, we examined assessment tools identified by DOD as containing ethics-related content, including command climate surveys and 360-degree assessments. We used content analysis to review and assess actions the department has taken to implement and use the results of command climate and 360-degree assessments in accordance with statutory requirements and departmental guidance. These requirements pertain to the implementation, tracking, and targeting of these tools, among other things. To do this, we met with officials from the Office of the Secretary of Defense, the military services, and the Joint Staff to obtain information on the status of their efforts to implement and track command climate assessments, and to develop and implement 360- degree assessments for general and flag officers in accordance with statutory requirements and departmental initiatives. We then assessed guidance and instructions developed by the military services and the Joint Staff to determine whether they addressed each of the statutory requirements and departmental guidance related to command climate assessments and 360-degree assessments. To ensure accuracy, one GAO analyst conducted the initial content analysis by coding the military services’ and the Joint Staff’s actions with respect to each requirement, and a GAO attorney then checked the analysis for accuracy. We determined that command climate guidance and instructions addressed a statutory or departmental requirement if it addressed each aspect of the requirement. Similarly, we determined the extent to which the military services and the Joint Staff had developed and implemented 360-degree assessments for all general and flag officers by evaluating the steps they had taken to develop and implement these tools for each general and flag officer rank within each organization. Any disagreements in the coding were discussed and reconciled by the analyst and attorney. We also spoke with officials within the Office of the Secretary of Defense, the Joint Staff, and the military services to identify performance metrics that could be used by the department to measure its progress in addressing ethics and professionalism issues, and we assessed the department’s efforts to identify such metrics against federal internal control standards and our prior work on performance measurement leading practices. In addressing both of our audit objectives, we interviewed officials from the organizations identified in table 5. We conducted this performance audit from September 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov. In addition to the contact named above, Marc Schwartz, Assistant Director; Tracy Barnes; Ryan D’Amore; Leia Dickerson; Tyler Kent; Jim Lager; Amie Lesser; Leigh Ann Sheffield; Michael Silver; Christal Ann Simanski; Cheryl Weissman; and Erik Wilkins-McKee made key contributions to this report.
Professionalism and sound ethical judgment are essential to executing the fundamental mission of DOD and to maintaining confidence in military leadership, but recent DOD and military service investigations have revealed misconduct related to, among other things, sexual behavior, bribery, and cheating. House Report 113-446 included a provision for GAO to review DOD's ethics and professionalism programs for military servicemembers. This report examines the extent to which DOD has developed and implemented (1) a management framework to oversee its programs and initiatives on ethics and professionalism; and (2) tools and performance metrics to identify, assess, and measure progress in addressing ethics and professionalism issues. GAO analyzed DOD guidance and documents related to military ethics and professionalism, reviewed literature to identify ethics issues and practices, and interviewed DOD, industry, and foreign military officials experienced in implementing ethics and professionalism programs. The Department of Defense (DOD) has a management framework to help oversee its existing ethics program and has initiated steps to establish such a framework to oversee its professionalism-related programs and initiatives, but its efforts could be strengthened in both areas. DOD has a decentralized structure to administer and oversee its existing, required compliance-based ethics program, which focuses on ensuring adherence to rules. However, DOD has not fully addressed a 2008 internal recommendation to develop a department-wide values-based ethics program, which would emphasize ethical principles and decision-making to foster an ethical culture and achieve high standards of conduct. In 2012, DOD studied the design and implementation of a values-based ethics program and in 2013 delivered related training to certain DOD personnel. DOD has decided to take no further actions to establish a values-based ethics program, but it has not demonstrated that additional actions are unwarranted or assessed the feasibility of expanding training to additional personnel. As a result, the department neither has assurance that it has adequately addressed the identified need for a values-based ethics program nor has information needed to target its training efforts appropriately. DOD established a 2-year, potentially renewable, position for a Senior Advisor for Military Professionalism, ending in March 2016, to oversee its professionalism-related efforts. Since 2014 the Advisor's office has identified and taken steps toward implementing some of its major tasks, which relate to coordinating and integrating DOD's efforts on professionalism. Professionalism relates to the values, ethics, standards, code of conduct, skills, and attributes of the military workforce. However, the office has not developed timelines or information to assess its progress in completing its major tasks. Thus, DOD does not have information to track the office's progress or assess whether the SAMP position should be retained after March 2016. DOD has not fully implemented two key tools for identifying and assessing ethics and professionalism issues, and it has not developed performance metrics to measure its progress in addressing ethics-related issues. DOD has identified several tools, such as command climate and 360-degree assessments, that can be used to identify and assess ethics and professionalism issues. 
However, guidance issued by the military services for command climate assessments does not meet all statutory requirements and DOD guidance. As a result, the services do not have the required level of accountability during the performance evaluation process over the occurrence of these assessments, or assurances that all military personnel are able to anonymously participate in them. Further, the Navy, Marine Corps, and Joint Staff have developed and implemented 360-degree assessments for some but not all general and flag officers, and therefore some of these officers are not receiving valuable feedback on their performance as intended by DOD guidance. Finally, federal internal control standards emphasize the assessment of performance over time, but DOD is unable to determine whether its ethics and professionalism initiatives are achieving their intended effect because it has not developed metrics to measure their progress. GAO recommends DOD determine whether there is a need for a values-based program, assess the expansion of training, modify guidance, assess the use of a key tool for identifying ethics and professionalism issues, and develop performance metrics. DOD generally or partially concurred with these recommendations but did not agree to develop information to assess the Advisor's office. GAO continues to believe the recommendations are valid, as further discussed in the report.
Congress enacted SCRA in December 2003 as a modernized version of the Soldiers' and Sailors' Civil Relief Act of 1940. In addition to providing protections related to residential mortgages, the act covers other types of loans (such as credit card and automobile) and other financial contracts, products, and proceedings, such as rental agreements, eviction, installment contracts, civil judicial and administrative proceedings, motor vehicle leases, life insurance, health insurance, and income tax payments. SCRA provides the following mortgage-related protections to servicemembers: Interest rate cap. Servicemembers who obtain mortgages prior to serving on active duty status are eligible to have their interest rate and fees capped at 6 percent. The servicer is to forgive interest and any fees above 6 percent per year. Servicemembers must provide written notice to their servicer of their active duty status to avail themselves of this provision. Foreclosure proceedings. A servicer cannot sell, foreclose, or seize the property of a servicemember for breach of a preservice obligation unless a court order is issued prior to the foreclosure or unless the servicemember executes a valid waiver. If the servicer files an action in court to enforce the terms of the mortgage, the court may stay any proceedings or adjust the obligation. Fines and penalties. A court may reduce or waive a fine or penalty that a servicemember incurs as a result of failing to perform a contractual obligation if the servicemember was in military service at the time the fine or penalty was incurred and the servicemember's ability to perform the obligation was materially affected by his or her military service. Federal authorities have applied this provision to prepayment penalties incurred by servicemembers who relocate due to permanent change-of-station orders and consequently sell their homes and pay off mortgages early. Adverse credit reporting. A servicer may not report adverse credit information to a credit reporting agency solely because servicemembers exercise their SCRA rights, including requests to have their mortgage interest rates and fees capped at 6 percent. Both servicemembers and servicers have responsibility for activating or applying SCRA protections. For example, to receive the interest-rate benefit, servicemembers must identify themselves as active duty military and provide a copy of their military orders to their financial institution. However, the responsibility of extending SCRA foreclosure protections to eligible servicemembers often falls to mortgage servicers. The burden is on the financial institution to ensure that borrowers are not active duty military before conducting foreclosure proceedings. Eligible servicemembers are protected even if they do not tell their financial institution about their active duty status. One of the primary tools mortgage servicers use to comply with SCRA is a website operated by DOD's Defense Manpower Data Center (DMDC) that allows mortgage servicers and others to query DMDC's database to determine the active duty status of a servicemember. Under SCRA, the Secretaries of each military service and the Secretary of Homeland Security have the primary responsibility for ensuring that servicemembers receive information on their SCRA rights and protections.
Typically, legal assistance attorneys on military installations provide servicemembers with information on SCRA during routine briefings, in handouts, and during one-on-one sessions. Additionally, DOD has established public and private partnerships to assist in the financial education of servicemembers. The limited data we obtained from four financial institutions showed that a small fraction of their borrowers qualified for SCRA protections. Our analysis suggests that SCRA-protected borrowers generally had higher rates of delinquency, although this pattern was not consistent across the institutions in our sample and cannot be generalized. However, SCRA protections may benefit some servicemembers. SCRA-protected borrowers at two of the three institutions from which we had usable data were more likely to cure their mortgage delinquencies than other military borrowers. Some servicemembers also appeared to have benefitted from the SCRA interest rate cap. Financial institutions we contacted could not provide sufficient data to assess the impact of different protection periods, but our analysis indicates that mortgage delinquencies appeared to increase in the first year after active duty. Based on our interviews and the data sources we reviewed, the number of servicemembers with mortgages eligible for SCRA protections is not known because servicers have not systematically collected this information, although limited data are available. Federal banking regulators do not generally require financial institutions to report information on SCRA-eligible loans or on the number and size of loans that they service for servicemembers. SCRA compliance requires that financial institutions check whether a borrower is an active duty servicemember and therefore eligible for protection under SCRA before initiating a foreclosure proceeding. However, institutions are not required to conduct these checks on loans in the rest of their portfolio, and two told us that they do not routinely check a borrower’s military status unless the borrower is delinquent on the mortgage. Consequently, the number of SCRA-eligible loans that these two institutions reported to us only includes delinquent borrowers and those who reported their SCRA eligibility to the financial institution. Two other institutions were able to more comprehensively report the number of SCRA-eligible loans in their portfolio because they routinely check their portfolio against the DMDC database. Additionally, only one of the financial institutions we contacted was able to produce historical data on the total number of known SCRA- eligible loans in its portfolio. Although exact information on the total number of servicemembers eligible for the mortgage protections under SCRA is not known, DOD data provide some context for approximating the population of servicemembers who are homeowners with mortgage payments and who therefore might be eligible for SCRA protections. According to DOD data, in 2012 there were approximately 1.4 million active duty servicemembers and an additional 848,000 National Guard and Reserve members, of which approximately 104,000 were deployed. While DOD does not maintain data on the number of servicemembers who are homeowners, DOD’s 2012 SOF survey indicated that approximately 30 percent of active duty military made mortgage payments. For reservists, DOD’s most recent survey of homeownership in June 2009 indicated that 53 percent of reservists made mortgage payments. 
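The survey figures above lend themselves to a rough back-of-envelope calculation of how many servicemembers might be making mortgage payments and thus could fall within SCRA's reach. The short sketch below is illustrative only, not a GAO estimate; it simply multiplies the reported 2012 force sizes by the survey-based percentages, and it overstates the SCRA-eligible population because eligibility also requires that the mortgage predate active duty service.

```python
# Rough, illustrative estimate of servicemembers who make mortgage payments,
# using the DOD figures cited above (2012 force sizes and survey percentages).
# This is a back-of-envelope sketch, not a GAO estimate.

active_duty = 1_400_000          # approximate active duty servicemembers, 2012
guard_reserve = 848_000          # approximate National Guard and Reserve members, 2012

pct_active_with_mortgage = 0.30  # 2012 SOF survey: ~30% of active duty made mortgage payments
pct_reserve_with_mortgage = 0.53 # June 2009 survey: ~53% of reservists made mortgage payments

active_with_mortgage = active_duty * pct_active_with_mortgage
reserve_with_mortgage = guard_reserve * pct_reserve_with_mortgage

# Note: the combined figure overstates SCRA-eligible borrowers, since the act's
# mortgage protections generally apply only to loans obtained before active duty.
print(f"Active duty with mortgages:   ~{active_with_mortgage:,.0f}")
print(f"Guard/Reserve with mortgages: ~{reserve_with_mortgage:,.0f}")
print(f"Combined:                     ~{active_with_mortgage + reserve_with_mortgage:,.0f}")
```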
According to DOD officials, industry trade group representatives, SCRA experts, and military service organizations, the servicemembers most likely to be eligible for SCRA mortgage protections are members of the Reserve components because they were more likely to have had mortgages before entering active duty service. Although comprehensive data on the number of servicemembers eligible for SCRA are not available, four financial institutions provided us with some data on the servicemembers they have identified in their portfolios in 2012. According to these data, a small percentage of the financial institutions’ total loan portfolios were identified as being eligible for SCRA protections. Table 1 details the number of loans held by each of the institutions from which we obtained data, including the estimated number of loans belonging to servicemembers and the number of loans the institutions identified as SCRA-eligible. Collectively, we estimate that the financial institutions from which we received useable data service approximately 27-29 percent of the mortgages held by servicemembers. This estimate is based on information from DOD’s SOF results on the estimated percentage of active duty servicemembers and reservists who make mortgage payments and the reported and estimated number of military borrowers that each of these institutions reported in their portfolios. Representatives with three of the financial institutions told us they have made changes to their data systems over the past 2 years to help better identify whether mortgage holders were active duty military and eligible for SCRA protections. They attributed these changes, in part, to DOD’s April 2012 upgrade of the DMDC database to allow financial institutions to check on the active duty status of up to 250,000 borrowers at once, as opposed to checking one individual at a time. Since then, some of the institutions had made changes to their systems to use the DMDC database to routinely check the military status of borrowers, thereby improving their available data on SCRA-eligible borrowers. Of the financial institutions we contacted, representatives with two told us that they now regularly check their entire loan portfolio against the DMDC database. Representatives with the other institutions said that they only check the military status of delinquent borrowers. To illustrate the extent to which these changes could improve the accuracy of the data on SCRA- eligible borrowers, representatives of one financial institution told us they used to rely on postal codes to help identify borrowers on or near military bases to determine whether they were likely servicemembers. This institution has since switched to a data system that allows a check of its entire portfolio against the DMDC database so that the institution can more accurately identify which borrowers are also servicemembers. Our analysis of data from three financial institutions suggests that SCRA- protected borrowers were substantially more likely to experience delinquency at any time than their non-SCRA-protected military counterparts, with one exception. The institutions provided us data with substantial inherent limitations that prevented us from fully analyzing the repayment practices of their military borrowers. However, the limited data allowed us to conduct some analyses of borrowers’ delinquency rates and the rates at which delinquent borrowers became current on their mortgages. 
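The delinquency and cure-rate figures that follow are, at bottom, simple ratios computed over loan-level records grouped by SCRA status. The sketch below shows one way such ratios could be computed; the records and field names (scra_protected, ever_delinquent, cured_within_year) are hypothetical stand-ins, not the servicers' actual data layouts.

```python
# Minimal sketch of the delinquency- and cure-rate comparisons described in the text.
# The records and field names are hypothetical; real servicer extracts differ by institution.

loans = [
    {"scra_protected": True,  "ever_delinquent": True,  "cured_within_year": True},
    {"scra_protected": True,  "ever_delinquent": False, "cured_within_year": None},
    {"scra_protected": False, "ever_delinquent": True,  "cured_within_year": False},
    {"scra_protected": False, "ever_delinquent": False, "cured_within_year": None},
]

def rate(subset, predicate):
    """Share of records in `subset` for which `predicate` is true."""
    return sum(predicate(r) for r in subset) / len(subset) if subset else float("nan")

for flag in (True, False):
    group = [r for r in loans if r["scra_protected"] is flag]
    delinquent = [r for r in group if r["ever_delinquent"]]
    label = "SCRA-protected" if flag else "Other military"
    print(label,
          "delinquency rate:", round(rate(group, lambda r: r["ever_delinquent"]), 3),
          "cure rate:", round(rate(delinquent, lambda r: bool(r["cured_within_year"])), 3))
```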
At two servicers, we found that SCRA-protected borrowers had delinquency rates from 16 to 20 percent. In contrast, non-SCRA-protected military borrowers had delinquency rates that ranged from 4 to 8 percent. These rates also varied across time within an institution. However, delinquency rates for the large credit union we analyzed were significantly smaller, and its SCRA-protected borrowers were less likely to be delinquent. For example, in the fourth quarter of 2012, 0.01 percent of SCRA-protected borrowers at this institution were delinquent on their loans, while 0.56 percent of the remaining borrowers in its loan portfolio were delinquent. The variation in delinquency rates among these financial institutions indicates that factors in addition to SCRA protection likely influence an institution's delinquency rates, including differences among each institution's lending standards and policies or borrower characteristics, such as income and marital status. Although it should be interpreted with caution because the results were not consistent at all three institutions for which we could conduct the analysis, our data analysis also suggests that borrowers protected by SCRA may have a better chance of curing their mortgage delinquency—making payments sufficient to restore their loan to current status—than those without the protections. The summary loan data we obtained from one institution show that its SCRA-protected military borrowers who were 90 or more days delinquent were almost twice as likely to cure their delinquency within a year as civilian borrowers and almost five times as likely as other military borrowers who were not SCRA-protected. Our analysis of loan-level data from another institution also suggested that its SCRA-protected borrowers had a higher likelihood of curing their mortgage delinquency than military borrowers not SCRA-protected, although their chances of curing the delinquency declined after leaving active duty. However, our analysis of data provided by a third institution suggested that cure rates for active duty SCRA-protected servicemembers were substantially lower than their noneligible active duty counterparts. Again, these differences in cure rates among the three institutions could reflect differences in institution policies or borrower characteristics. Our data analysis also indicates that at least some servicemembers have benefitted from the SCRA interest rate cap. As discussed earlier, servicemembers must provide written notice to their servicer of their active duty status to avail themselves of this provision. Analysis of one institution's data showed that approximately 32 percent of identified SCRA-eligible borrowers had a loan with an interest rate above 6 percent at origination. According to data provided by this institution—which included the initial interest rate and a current interest rate for 9 consecutive months in 2013—some SCRA-eligible borrowers saw their interest rates reduced to 6 percent or less, but almost 82 percent of the loans for those eligible for such a reduction retained rates above 6 percent. However, SCRA-eligible borrowers with interest rates higher than 6 percent had a larger average drop in interest rates from origination through the first 9 months of 2013 than non-SCRA-eligible military borrowers or SCRA-protected borrowers with initial rates below 6 percent.
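To make the 6 percent cap arithmetic concrete, the sketch below applies the rule described earlier (interest and fees above 6 percent per year are forgiven) to a hypothetical loan. The balance, note rate, and protection period are invented for illustration, and the simple-interest calculation ignores amortization of the principal.

```python
# Illustrative SCRA 6 percent interest-rate cap calculation on a hypothetical loan.
# All loan terms are invented; the rule applied is the one described in the text:
# interest above 6 percent per year is forgiven while the protection is in effect.

principal = 200_000      # hypothetical outstanding balance
note_rate = 0.075        # hypothetical original note rate (7.5 percent)
capped_rate = 0.06       # SCRA cap
months_protected = 12    # hypothetical period during which the cap applies

# Simple-interest approximation: the forgiven amount is the interest accrued
# above 6 percent on the outstanding balance (amortization is ignored).
monthly_forgiven = principal * (note_rate - capped_rate) / 12
total_forgiven = monthly_forgiven * months_protected

print(f"Interest forgiven per month: ${monthly_forgiven:,.2f}")
print(f"Interest forgiven over {months_protected} months: ${total_forgiven:,.2f}")
```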
We cannot determine how many rate reductions resulted from the application of SCRA protections; other potential reasons for rate decreases include refinancing or a rate reset on adjustable-rate loans. Several financial institutions told us that more servicemembers could benefit from the rate cap protection if they provided proof of their active duty status to their mortgage servicer. For example, representatives from one financial institution told us that they receive military documentation (orders, commanding officer letters, etc.) on 31 percent of their SCRA- eligible borrowers—as a result, up to 69 percent may not be receiving the full financial benefit that SCRA affords. The data financial institutions we contacted were able to provide were generally not sufficient to assess the impact of the various protection periods in effect since the enactment of SCRA: 90 days, 9 months, and 1 year. Because most of the institutions we interviewed reported that they made enhancements to their data systems in 2012 to better identify SCRA-eligible borrowers, they were unable to provide data for both SCRA-eligible borrowers and a comparison group of other military borrowers before the end of 2011, when the protection periods were shorter. Furthermore, none of our data that included SCRA-eligible borrowers and a comparison group of non-SCRA-eligible borrowers covered more than a 1-year span. As a result, the data were insufficient to evaluate the effectiveness of SCRA in enhancing the longer-term financial well-being of the servicemember leaving active duty or over the life of the mortgage. Finally, our measures of financial well-being— likelihood of becoming delinquent, curing a delinquency, and obtaining a reduction in the mortgage interest rate—are not comprehensive measures of financial well-being, but were the best measures available to us in the data. Our analysis of one servicer’s data suggests that all military borrowers— SCRA-protected or not—had a higher likelihood of becoming delinquent in the first year after they left active duty than when in the military. For example, in the loan-level data from an institution that used the DMDC database to check the military status of its entire loan portfolio, all of its military borrowers had a higher likelihood of becoming delinquent in the first year after they left active duty than when in service, with that risk declining somewhat over the course of the year for non-SCRA-protected military borrowers. Although not generalizeable, these findings are consistent with concerns, described below, that servicemembers may face financial vulnerability after separating from service. Those who were SCRA-protected had a smaller increase in delinquency rates in the first year after leaving active duty than other military borrowers, but this may be due to SCRA-protected borrowers having their loans become delinquent at higher rates before leaving active duty and not to a protective effect of SCRA. Although we were generally unable to obtain data to analyze the impact of the varying protection periods, data from one institution provided some indication of a positive effect of SCRA protection for servicemembers receiving up to a year of protection. 
Analyzable data from one institution on the mortgage status of all its military borrowers for a 9-month period in 2013, including those who had left active duty service within the last year, indicated that SCRA-protected borrowers who were within the 1-year protection period after leaving active duty service had a higher chance of curing their delinquencies than did the institution's other military borrowers who had left active duty service. We found this effect despite this being the same institution where we found that SCRA-eligible borrowers were less likely to cure their mortgage delinquencies when still on active duty (compared with non-SCRA-eligible borrowers). Overall, the findings from our data analysis on delinquencies and cure rates were consistent with our interviews and past work showing that the first year after servicemembers leave active duty can be a time of financial vulnerability. We previously reported that while the overall unemployment rate for military veterans was comparable to that of non-veterans, the unemployment rate for veterans more recently separated from the military was higher than for civilians and other veterans. Additionally, representatives from the National Guard and Army Reserve said that Guard and Reserve members may return to jobs in the civilian sector that could be lower paying or less stable than their previous military work. Based on a June 2012 DOD SOF survey of Reserve component members, an estimated 40 percent of reservists considered reemployment, returning to work, or financial stability as their biggest concern about returning from their most recent activation or deployment. As we reported in 2012, some financial institutions extended SCRA protections beyond those stated in the act, as a result of identified SCRA violations and investigations in 2011. For example, three mortgage servicers we included in this review noted that they had reduced the interest rate charged on servicemembers' mortgages to 4 percent—below the 6 percent required in SCRA. Additionally, the National Mortgage Settlement in February 2012 required five mortgage servicers to extend foreclosure protections to any servicemember—regardless of whether their mortgage was obtained prior to active duty status—who receives Hostile Fire/Imminent Danger Pay or is serving at a location more than 750 miles away from his or her home. As a result, any servicemember meeting these conditions may not be foreclosed upon without a court order. Two financial institutions we interviewed extended SCRA foreclosure protections to all active duty servicemembers. One of the financial institutions told us that it has made SCRA foreclosure protections available to all active duty servicemembers for the loans that it owns and services (thus, about 16 percent of its mortgage portfolio receives SCRA protection). However, officials at this institution said that they were bound by investor guidelines for the loans they service for other investors, such as Fannie Mae, the Department of Housing and Urban Development, and private investors. The officials said that many of the large investors have not revised their rules to extend SCRA protections; as a result, the institution has been unable to extend SCRA protections to all noneligible borrowers whose loans are owned by these entities. None of the financial institutions we interviewed advocated for a change in the length of time that servicemembers received SCRA protection.
Officials at one institution told us that they considered a 1-year period a reasonable amount of time for servicemembers to gain financial stability after leaving active duty and that they implemented the 1-year protection period before it became law. One attorney we interviewed who has a significant SCRA-related practice supported the extension of the SCRA foreclosure protection to 1 year because the revised timeframe matches the mortgage interest-rate protection period, which has remained at 1 year since 2008, when mortgages were added to the SCRA provision that limits interest rates to 6 percent. In contrast, a representative of one of the military support organizations we interviewed noted that, based on his interactions with servicemembers, the effect of extending the foreclosure protection from 9 months to 1 year has been negligible, although he also said that the extension was a positive development. DOD has entered into partnerships with many federal agencies and nonprofit organizations to help provide financial education to servicemembers, but limited information on the effectiveness of these efforts exists. Under SCRA, the Secretaries of the individual services and the Secretary of Homeland Security have the primary responsibility for ensuring that servicemembers receive information on SCRA rights and protections. Servicemembers are informed of their SCRA rights in a variety of ways. For example, briefings are provided on military bases and during deployment activities; legal assistance attorneys provide counseling; and a number of outreach media, such as publications and websites, are aimed at informing servicemembers of their SCRA rights. DOD also has entered into partnerships with many other federal agencies and nonprofit organizations to help provide financial education to servicemembers. These efforts include promoting awareness of personal finances, helping servicemembers and their families increase savings and reduce debt, and educating them about predatory lending practices. As shown in fig. 1, the external partners that worked with DOD have included financial regulators and nonprofit organizations. According to DOD officials, these external partners primarily focus on promoting general financial fitness and well-being as part of DOD's Financial Readiness Campaign. For example, partners including the Consumer Federation of America, the Better Business Bureau Military Line, and the Financial Industry Regulatory Authority's Investor Education Foundation provide financial education resources free of charge to servicemembers. DOD and the Consumer Federation of America also conduct the Military Saves Campaign every year, a social marketing campaign to persuade, motivate, and encourage military families to save money every month and to convince leaders and organizations to aggressively promote automatic savings. DOD has partnerships with the Department of the Treasury and the Federal Trade Commission to address consumer awareness, identity theft, and insurance scams targeted at servicemembers and their families. In addition, DOD officials noted that some partners provide SCRA outreach and support to servicemembers. For example, the Bureau of Consumer Financial Protection has an Office of Servicemember Affairs that provides SCRA outreach to servicemembers and mortgage servicers responsible for complying with the act.
This agency also works directly with servicemembers by collecting consumer complaints against depository institutions and coordinating those complaints with depository institutions to get a response from them and, if necessary, appropriate legal assistance offices. Similarly, nonprofit partners including the National Military Family Association, the Association of Military Banks of America, and the National Association of Federal Credit Unions provide information on SCRA protections to their members. But DOD officials also noted that partners are not required by DOD to provide SCRA education, and that such education may represent a rather small component of the partnership efforts. DOD established its financial education partnerships by signing memorandums of understanding (MOU) with the federal agencies and nonprofit organizations engaged in its Financial Readiness Campaign. The MOUs include the organizations’ pledges to support the efforts of military personnel responsible for providing financial education and financial counseling to servicemembers and their families as well as additional responsibilities of the individual partners. According to the program manager of DOD’s Financial Readiness Program (in the Office of Family Policy, Children and Youth, which collaborates with the partners), there are no formal expectations that any of the partners provide education about SCRA protections. She noted that such a requirement would not make sense for some partners, including those that do not interact directly with servicemembers but instead provide educational materials about financial well-being. The manager said that it was important that all of DOD’s partners be aware of the SCRA protections, and she planned to remind each of them about the SCRA protections in an upcoming partners meeting. The program manager noted that although her office has not conducted any formal evaluations of the partnerships to determine how effective the partners have been in fulfilling the educational responsibilities outlined in their MOUs, she believes that they have functioned well. According to personal financial managers in the individual services (who work with the personal financial advisors who provide financial education to servicemembers at military installations) and representatives from a military association, the education partnerships have been working well overall. But they also told us that obtaining additional information about the educational resources available through the partnerships and their performance would be helpful. For example, one association noted that it could benefit from a central website to serve as a clearinghouse for educational information from the various financial education partners. Staff from another organization said that DOD should regularly review all of these partners to ensure they were fulfilling their responsibilities. DOD officials told us they would likely discuss these suggestions at upcoming meetings with their financial education partners. The program manager of the Personal Financial Readiness Program also noted that to manage the partnerships, she regularly communicates with the partners to stay informed of their activities. In addition, she said that the Office of Family Policy, Children and Youth has been encouraging individual installation commanders to enter into agreements with local nonprofit organizations. 
The local partners would provide education assistance more tailored to servicemembers’ situations than the more global information the DOD partners provided. As we noted in our 2012 report, DOD has surveyed servicemembers on whether they had received training on SCRA protections, but had not assessed the effectiveness of its educational methods. To assess servicemembers’ awareness of SCRA protections, in 2008 DOD asked in its SOF surveys if active duty servicemembers and members of the Reserve components had received SCRA training. Forty-seven percent of members of the Reserve components—including those activated in 2008—reported that they had received SCRA training and 35 percent of regular active duty servicemembers reported that they had received training. Without an assessment of the effectiveness of its educational methods (for example, by using focus groups of servicemembers or results of testing to reinforce retention of SCRA information), we noted that DOD might not be able to ensure it reached servicemembers in the most effective manner. We recommended that DOD assess the effectiveness of its efforts to educate servicemembers on SCRA and determine better ways for making servicemembers aware of their SCRA rights and benefits, including improving the ways in which reservists obtain such information. In response to our recommendation, as of December 2013, DOD was reviewing the results of its recent surveys on the overall financial well- being of military families. The surveys have been administered to three groups: servicemembers, military financial counselors, and military legal assistance attorneys. While the surveys are not focused solely on SCRA, they take into account all financial products, including mortgages and student loans, covered by SCRA. DOD officials explained that they would use the results, including any recommendations from legal assistance attorneys, to adjust training and education on SCRA benefits, should such issues be identified. Our findings for this report—that many servicemembers appeared not to have taken advantage of their ability to reduce their mortgage interest rates as entitled—appear to reaffirm that DOD’s SCRA education efforts could be improved and that an assessment of the effectiveness of these efforts is still warranted. We provided a draft of this report to the Department of Defense, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, and the Bureau of Consumer Financial Protection for comment. The Department of Defense and the Office of the Comptroller of the Currency provided technical comments that were incorporated, as appropriate. We are sending copies of this report to interested congressional committees. We will also send copies to the Chairman of the Board of Governors of the Federal Reserve System, the Secretary of Defense, the Comptroller of the Currency, and the Director of the Consumer Financial Protection Bureau. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or EvansL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
This report examines (1) available information on changes in the financial well-being of servicemembers who received foreclosure-prevention and mortgage-related interest rate protections under SCRA, including the extent to which servicemembers became delinquent on their mortgages after leaving active duty and the impact of protection periods; and (2) the Department of Defense’s (DOD) partnerships with public- and private- sector entities to provide financial education and counseling about SCRA mortgage protections to servicemembers and views on the effectiveness of these partnerships. To assess changes in the financial well-being of servicemembers who receive SCRA mortgage protections, including the extent to which servicemembers became delinquent on their mortgages after leaving active duty and the impact of protection periods, we analyzed legislation and reviewed our prior work on SCRA. We obtained and analyzed loan- level data, institution-specific summary data, or both, from four financial institutions (three large single-family mortgage servicers and a large credit union). A fifth institution (a large single-family servicer) we contacted was unable to provide us with data for inclusion in our review. We did not identify financial institutions to protect the privacy of individual borrower data. Table 2 provides a summary of the data we obtained. We conducted a quantitative analysis of the data, which included information on (1) loan history, including loan status and total fees; (2) loan details such as the loan-to-value ratio and principal balance; and (3) financial outcomes of borrowers, such as initial and updated credit scores and whether the borrowers filed for bankruptcy or cured mortgage defaults. After controlling for loan and demographic characteristics and other factors to the extent that such data were available, we developed logistic regression models to estimate the probability of different populations becoming delinquent on their mortgage and curing their mortgage delinquency (by bringing their payments current). The estimates from these models may contain some degree of bias because we could not control for economic or military operations changes, such as changes in housing prices or force deployment that might affect a servicemember’s ability to repay a mortgage. Our analysis is not based on a representative sample of all servicemembers eligible to receive SCRA mortgage protections and therefore is not generalizable to the larger population. Moreover, we identified a number of limitations in the data of the four financial institutions. For example, the various servicer datasets identify SCRA status imperfectly and capture activity over different time periods with different periodicities. We also cannot rule out missing observations or other inaccuracies. Other issues include conflicting data on SCRA eligibility, data reliability issues related to the DOD database used to identify servicemembers (which is operated by the Defense Manpower Data Center, or DMDC), data quality differences across time within a given servicer’s portfolio, and data artifacts that may skew the delinquency statistics for at least one institution. Lastly, as servicer systems vary across institutions, none of the servicers from which we requested data provided us with every data field we requested for our loan-level analysis. Due to the differences in the data provided by each institution, we conducted a separate quantitative analysis of the data from each institution that provided loan-level data. 
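As a rough illustration of the modeling approach described above, the sketch below fits a logistic regression of delinquency on SCRA status and two loan characteristics using synthetic data. The variable names and data are hypothetical, and the specification is far simpler than the models actually estimated, which controlled for additional loan and demographic characteristics where the data allowed.

```python
# Illustrative logistic regression in the spirit of the models described above.
# Data are synthetic and variable names are hypothetical; this is a sketch of the
# technique, not a reproduction of the models estimated for the report.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "scra_protected": rng.integers(0, 2, n),   # 1 = SCRA-protected borrower (hypothetical flag)
    "ltv": rng.uniform(0.5, 1.1, n),           # loan-to-value ratio at origination
    "credit_score": rng.normal(700, 50, n),    # borrower credit score
})
# Synthetic outcome: higher LTV and lower credit score raise the odds of delinquency.
logit = -4 + 2.5 * df["ltv"] - 0.005 * (df["credit_score"] - 700) + 0.4 * df["scra_protected"]
df["delinquent"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("delinquent ~ scra_protected + ltv + credit_score", data=df).fit(disp=False)
print(model.summary())
# The coefficient on scra_protected is on the log-odds scale; exponentiating it
# gives an odds ratio for delinquency relative to other borrowers.
print("Odds ratio, SCRA-protected vs. not:", float(np.exp(model.params["scra_protected"])))
```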
To the extent that data were available, we also calculated summary statistics for each institution on the changes in financial well-being of the servicemembers, which allowed for some basis of comparison across institutions in levels of delinquency and cure rates. To conduct as reliable analyses as the data allowed, we also corrected apparent data errors, addressed inconsistencies, and corroborated results with past work where possible. Through these actions, and interviews with knowledgeable financial institution officials, we determined that the mortgage data and our data analysis were sufficiently reliable for the limited purposes of this report. However, because some servicer practices related to SCRA have made it difficult to distinguish SCRA-protected servicemembers from other military personnel, the relative delinquency and cure rates we derived from these data represent approximations, are not definitive, and should be interpreted with caution. Furthermore, we analyzed data from DOD’s Status of Forces (SOF) surveys from 2007 to 2012, which are administered to a sample of active duty servicemembers and reservists on a regular basis and cover topics such as readiness and financial well-being. We determined the survey data we used were sufficiently reliable for our purposes. We also analyzed DOD data on the size of the active duty military population and DOD survey data to estimate the percentage of servicemembers who make payments on a mortgage and may be eligible for SCRA protections, and the percentage of military borrowers that our sample of borrowers from selected financial institutions covers. Lastly, we also interviewed two lawyers with knowledge of SCRA, five selected financial institutions, DOD officials (including those responsible for individual military services, the Status of Forces Surveys, and a database of active duty status of servicemembers), and representatives of military associations and selected financial institutions to obtain available information or reports on the impact of SCRA protections on the long-term financial well-being of servicemembers and their families. To examine the effectiveness of DOD’s partnerships, we analyzed documentation on DOD’s partnerships with public and private entities that provide financial education and counseling to servicemembers. For example, we reviewed memorandums of understanding DOD signed with the federal agencies and nonprofit organizations engaged in its Financial Readiness Campaign. We reviewed the nature of such partnerships, including information or efforts related to SCRA mortgage protections. We also conducted interviews with DOD officials, including the program manager of DOD’s Personal Financial Readiness Program and personal financial managers in each of the individual military services; selected DOD partners that provide SCRA-related education to servicemembers; a military support association; and two lawyers with knowledge of SCRA. We asked about how such partnerships provide SCRA mortgage education and counseling and gathered views on and any assessments of the partnerships’ effectiveness. We conducted this performance audit from June 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Cody Goebel, Assistant Director; James Ashley; Bethany Benitez; Kathleen Boggs; Abigail Brown; Rudy Chatlos; Grant Mallie; Deena Richart; Barbara Roesmann; and Jena Sinkfield made key contributions to this report.
SCRA seeks to protect eligible active duty military personnel in the event that their military service prevents them from meeting financial obligations. Mortgage-related protections include prohibiting mortgage servicers from foreclosing on servicemembers' homes without court orders and capping fees and interest rates at 6 percent. Traditionally, servicemembers received 90 days of protection beyond their active duty service, but this period was extended to 9 months in 2008 and to 1 year in 2012. The legislation that provided the 1-year protection period also mandated that GAO report on these protections. This report examines (1) available information on changes in the financial well-being of servicemembers who received foreclosure-prevention and mortgage-related interest rate protections under SCRA, including the extent to which they became delinquent and the impact of protection periods; and (2) DOD's partnerships with public- and private-sector entities to provide financial education and counseling about SCRA mortgage protections to servicemembers and views on the effectiveness of these partnerships. To address these objectives, GAO sought and received data from three large mortgage servicers and a large credit union covering a large portion of all mortgage loans outstanding and potentially SCRA-eligible borrowers. GAO also reviewed documentation on DOD's partnerships and relevant education efforts related to SCRA mortgage protections. GAO interviewed DOD officials and partners who provided SCRA mortgage education and counseling. The number of servicemembers with mortgages eligible for Servicemembers Civil Relief Act (SCRA) mortgage protections is unknown because servicers have not collected this information in a comprehensive manner. Based on the limited and nongeneralizeable information that GAO obtained from the three mortgage servicers and the credit union, a small percentage of the total loan portfolios were identified as eligible for SCRA protections. Two large servicers had loan-level data on delinquency rates. For those identified as SCRA-eligible, rates ranged from 16 to 20 percent and from 4 to 8 percent for their other military borrowers. Delinquencies at the credit union were under 1 percent. Some servicemembers appeared to have benefitted from the SCRA interest rate cap of 6 percent, but many eligible borrowers had apparently not taken advantage of this protection. For example, at one institution 82 percent of those who could benefit from the interest rate caps still had mortgage rates above 6 percent. The data also were insufficient to assess the impact of SCRA protections after servicemembers left active duty, although one institution's limited data indicated that military borrowers had a higher risk of delinquency in the first year after leaving active duty. But those with SCRA protections also were more likely to cure delinquencies during this period than the institution's other military borrowers. Given the many limitations to the data, these results should only be considered illustrative. Most of these institutions indicated that they made recent changes to better identify SCRA-eligible borrowers and improve the accuracy of the data. The Department of Defense (DOD) has partnerships with many federal agencies and nonprofit organizations to help provide financial education to servicemembers, but limited information on the effectiveness of these partnerships exists. 
DOD and its partners have focused on promoting general financial fitness rather than providing information about SCRA protections. But some partners provide SCRA outreach and support to servicemembers. For example, the Bureau of Consumer Financial Protection has an Office of Servicemember Affairs that provides SCRA outreach to servicemembers and mortgage servicers responsible for complying with the act. Although stakeholders GAO interviewed generally offered favorable views of these partnerships, some said obtaining additional information about educational resources and partnership performance could improve programs. However, DOD has not undertaken any formal evaluations of the effectiveness of these partnerships. This finding is consistent with GAO's July 2012 review of SCRA education efforts, which found that DOD had not assessed the effectiveness of its educational methods and therefore could not ensure it reached servicemembers in the most effective manner. GAO recommended in July 2012 that DOD assess the effectiveness of its efforts to educate servicemembers on SCRA to determine better ways for making servicemembers (including reservists) aware of SCRA rights and benefits. In response to that recommendation, as of December 2013, DOD was reviewing the results of its recent surveys on the overall financial well-being of military families and planned to use these results to adjust training and education for SCRA, as appropriate. GAO's current finding that many servicemembers did not appear to be taking advantage of the SCRA interest rate cap appears to reaffirm that DOD's SCRA education efforts could be improved and that an assessment of the effectiveness of these efforts is still warranted.
The Great Lakes network director established the VACHCS integration process. This separate and temporary process involved an ICC, a Stakeholders Advisory Group (SAG), and service-specific work groups. (See app. I for an illustration of VACHCS' integration process.) The network director, the VACHCS director, service chiefs, and stakeholders determined the membership of the committees and the groups participating in the VACHCS integration process. The Chief of Staff, VA New Jersey Health Care System, chaired the ICC, which consisted of 15 members. The other 14 members included representatives from the Great Lakes network, unions, VACHCS employees, Chicago medical schools, and the veterans service organizations (see app. II for a list of ICC members). The ICC held its first meeting in October 1996 and established service-specific work groups to review services and propose recommendations for integration. Between October 1996 and October 1997, the ICC held nine meetings to review, rework, and modify work group recommendations. Work group recommendations approved by the ICC were forwarded to the VACHCS director for review, approval, and implementation. Any integration recommendations having an impact on network initiatives were reviewed and approved by the network director. The VHA Under Secretary for Health reviewed and approved integration recommendations affecting clinical services and programs. The service-specific work groups had responsibility for conducting analyses and proposing integration recommendations to the ICC. Work group participants included VACHCS staff, representatives of affiliated medical schools, unions, community groups, and veterans' representatives. The service-specific work groups fell into three categories: administrative, direct patient care, and patient support. Five work groups reviewed administrative services, such as engineering, information resource management, and medical administration services. Fourteen work groups reviewed direct patient care services, such as medical, surgical, psychiatric, and dental. The remaining nine work groups analyzed patient support services, such as chaplain, nutrition and food, and pathology and laboratory services. (App. III contains a list of the services, by category.) The Stakeholders Advisory Group provided input and advice to the ICC regarding work group activities and proposed integration recommendations. The SAG consisted of 17 members, including representatives of elected officials, affiliated medical schools, community groups, labor unions, and veterans service organizations (see app. IV for a list of members). It met seven times over a 12-month period. In addition, the ICC, the SAG, the network director, and the VACHCS director determined that voluntary, recreational therapy, and payroll services did not require work groups. The VACHCS director made integration decisions for these three services. The VACHCS integration process produced a total of 200 integration recommendations. Forty-six recommendations maintain the status quo or will not be implemented; therefore, no changes occurred within the services as a result of those recommendations. Thirty-eight recommendations have been deferred to the VACHCS director or the deans' committee for further consideration.
The remaining 116 integration recommendations are in various stages of implementation, with 90 percent either having been or in the process of being implemented, as table 1 shows. Most integration decisions will reengineer services, while the least number of decisions will consolidate services, as table 2 shows. When unifying management, VACHCS eliminated a chief of service position at one of the two hospitals. For example, before the integration, a service, such as medical administration, had two chiefs of service—one at each hospital. By unifying management, one chief assumed responsibility for the service at both hospitals, and the chief’s position at one hospital was eliminated. Reengineering may involve either standardizing VACHCS policies, procedures, and databases within a service for both hospitals or establishing more effective or efficient approaches for conducting business. Before the integration, each service at each hospital had its own policies and procedures. Several chiefs of service told us they adopted the best policy or procedure from one hospital and created a standard to be used at both hospitals. For example, the nursing service standardized the professional standards boards for registered and licensed practical nurses at both hospitals. The medical administration service created a more efficient approach to its transcription activity by negotiating one transcription contract, which resulted in enhanced productivity and consistency for discharge summaries and other patient-related reports. Consolidation may involve moving an entire service, or some part of a service, to a single location. VACHCS decisions consolidated parts of a service, not an entire service. For example, the VACHCS director consolidated payroll, within the fiscal service, by transferring five employees from West Side Hospital to Lakeside Hospital. In addition, specific testing is now done at one hospital within the pathology and laboratory service. Although the VACHCS ICC has completed its work, future integration recommendations for 12 services have been deferred to the newly created joint deans’ committee. (See app. V for a list of these services.) Integration recommendations to unify management and to reengineer and consolidate the largest services, such as medicine, surgery, and psychiatry, could be the most significant and most difficult to accomplish. The VACHCS integration decisions affected veterans, employees, and medical schools. Most of the integration decisions affected the administrative and patient support services. Integration decisions affecting the direct patient care services, such as medicine, surgery, and psychiatry, have been deferred. These services continue to be provided at both hospitals. VACHCS integration appears to have had a small but positive impact on veterans. Veterans continue to obtain medical, surgical, and psychiatric services at the same hospitals as they have in the past. VACHCS officials reported that the level of service to veterans is being maintained while some changes enhance access and quality of care. For example, the pharmacy service reported reducing patient prescription waiting time from 90 minutes to 20 minutes by expanding its hours of operation and using new technology to fill prescriptions. Also, VACHCS officials stated that veterans’ access to the social work service improved by transferring some administrative activities to the medical administration service, thus giving social workers more time to spend with patients. 
In addition, a greater percentage of nurses are spending more time with patients, thus enhancing the quality of care, according to VACHCS officials. Three VACHCS consolidation decisions affected veterans. As a result, the number of veterans who may be inconvenienced by traveling to either Lakeside Hospital or West Side Hospital for such care is small, as shown in table 3. Although there were seven other consolidation decisions, they will not affect where veterans receive their care. For example, consolidation of flow cytometry within the pathology and laboratory service at Lakeside will not affect veterans because only the blood sample is sent to Lakeside for analysis. The veteran can have blood drawn at West Side, if that is more convenient, and the sample will be sent to the laboratory at Lakeside. The VACHCS integration affected employees in three ways. First, it eliminated 80 positions; however, only 6 positions were staffed at the time of their elimination. The remaining 74 positions were unstaffed, as table 4 shows. In anticipation of the integration of the Lakeside and West Side hospitals, vacancies created by attrition were left unstaffed with the expectation that a smaller number of employees would be required, according to the VACHCS director. By eliminating unstaffed positions, VACHCS minimized the hardship on currently employed staff. Second, employees from one hospital were transferred to the other hospital. VACHCS officials reported transferring about 29 employees. For example, 5 employees performing payroll functions were transferred from West Side to Lakeside, and 10 employees performing medical care cost recovery functions were transferred from Lakeside to West Side. Third, employees will travel intermittently to each hospital to perform work. For example, 14 single chiefs of service told us they will shuttle the 6 miles between the Lakeside and West Side hospitals to perform their duties. VACHCS integration appears to have had a positive impact on the affiliated medical schools. Clinical services, such as medicine, surgery, and psychiatry remain unchanged. These services and medical education continue to be provided at both hospitals using the same management structure and operating procedures. However, educational opportunities for residents and research opportunities for staff have been enhanced in limited instances by integrating the two hospitals, according to VACHCS officials. For example, the goal of the new joint deans’ committee is to offer privileges to residents at both hospitals. This will provide affiliates with greater diversity in their education and research programs. VACHCS officials estimated that integration decisions will result in savings of several million dollars, although it is not possible to estimate the full magnitude of savings at this time. This is because most savings involved reengineering decisions. These savings, by their nature, are difficult to estimate in terms of the extent of efficiencies that will be realized. At present, VACHCS officials estimate that a savings of at least $4.9 million annually and about $2.25 million in one-time savings can be attributed to the integration. Most of the recurring savings generated came from decisions to reengineer patient support services, as table 5 shows. Given that hospitals are service providers and are labor intensive, most recurring savings generated came from eliminating personnel positions. 
Of the estimated $4.9 million, about $3.7 million of the savings are attributed to eliminating 74 unstaffed positions and 6 staffed positions. Other savings resulted from decisions such as standardizing drug formularies and reducing the need for contracting by performing activities in-house. The VACHCS integration generated one-time savings of about $2.25 million. For example, the network director approved replacement of only one angiography suite for VACHCS, resulting in a one-time cost avoidance of $1.25 million. In addition, VACHCS officials said that by replacing only one of two cameras in the nuclear medicine service, VACHCS avoided spending $500,000. Furthermore, it refrained from spending another $500,000 by sharing one computer system and thus eliminating the need to upgrade a second system. VACHCS officials reported that the integration will likely lead to additional savings but that the annual savings were not measurable at this time. For example, the Lakeside and West Side hospitals will be jointly purchasing supplies and equipment. VACHCS officials believe that this joint purchasing will result in lower costs, but they were unable to estimate the amount of the savings. Overall, the officials who reported nonmeasurable savings also indicated that the amounts would be insignificant compared with the measurable savings. We provided copies of a draft of this report for review and comment to VA, the University of Illinois College of Medicine, and Northwestern University Medical School, and we received comments from each of them. These comments are summarized in the following sections. The comments in their entirety are in appendixes VI, VII, and VIII, respectively. The VHA Under Secretary for Health reviewed the report and acknowledged that it will be of interest to the Great Lakes network in its future planning as well as to other networks contemplating integrations of their facilities. In this regard, he noted that the report will be provided for consideration in the planned contractor study of health care delivery in the Great Lakes network. This study is being done in response to our recent report, VA Health Care: Closing a Chicago Hospital Would Save Millions and Enhance Access to Services (GAO/HEHS-98-64, Apr. 16, 1998). The Dean of the University of Illinois College of Medicine commented that our report accurately reflects the recent VACHCS integration process and decisions to date. He emphasized that future integration recommendations are being evaluated by the deans’ committee, which is expected to assist VACHCS in realizing further operational efficiencies, preserving high-quality care for U.S. veterans, and maintaining the educational and research environment afforded by the VA health care system. He noted that the goal of the new deans’ committee is to offer privileges to residents at both hospitals, thus providing affiliates with greater diversity in their education and research programs. Finally, he stated that the deans’ committee should be given sufficient time to complete its work and have its performance evaluated before future changes are considered. He noted that integration recommendations to unify management of, reengineer, and consolidate the largest services, such as medicine, surgery, and psychiatry, could be the most significant and most difficult to accomplish. These areas have yet to be considered by the deans’ committee. 
The Dean of Northwestern University Medical School commented that the report accurately describes the VACHCS structure and integration process. He stated that the integration structure and process allowed veterans, employees, health care providers, and affiliated institutions an effective voice in the deliberations. Collaboration between the leadership of the affiliated medical schools and the management of each facility has produced a more efficient health care delivery system for veterans without sacrificing quality, he said. He provided assurance that as the joint deans' committee considers and implements integration decisions on the remaining major services, the issues of access and quality of care for veterans will be at the forefront of its deliberations. The Dean highlighted a positive impact of the integration that received brief mention in the report. While the integration's predominant goal of achieving cost savings has been and will continue to be realized, the integration process has created a level of cooperation between all involved institutions that is expected to provide benefits to veterans' health care in Chicago for years to come. In addition, the integration process is building new relationships between the two medical schools that could lead to future collaboration on many levels. Though nearly impossible to quantify, these ancillary benefits are important, he said. Finally, the Dean commented that with respect to efficiencies and cost savings, additional savings are expected to be realized as decisions about major services are made. He stated that a thorough analysis of the integration process and its benefits cannot be done until all integration decisions have been implemented and the integrated facilities have had sufficient time to absorb the changes and produce results. As agreed with your offices, copies of this report are being sent to the Secretary of Veterans Affairs, interested congressional committees, and other interested parties. Copies will be made available to others upon request. Please contact me on (202) 512-7101 if you have any questions about this report. Other GAO contacts and staff acknowledgments for this report are listed in appendix IX. The physical medicine and rehabilitation service (direct care) work group did not provide a report by October 1, 1997. In addition to those named above, Lesia Mandzia and John Borrelli collected and analyzed information about the status and impacts of the integration recommendations. Joan Vogel provided technical support.
Pursuant to a congressional request, GAO reviewed: (1) what impacts the Veterans Affairs (VA) Chicago Health Care System (VACHCS) had on veterans, employees, and medical schools in the Chicago area; (2) VACHCS' integration process; (3) the integration decisions made; and (4) dollar savings for these decisions. GAO noted that: (1) the VACHCS integration process, which began in 1996, included 28 work groups that studied administrative, patient support, and direct care services and made recommendations to an Integrated Coordinating Committee (ICC); (2) the ICC reviewed, reworked, and modified work group recommendations; (3) work group recommendations approved by ICC were sent to the VACHCS director for review, approval, and implementation; (4) recommendations involving changes to clinical services were also reviewed and approved by the Great Lakes network director and the Veterans Health Administration Under Secretary for Health; (5) VACHCS involved stakeholders in its integration process; (6) the VACHCS integration: (a) unified the management of 16 services; (b) reengineered 23 services by standardizing operating policies, practices, and databases or by establishing more efficient practices; and (c) consolidated parts of eight services in a single location; (7) the integration appears to have had a small but positive impact on veterans, employees, and medical schools; (8) VACHCS officials report that they have maintained the level of service to veterans and, in some instances, even improved access and quality while minimizing the hardship on VA employees by not dismissing any current employees; (9) medical school affiliations remain largely unchanged, and medical education continues to be provided at both hospitals, using the same management structure and operating practices; (10) the VACHCS integration saved about $7 million; and (11) VACHCS saved about $4 million by eliminating 80 positions, of which 74 were vacant, and approximately $2 million by avoiding the purchase of duplicate equipment and related construction.
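As a quick arithmetic cross-check of the savings figures cited in the report and summary above, the short sketch below is an illustration only; the grouping and rounding are assumptions on our part rather than VACHCS's own accounting.

```python
# Illustrative reconciliation of the VACHCS savings figures quoted above.
# Dollar amounts are in millions; the rounding choices are assumptions.
recurring_annual = 4.9              # officials' estimate of annual recurring savings
from_positions = 3.7                # portion attributed to eliminating 80 positions (roughly the "$4 million")
one_time = 1.25 + 0.5 + 0.5         # angiography suite, nuclear medicine camera, computer system

print(from_positions)                           # 3.7
print(one_time)                                 # 2.25, the reported one-time savings (roughly "$2 million")
print(round(recurring_annual + one_time, 2))    # 7.15, consistent with the "about $7 million" in the summary
```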
You are an expert at summarizing long articles. Proceed to summarize the following text: As a result of controversy and litigation surrounding the 1990 Decennial Census, the U.S. Census Bureau recognized the need for a full-scale review of its decennial census program. The Congress, OMB, and GAO also agreed that this review was needed and that it must occur early in the decade to implement viable actions for the 2000 Census and to prepare for the 2010 Census. Early in the 1990s, in reports and testimonies, we stressed the importance of strong planning and the need for fundamental reform to avoid the risk of a very expensive and seriously flawed census in 2000. To address a redesign effort, in November 1990 the bureau formed the Task Force for Planning the Year 2000 Census and Census-Related Activities for 2000-2009. The task force was to consider lessons learned from the 1990 Census, technical and policy issues, constitutional and statutory mandates, changes in U.S. society since earlier decennial censuses, and the most current knowledge of statistical and social measurement. The bureau also established a Year 2000 Research and Development Staff to assist the task force and conduct numerous research projects designed to develop new approaches and techniques for possible implementation in the 2000 Census. In June 1995, the task force issued its report, Reinventing the Decennial Census. Concerns about the 1990 Census also led the Congress to pass the Decennial Census Improvement Act of 1991 (Public Law 102-135) requiring the National Academy of Sciences to study the means by which the government could achieve the most accurate population count possible and collect other demographic and housing data. The academy established a panel on methods to provide an independent review of the technical and operational feasibility of design alternatives and tests conducted by the U.S. Census Bureau. The panel issued its final report in September 1994. A second academy panel on requirements examined the role of the decennial census within the federal statistical system and issued its final report in November 1994. In March 1995, the bureau conducted the 1995 Census Test which provided a critical source of information to decide by December 1995 the final design of the 2000 Census. These efforts resulted in a planned approach for reengineering the 2000 Census which was presented in a May 19, 1995, U.S. Census Bureau report, The Reengineered 2000 Census. In October 1995, we testified on the bureau’s plans for the 2000 Census. In that testimony, we concluded that the established approach used to conduct the 1990 Census had exhausted its potential for counting the population cost-effectively and that fundamental design changes were needed to reduce census costs and to improve the quality of data collected. We also raised concerns about the bureau proceeding with design plans for the 2000 Census without input from the Congress. In the intervening months, the bureau was unable to come to agreement with the Congress on critical design and funding decisions. In February 1997, we designated the 2000 Decennial Census a new high-risk area because of the possibility that further delays could jeopardize an effective census and increase the likelihood that billions of dollars could be spent and the nation be left with demonstrably inaccurate census results. 
In July 1997, we updated our 1995 testimony on bureau design and planning initiatives for the 2000 Census and assessed the feasibility of bureau plans for carrying out the 2000 Census. To respond to Title VIII of Public Law 105-18, which required the Department of Commerce to provide detailed data about the bureau's plans by July 12, 1997, the bureau issued its Report to Congress, The Plan For Census 2000. This plan also incorporated the bureau's Census 2000 Operational Plan that was updated annually. In November 1997, Public Law 105-119 established the Census Monitoring Board to observe and monitor all aspects of the bureau's preparation and implementation of the 2000 Census. Section 209(j) of this legislation also required the bureau to plan for dual tracks of the traditional count methodology and the use of statistical sampling to identify historically undercounted populations of children and minorities. As 1 of 13 bureaus within the Department of Commerce, the U.S. Census Bureau must submit its annual budget for review and inclusion in the department's budget. The department must then make choices in light of the overall budget it submits to OMB and will therefore adjust bureau-requested budgets as it deems necessary. OMB reviews and further adjusts department and bureau budgets to reflect the programs and priorities of the entire federal government; the result becomes the President's Budget. The Congress may then adjust the President's Budget through the appropriation process, and the appropriated amounts become the budgets of the departments and the bureaus after the President signs the appropriations acts. The appropriations for the decennial census are no-year funds that are available until expended, rescinded, transferred, or until the account is closed. As shown in table 1, the Department of Commerce requested a total of $268.7 million for 2000 Census planning and development in the President's Budgets for fiscal years 1991 through 1997. The program received total funding of $223.7 million from the Congress, or about 83 percent of the amount requested. Although the 2000 Census received all of the funding requested in the President's Budgets for fiscal years 1991 and 1992, it received reduced funding for each fiscal year from 1993 through 1997. According to the bureau, these reductions resulted in the elimination, deferral, or scaling back of certain projects in planning for the 2000 Census. The bureau subsequently obligated 99 percent of its appropriated 2000 Census funding through fiscal year 1997. Bureau records indicated that the bulk of $86 million of decennial funding received through the end of fiscal year 1995 was obligated for program development and evaluation methodologies, testing and dress rehearsals, and planning for the acquisition of automated data processing and telecommunications support. For fiscal years 1996 and 1997, bureau records indicated that the bulk of $138 million of decennial funding received was obligated for planning the establishment of field data collection and support systems, refining data content and products, evaluating test results, and procuring automated data processing and telecommunications support. For the planning and development phase, personnel costs consumed about 53 percent of planning and development funds; contractual services consumed 16 percent; and space, supplies, travel, and other expenses consumed the remaining 31 percent. 
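The percentages in this funding overview follow directly from the dollar amounts quoted. Purely as an illustration (not part of the bureau's records or GAO's analysis), a minimal sketch recomputing them:

```python
# Recomputing the summary percentages quoted above (amounts in $ millions).
requested = 268.7    # total requested in the President's Budgets, fiscal years 1991-1997
received = 223.7     # total appropriated by the Congress

print(f"Share of request funded: {received / requested:.0%}")   # about 83 percent

# Split of planning and development obligations by object class
personnel, contractual = 0.53, 0.16
print(f"Space, supplies, travel, and other: {1 - personnel - contractual:.0%}")  # about 31 percent
```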
Because of different major program categories used by the bureau from fiscal years 1991 through 1997, we could not present a comprehensive table of funding for the period. However, we were able to analyze the funding by fiscal year; a detailed analysis of funding requested, received, and obligated, as well as funds budgeted by major program category, for fiscal years 1991 through 1997 is presented in appendix II. The U.S. Census Bureau was responsible for carrying out its mission within the budget provided, and bureau management determined the specific areas in which available resources were invested. We could not determine what effect, if any, higher funding levels might have had on census operations, as this is dependent upon actual implementation and the results of management decisions that may or may not have occurred. However, according to bureau officials, lower than requested funding levels for fiscal years 1993 through 1997 adversely affected the bureau's planning and development efforts for the 2000 Census. As examples, they cited the following 10 areas where reduced funding levels caused the bureau to curtail planning initiatives. Although lower funding levels may have affected these areas, information from previous bureau and GAO reports and testimony indicated that operational, methodological, and other factors also contributed to weaknesses in the bureau's planning efforts. 1. Difficulties in retaining knowledgeable staff. Although many key bureau personnel and project managers involved with the 2000 Census had also worked on the 1990 and earlier decennial censuses, bureau officials stated that many experienced people retired or left the bureau after the 1990 Census. According to the bureau, a contributing factor was lower funding levels to pay personnel compensation and benefits, which in turn reduced the number of personnel with institutional knowledge of the decennial census available to support the 2000 Census planning and development effort. We noted that soon after a major event such as the decennial census count, it is not unusual for personnel to leave the bureau, as did three senior executives after the 2000 Census. In addition, Office of Personnel Management data indicated that over half of the bureau's full-time, nonseasonal work force of 5,345 employees as of March 2002 is eligible for retirement by 2010. Thus, the human capital issue will remain a key planning area to ensure that the bureau has the skill mix necessary to meet its future requirements. 2. Scaled-back plans for testing and evaluating 1990 Census data. A bureau official stated that the amount of qualitative and quantitative data from the 1990 Census was limited and hampered the quality and results of planning and development efforts for the 2000 Census. Additionally, many opportunities were lost in capitalizing on the 1990 Census data that did exist, and more funding to evaluate these data could have facilitated 2000 Census research and planning efforts. Bureau officials stated that as they moved forward with planning for the 2000 Census, they had to scale back plans for testing and evaluating 1990 Census data because of a lack of funding. For example, they cited the inability to update a 1990 Census study of enumerator supervisor ratios. 3. Delays in implementing a planning database. Bureau officials stated that they were unable to implement an effective planning database in the early years of the 2000 Census. 
In one of its first plans, the bureau conceived of a planning database that would capture data down to very small geographic levels and would be continuously updated over the decade for a number of census purposes. This database would have enabled the bureau to target areas where language resources were needed, identify areas where enumeration and recruiting could be difficult, and position data capture centers to support the most cost- efficient and effective infrastructure. However, according to bureau officials, with lower funding through fiscal year 1995, the planning database was put on hold. Later in the decade, the bureau resurrected the planning database but did not develop and use it fully. 4. Limited resources to update address databases. According to bureau officials, sufficient resources to update and coordinate large databases of addresses and physical locations provided a continuous challenge to the bureau. At the end of the 1990 Census, the bureau’s database contained 102 million addresses, each assigned to the census block area in which it was located. At that point, the U.S. Census Bureau’s Geography Division initiated discussions with the U.S. Postal Service to utilize its Delivery Sequence File (DSF) that contained millions of addresses used to deliver the U.S. mail. The bureau planned to use the DSF in updating its address database which became the Master Address File (MAF). With lower funding through 1995, bureau officials cited limited resources to update the MAF database and to assess the quality of entered information. 5. Program to identify duplicate responses was not fully developed. Bureau officials stated the program to identify duplicate responses was not fully developed for the 2000 Census and more emphasis and funding were needed to develop appropriate software and procedures. It is important to be able to identify duplications in the MAF and multiple responses from a person or household that contribute to a population overcount. This includes operations to identify multiple responses for the same address and computer matching of census responses received against all other people enumerated in the block. Duplications also occurred due to college students counted both at school and at home, people with multiple residences, and military personnel residing outside their home state. 6. Abandoned plans to use administrative records. In early planning for the 2000 Census, the bureau funded efforts to use records from nonbureau sources of information (such as driver licenses, voter registrations, and other government programs) to supplement the census count. This administrative records project was the result of extensive research studies conducted by the bureau beginning in 1993 that focused on initial plans for three uses of nonbureau information to derive census totals for some nonresponding households, enhance the coverage measurement operations, and help provide missing content from otherwise responding households. Although bureau officials determined that administrative records had the potential to improve coverage, the bureau abandoned plans to fund and more fully develop an administrative records database in February 1997. While the lack of funding may have been a contributing factor, bureau documents indicated that this action was primarily due to questions about the accuracy and quality of administrative records and issues of privacy protection. 7. Problems with multiple language questionnaires. 
Bureau officials cited several funding and operational problems with census questionnaires in the five languages other than English that were used. In 1995, the bureau planned to mail forms in both Spanish and English to areas with high concentrations of Spanish speakers and produce forms in other languages as needed. In March 1997, in response to requests for forms in other languages, the bureau announced its intent to print questionnaires in multiple languages in an effort to increase the mail response rate. The bureau selected four additional languages as a manageable number based upon a perceived demand. However, the bureau could not determine how to pinpoint the communities that needed the non-English questionnaires. Instead, the bureau indicated in a mailing that the questionnaires were available in five languages and that, if an individual wanted a questionnaire in a language other than English, the individual had to specifically request the questionnaire in that language. As a result, the bureau did not know the number of questionnaires to print in the five languages until late in the process. Finally, the bureau did not have the time to comprehensively assess the demand for questionnaires in other languages. 8. Cost-effective use of emerging data capture technology. Early bureau research assessed current and emerging data capture technologies, such as electronic imaging, optical mark recognition, and hand-held devices, which offered the potential for significant cost reductions in processing large volumes of data. Bureau officials indicated they were unsure of their exact requirements for the emerging data capture technologies, and this resulted in most contracts being cost-reimbursement contracts that required more funding than planned. The bureau estimated that it ultimately spent about $500 million on contracts to improve the data capturing process. Bureau officials also stated that they did not have the time to fully develop and test the data capture systems or data capture centers, both of which were contracted for the first time in the 2000 Census. For example, the bureau said it could not adequately prepare for the full development and testing of the imaging contract. As a consequence, the first imaging test did not occur until 1998, and bureau officials stated that it became clear that imaging was not working due to technical and implementation problems. To some extent, this is not unexpected when implementing new technologies. Although the contractor and the bureau felt the system was not ready, it was tested anyway due to the short time frame, and major problems developed. Even though the system eventually became operational in time for the 2000 Census count, bureau officials indicated that this occurred at a higher than anticipated risk and cost. 9. More use of the Internet. In the early 1990s, the full impact of the Internet as a global communications tool was not yet envisioned. Officials indicated that the bureau did not have sufficient time and funding during the planning phase to fully understand and test all the implications of using the Internet as a vehicle for census responses. In addition, the bureau's major concern was that computer security issues had not been adequately addressed, particularly since census information must be protected and significant penalties may be imposed for unauthorized disclosure. Also, the public perception of using the Internet as a response medium had not been fully explored. 
Nevertheless, in February 1999, the bureau established a means for respondents to complete the 2000 Census short forms on the Internet protected by a 22-digit identification number. According to bureau officials, they received about 60,000 short forms via the Internet. The rapid evolution of the Internet has the potential to significantly reduce bureau workload and the large volume of paper forms for the 2010 Census. 10. Preparation for dress rehearsals. Bureau officials cited many problems during the fiscal year 1998 dress rehearsals for the 2000 Census that were a direct result of funding levels in the early planning and development years. They stated that because of delays in receiving funding in the fall of 1997, they had to delay the dress rehearsal census day from April 4 to April 18, 1998. In addition, because many new items were incomplete or still under development, the bureau said it could not fully test them during the dress rehearsals with any degree of assurance as to how they would affect the 2000 Census. However, despite these problems, the bureau testified in March 1998 that all preparatory activities for the dress rehearsal—mapping, address listing, local updates of addresses, opening and staffing offices, and printing questionnaires—had been completed. In 1999, the bureau issued an evaluation that concluded that all in all, the Census 2000 dress rehearsal was successful. The evaluation also stated that the bureau produced population numbers on time that compared favorably with independent benchmarks. It also acknowledged some problems, but devised methods to address those problems. Although the bureau conceded that planning efforts could be improved, the lack of funding did not appear to be a significant issue, except as it affected the ability to earlier plan the dress rehearsal. The bureau’s experience in preparing for the 2000 Census underscores the importance of solid, upfront planning and adequate funding levels to carry out those plans. As we have reported in the past, planning a decennial census that is acceptable to stakeholders includes analyzing the lessons learned from past practices, identifying initiatives that show promise for producing a better census while controlling costs, testing these initiatives to ensure their feasibility, and convincing stakeholders of the value of proposed plans. Contributing factors to the funding reductions for the 2000 Census were the bureau’s persistent lack of comprehensive planning and priority setting, coupled with minimal research, testing, and evaluation documentation to promote informed and timely decision making. Over the course of the decade, the Congress, GAO, and others criticized the bureau for not fully addressing such areas as (1) capitalizing on its experiences from past decennial censuses to serve as lessons to be learned in future planning, (2) documenting its planning efforts, particularly early in the process, (3) concentrating its efforts on a few critical projects that significantly affected the census count, such as obtaining a complete and accurate address list, (4) presenting key implementation issues with decision milestones, and (5) identifying key performance measures for success. Capitalizing on experiences from past censuses. In a fiscal year 1993 conference report, the Congress stated that the bureau should direct its resources towards a more cost-effective census design that would produce more accurate results than those from the 1990 Census. 
Further, the Congress expected the bureau to focus on realistic alternative means of collecting data, such as the use of existing surveys, rolling sample surveys, or other vehicles, and expected cost considerations to be a substantial factor in evaluating the desirability of design alternatives. In March 1993 we testified that time available for fundamental census reform was slipping away and important decisions were needed by September 1993 to guide planning for 1995 field tests, shape budget and operational planning for the rest of the census cycle, and guide future discussions with interested parties. We noted that the bureau's strategy for identifying promising census designs and features was proving to be cumbersome and time consuming, and the bureau had progressed slowly in reducing the design alternatives for the next census down to a manageable number. Documenting early planning efforts. It is particularly important early in the planning process to provide a roadmap for further work. We found that the bureau did not document its 2000 Census planning until late in the planning phase. While the U.S. Census Bureau prepared a few pages to justify its annual budget requests for fiscal years 1991 through 1997, it did not provide a substantive document of its 2000 Census planning efforts until May 1995, and this plan was labeled a draft. Finally, the Congress mandated that the bureau issue a comprehensive and detailed plan for the 2000 Census within 30 days from enactment of the law. On July 12, 1997, the bureau issued its Report to the Congress, The Plan for Census 2000, along with its Census 2000 Operational Plan. Concentrating efforts on a few critical projects. While the bureau required many activities to count a U.S. population of 281 million residing in 117.3 million households, a few critical activities significantly affected the Census 2000 count, such as obtaining a complete and accurate address list. Although the bureau was aware of serious problems with its address list development process, it did not acknowledge the full impact of these problems until the first quarter of 1997. Based upon its work with the postal service database, the 1995 Census Test, and pilot testing at seven sites, the bureau had gained sufficient evidence that its existing process would result in an unacceptably inaccurate address list due to inconsistencies in the quality of the postal service database across areas; missing addresses for new construction; difficulties in identifying individual units in multiunit structures, such as apartment buildings; and the inability of local and tribal governments to provide usable address lists. In September 1997, the bureau acknowledged these problems and proposed changes. However, we believe that this action occurred too late in the planning process and that the issue was not given a high enough priority to benefit the 2000 Census enumeration. Presenting key implementation issues and decision milestones. The bureau discussed program areas as part of its annual budget requests for fiscal years 1991 through 1997, but the requests did not identify key implementation issues with decision milestones to target its planning activities. Decision milestones did not appear until July 1997, when the bureau issued its Census 2000 Operational Plan. Stakeholders such as the Congress are more likely to approve plans and funding requests when they are thoroughly documented and include key elements such as decision milestones. Identifying key performance measures. 
Census planning documents provided to us through fiscal year 1997 did not identify key performance measures. We believe that identifying key performance measures is critical to assessing success in the planning phase of the census and can provide quantitative targets for accomplishments by framework, activity, and individual projects. Such measures could include performance goals such as increasing mail response rates, reducing population overcount and undercount rates, and improving enumerator productivity rates. The lessons learned from planning the 2000 Census become even more crucial in planning for the next decennial census in 2010, which has current unadjusted life cycle cost estimates ranging from $10 billion to $12 billion. Thorough and comprehensive planning and development efforts are crucial to the ultimate efficiency and success of any large, long-term project, particularly one with the scope, magnitude, and deadlines of the U.S. decennial census. Initial investment in planning activities in areas such as technology and administrative infrastructure can yield significant gains in efficiency, effectiveness, and cost reduction in the later implementation phase. The success of the planning and development activities now occurring will be a major factor in determining whether this large investment will result in an accurate and efficient national census in 2010. Critical considerations are a comprehensive and prioritized plan of goals, objectives, and projects; milestones and performance measures; and documentation to support research, testing, and evaluation. A well-supported plan early in the process that includes these elements will be a major factor in ensuring that stakeholders have the information to make funding decisions. As the U.S. Census Bureau plans for the 2010 Census, we recommend that the Secretary of Commerce direct that the bureau provide comprehensive information backed by supporting documentation in its future funding requests for planning and development activities that would include, but not be limited to, such items as specific performance goals for the 2010 Census and how bureau efforts, procedures, and projects would contribute to those goals; detailed information on project feasibility, priorities, and potential risks; key implementation issues and decision milestones; and performance measures. In commenting on our report, the department agreed with our recommendation and stated that the bureau is expanding the documents justifying its budgetary requests. For example, the bureau cited a document that outlines planned information technology development and activities throughout the decennial cycle of the 2010 Census. The bureau also included a two-page document, Reengineering the 2010 Census, which presented three integrated components and other plans to improve upon the 2000 Census. In this regard, it is essential that, as we recommended, the bureau follow through with details and documentation to implement these plans, define and quantify performance measures against goals, and provide decision milestones for specific activities and projects. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issuance date. At that time, we will send copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs, the House Committee on Government Reform, and the House Subcommittee on Civil Service, Census, and Agency Organization. 
We will also send copies to the Director of the U.S. Census Bureau, the Secretary of Commerce, the Director of the Office of Management and Budget, the Secretary of the Treasury, and other interested parties. This report will also be available on GAO’s home page at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov, Patricia A. Dalton at (202) 512-6806 or daltonp@gao.gov, or Roger R. Stoltz, Assistant Director, at (202) 512-9408 or stoltzr@gao.gov. Key contributors to this report were Corinne P. Robertson, Robert N. Goldenkoff, and Ty B. Mitchell. The objectives of our review focused on the planning and development phase of the 2000 Census that we classified as covering fiscal years 1991 through 1997 and (1) the funding requested, received, and obligated with funding received and obligated by major planning category, (2) funding and other factors that affected planning efforts, and (3) lessons learned for the 2010 Census. To determine the amount of 2000 Census planning and development funding requested, received, and obligated, we obtained and analyzed annual decennial census budgets included in the President’s Budgets for fiscal years 1991 through 1997, budgets subsequently received after appropriation by the Congress, and amounts later obligated for the purchase of goods and services by the bureau against those budgets. We then obtained explanations from senior bureau officials for significant variances in these budgets and the effect on decennial planning and development. However, we did not assess the efficiency of the budgeting process and the validity, accuracy, and completeness of obligations against budgeted amounts received. To determine the funding received and obligated by major planning category for 2000 Census planning and development, we obtained and analyzed funding requested, received, and obligated by framework, activity, project, and object class and examined annual operational plans. However, our analysis was hampered by the bureau’s inconsistent use of categories that evolved from 1 activity of general planning in fiscal year 1991, 8 major study areas in fiscal years 1992 and 1993, and 8 to 15 broad categories called frameworks beginning in 1994. For internal management and reporting, the bureau further identified program efforts by activities and projects that have varied since fiscal year 1991. Additionally, the bureau expanded, contracted, or modified program names and descriptions making comparisons more difficult. We also obtained explanations from bureau officials for significant efforts and variances in its funding received and obligation of planning and development funding for the 2000 Census. However, we did not assess the merits of budgeting by program and the subsequent validity, accuracy, and completeness of obligations. To identify funding and other factors that affected planning efforts, we analyzed significant changes in funding requested, received, and obligated at the framework level; identified initiatives that were reduced, eliminated, or severely curtailed; discussed the effect of these areas with bureau officials; and evaluated bureau responses. We also reviewed various reports, testimony, and supporting documents prepared by the bureau, GAO, and others. However, we could not determine what effect, if any, that higher levels of funding might have had on 2000 Census operations. 
These factors are dependent upon actual implementation and the results of management decisions that may or may not have occurred. To provide lessons learned for the 2010 Census, we identified areas for improvement and obtained support from bureau, GAO, and congressional reports, testimony, interviews, and other documents. Our work was performed in Washington, D.C. and at U.S. Census Bureau headquarters in Suitland, Maryland between January and July 2001 when our review was suspended due to an inability to obtain access to certain budget records. After lengthy discussions with senior officials of the bureau, Department of Commerce, and OMB, and consultation with your staffs, this access issue was resolved in May 2002 and we completed our analysis in June 2002. Our work was done in accordance with U.S. generally accepted government auditing standards, except that we did not audit budget and other financial data provided by the U.S. Census Bureau. On October 16, 2002, the Department of Commerce provided written comments on a draft of this report, including two attachments. These comments are presented in the “Agency Comments and Our Evaluation” section of the report and are reprinted in appendix III, except for the second attachment, Potential Life-Cycle Savings for the 2010 Census, which is currently under revision and is outside the scope of our review. This appendix includes our analysis of 2000 Census funding requested, received, and obligated, and funding received and obligated by major planning category for fiscal years 1991 through 1997. Our analysis was hampered by the bureau’s inconsistent use of major planning categories that evolved over the period as follows: 1 activity of general planning in fiscal year 1991, 8 major study areas in fiscal years 1992 and 1993, and 8 to 15 broad categories called frameworks beginning in 1994. For internal management and reporting, the bureau further identified program efforts by activities and projects that have varied since fiscal year 1991. In addition, the bureau expanded, contracted, or modified program names and descriptions making comparisons more difficult. In March 1991 we testified that fundamental census reform was needed because escalating costs and the apparently increased undercount of the 1990 Census suggested that the current census methodology may have reached the limits of its effectiveness. Of three principles we presented, the last was that the Department of Commerce must be willing to invest sufficient funds early in the decade to achieve cost savings and census improvements in 2000. In fact, OMB deemed some of the Department of Commerce requests to fund early census reform as insufficient and doubled the department’s requested amounts to $1.5 million for fiscal year 1991 and $10.1 million for fiscal year 1992. These amounts were included in the President’s Budgets and the Congress concurred by authorizing the full amount requested. Census planning officials said that if OMB had not augmented the department’s request, testing of reform options for 2000 would have been constrained. For the first year of the 7-year 2000 Census planning and development phase, the fiscal year 1991 funding received was $1.5 million and the bureau obligated the entire amount. 
The funding contained only one category of general planning for the 2000 Census with funds to be used for: completion of detailed cost-benefit studies of alternative designs for conducting the decennial census; exploration of new technologies to improve the 2000 Census; establishment of research and development efforts for administrative methods and modeling and estimation techniques; and planning of field tests in fiscal year 1993 to include new census content, methods, technologies, and field structures. Because total amounts were small and involved only general planning, there were no significant variances. We noted that about 46 percent of the funding was obligated for personnel costs relating to 19 full-time equivalent (FTE) staff; 29 percent for services, including consultants; and the remaining 25 percent for space, supplies, travel, and other costs. Fiscal year 1992 funding received was $10.1 million and the bureau obligated $9.4 million against it. The funding now identified eight major study areas for the 2000 Census as indicated in table 2. For fiscal year 1992, the bureau experienced almost a six-fold increase in its funding received of $10.1 million over the $1.5 million for fiscal year 1991. About half of the fiscal year 1992 funding was obligated for personnel costs as a result of almost a five-fold increase in FTE staff from 19 in fiscal year 1991 to 111 in fiscal year 1992 to work on decennial planning and development issues. Services, including consultants, accounted for another quarter of the obligations with the remaining quarter for space, supplies, travel, and other costs. Technology options included a $1.7 million services contract to develop emerging data capture technology to compile census statistics. For fiscal year 1993, the Congress reduced the President's Budget request of $19.4 million for 2000 Census planning and development to $13.7 million, for a reduction of about 29 percent. As a result of this $5.7 million reduction, the bureau cut its funding of techniques for special areas and subpopulations by $2.2 million, or about 70 percent, and also eliminated activities to: establish contacts with state and local governments, budgeted for $1.6 million; assess customer needs, budgeted for $1.0 million; survey public motivation, budgeted for $.8 million; and prepare infrastructure for a 1995 Census Test, budgeted for $.5 million. In a fiscal year 1993 conference report, the Congress stated that the bureau should direct its resources towards a more cost-effective census design that will produce more accurate results than those from the 1990 Census. For example, the bureau's research in fiscal year 1992 indicated that reducing the number of questions on the census form is an important way to increase response, thereby increasing accuracy and reducing cost. Therefore, the Congress expected the bureau to focus on realistic alternative means of collecting data, such as the use of existing surveys, rolling sample surveys, or other vehicles, and expected cost considerations to be a substantial factor in evaluating the desirability of design alternatives. In March 1993 we testified that time available for fundamental census reform was slipping away and important decisions were needed by September 1993 to guide planning for 1995 field tests, shape budget and operational planning for the rest of the census cycle, and guide future discussions with interested parties. 
The bureau’s strategy for identifying promising census designs and features was proving to be cumbersome and time consuming, and the bureau had progressed slowly in reducing the design alternatives for the next census down to a manageable number. Fiscal year 1993 funding received was $13.7 million and the bureau obligated $13.5 million against it. The budget continued to identify eight major study areas for the 2000 Census as indicated in table 3. For fiscal year 1993, the bureau experienced a 36 percent increase in its funding received of $13.7 million over the $10.1 million for fiscal year 1992. About 53 percent of the fiscal year 1993 funding was obligated for personnel costs as a result of a 48 percent increase in FTE staff from 111 in fiscal year 1992 to 164 in fiscal year 1993 to work on decennial planning and development issues. Services, including consultants, accounted for about 11 percent of the funding with the remaining 36 percent used for space, supplies, travel, and other costs. Fiscal year 1993 was identified by the bureau as the beginning of a 3-year period to identify the most promising changes to be integrated in the 1995 Census Test. For fiscal year 1994, the Congress reduced the President’s Budget request of $23.1 million for 2000 Census planning and development to $18.7 million, for a reduction of about 19 percent. As a result of this $4.4 million reduction, the bureau eliminated decennial operational preparation for $2.5 million, and reduced funding for questionnaire design and cost modeling by $1.6 million or 70 percent. In May 1993 we testified that the U.S. Census Bureau had altered its decision-making approach and refocused its 2000 Census research and development efforts. Driven by its impending September 1993 deadline for deciding which designs to test in 1995 for the 2000 Census, the bureau recommended rejecting all 14 design alternatives that had formed the framework of its research program that was under study for a year. Instead, the bureau reverted to an earlier approach of concentrating favorable features into the design for application in the 2000 Census. A fiscal year 1994 House Appropriations Committee report cited our May 1993 testimony and stated that it was unacceptable for the bureau to conduct the 2000 Census under a process that followed the general plan used in the 1990 Census. A fiscal year 1994 conference report expressed concern that the U.S. Census Bureau had not adequately addressed cost and scope issues for the 2000 Census and expected the Department of Commerce and OMB to take a more active role in planning for the decennial census to ensure that data requirements for federal agencies and state and local government were considered in the planning effort. In October 1993 we testified that the U.S. Census Bureau’s research and development efforts had been slowed by its changing planning strategy and that the bureau still faced the difficult task of integrating its Test Design Recommendation proposals into a detailed implementation plan for the 1995 census test. We noted that the bureau’s plans to conduct research and evaluations for such promising proposals as the one-number census, sampling for nonresponse, and defining the content of the census were in a state of flux. Other important research and planning activities, such as improving the address list and using new automated techniques to convert respondent answers to machine-readable format, were behind schedule. 
Funding for research and test census preparation in fiscal years 1994 and 1995 was in doubt as evidenced by the budget cuts proposed by the House Appropriations Committee and the opinions expressed in its report accompanying the fiscal year 1994 appropriations bill. The bureau obligated the entire amount of its fiscal year 1994 funding received of $18.7 million. Funding originally contained 6 design areas for 2000 Census research and development, the 1995 Census test, and decennial operational preparation but was later revised to present funds received and obligated in 13 frameworks of effort as indicated in table 4. For fiscal year 1994, the bureau experienced a 36 percent increase in its funding received of $18.7 million over the $13.7 million for fiscal year 1993. About 44 percent of the fiscal year 1994 funding was obligated for personnel costs as a result of a 34 percent increase in FTE staff from 164 in fiscal year 1993 to 220 in fiscal year 1994 to work on decennial planning and development issues. Services, including consultants, accounted for another 13 percent of obligations with the remaining 43 percent for space, supplies, travel, and other costs. We noted that six frameworks received little or no funding and three frameworks accounted for 89 percent of the fiscal year 1994 funds received and obligated as follows: Framework 5 - Evaluation and development consumed $7.1 million or 38 percent of funding received and obligated for research and developmental work to support the 1995 census test. This included research on the use of matching keys beyond just a person's residence address to develop matching procedures that would allow the bureau to make use of person-based administrative records files that do not have a current residential address; research on various uses of sampling including technical and policy issues on conducting the entire census on a sample basis and conducting only the nonresponse follow-up portion of the census on a sample basis; and race and ethnicity studies including extensive consultation with stakeholders, focus group testing, and planning of field tests. Framework 3 - Test census and dress rehearsal consumed $5.5 million or 29 percent of funding received and obligated to increase 1995 Census Test activities from preliminary studies and planning to the full-scale preparatory level program. These included such activities as completion of questionnaire content determination, analysis of a database of population characteristics by geographic area to make selections of test sites, determination of evaluation program objectives for the test, and determination of objectives for and design of stakeholder consultation. Framework 11 - Automation/telecommunication support consumed $4.0 million or 21 percent of funding received and obligated for automated systems design and acquisition of data capture technology to upgrade the 1990 Census system (FACT90) to a 2000 Census system (DCS 2000). For fiscal year 1995, the Congress reduced the President's Budget request of $48.6 million for 2000 Census planning and development to $42.0 million for a reduction of about 14 percent. As a result of this $6.6 million reduction, the bureau eliminated $9.0 million for decennial operational preparation and $.8 million for 1996 testing while increasing funding for program development and other areas by $3.2 million. In January 1994 we testified that while we were encouraged by the U.S. 
Census Bureau’s recent focus on testing specific proposals to modify the census methodology, we believed that the bureau must aggressively plan for and carefully implement its research, testing, and evaluation programs. Further, the results of those efforts must be available to make fully informed and timely decisions and build needed consensus among key stakeholders and customers for changes in the 2000 Census. A fiscal year 1995 Senate Appropriations Committee report strongly recommended that the bureau adopt more cost-effective means of conducting the next census as the budgetary caps and strict employment ceilings adopted by the President and the Congress would not accommodate a repeat of the process used in the 1990 Census. Fiscal year 1995 funding received was $42.0 million and the bureau obligated $40.9 million against it. The number of frameworks increased to 15 as indicated in table 5. For fiscal year 1995, the bureau experienced a 125 percent increase in its funding received of $42.0 million over the $18.7 million for fiscal year 1994. About 51 percent of the fiscal year 1995 funding was obligated for personnel costs as a result of a 211 percent increase in FTE staff from 220 in fiscal year 1994 to 685 in fiscal year 1995 to work on decennial planning and development issues. Services, including consultants, accounted for about 7 percent of the obligations with the remaining 42 percent for space, supplies, travel, and other costs. We noted that eight frameworks received little or no funding and Framework 3 accounted for over 70 percent of fiscal year 1995 funds received and obligated. The main focus of Framework 3 was conducting the 1995 Census Test, in order to select by December 1995 the features to be used for the 2000 Census. According to census plans and our discussions with officials, the bureau focused on the following major areas: completing preparation for the 1995 Census Test, conducting the test, and beginning evaluations in order to select the features to be used for the 2000 Census, including a full-scale census test in four district office areas that would be the culmination of the research and development program; investigating, developing, testing, and evaluating components of a continuous measurement system as a replacement for the 2000 Census sample data questionnaire; developing, testing, and evaluating various matching keys for the automated and clerical matching and unduplicating systems developed under the direction of the matching research and specifications working group; conducting activities independent of the research and development program, that is, preparatory activities required to implement the 2000 Census regardless of the design, such as planning the address list update activities needed to supplement the Master Address File (MAF) for use in the 2000 Census and beginning initial planning of the field organization structure for the 2000 Census; and recommending the broad scope of content that should be included in the 2000 Census questionnaire based on consultation with both federal and nonfederal data users, and beginning planning for small special purpose tests to supplement or follow up on the 1995 Census Test. For fiscal year 1996, the Congress reduced the President’s Budget request of $60.1 million for 2000 Census planning and development to $51.3 million, for a reduction of about 15 percent. 
As a result of this $8.8 million reduction, the bureau reduced funding for field data collection and support systems by $9.9 million or 43 percent while increasing funding in other areas. In October 1995 we testified that the U.S. Census Bureau had decided to make fundamental changes to the traditional census design such as shortening census questionnaires, developing an accurate address list, and sampling households that fail to respond to questionnaires. However, we noted that successful implementation of these changes would require aggressive management by the bureau and that the window of opportunity for the Congress to provide guidance on these changes and applicable funding was closing. A fiscal year 1996 conference report continued to express concern about progress related to the next decennial census. It cautioned the bureau that the cost of the 2000 Census had to be kept in check and only through early planning and decision making could costs be controlled. The report further recognized that fiscal year 1996 was a critical year in planning for the decennial census, and that numerous decisions would be made and preparations taken that would have a significant bearing on the overall cost of conducting the census, as well as on the design selected. The bureau obligated the entire amount of its fiscal year 1996 funding received of $51.3 million. Beginning with fiscal year 1996, the number of frameworks was reduced to eight as indicated in table 6 below. For fiscal year 1996, the bureau experienced a 22 percent increase in its funding received of $51.3 million over the $42.0 million for fiscal year 1995. About 44 percent of the fiscal year 1996 funding was obligated for personnel costs as a result of a 5 percent decrease in FTE staff from 685 in fiscal year 1995 to 653 in fiscal year 1996 to work on decennial planning and development issues. Services, including consultants, accounted for about 13 percent of the obligations with the remaining 43 percent for space, supplies, travel, and other costs. Three frameworks accounted for over 60 percent of funding received and obligated, as follows. Framework 3 - Field data collection and support systems incurred costs of $13.3 million including $4.4 million to develop personnel and administrative systems for field office enumeration; $3.1 million for precensus day data collection activities; and $2.0 million for automation acquisition and support for field offices. Framework 2 - Data content and products incurred costs of $9.6 million including $4.4 million to develop and produce questionnaires and public use forms for the census including conduct of a National Content Test; $2.9 million for race and ethnicity testing of concepts and respondent understanding and wording of the race and ethnicity questions; and $1.6 million for continued work with federal and nonfederal data users in the content determination process to prepare for the congressional submission by April 1, 1997. Framework 6 - Testing, evaluations, and dress rehearsals incurred costs of $9.4 million including $3.3 million for an Integrated Coverage Measurement (ICM) special test; $2.6 million for research and development on sampling and sampling methods for the 2000 decennial count; and $2.1 million for 1995 Census Test coverage and evaluation. For fiscal year 1997, the Congress reduced the President’s Budget request of $105.9 million for 2000 Census planning and development to $86.4 million, for a reduction of about 18 percent. 
As a result of this $19.5 million reduction, the bureau reduced funding for marketing, communications, and partnerships by $14.4 million or 76 percent, and field data collection and support systems by $23.6 million or 53 percent, while increasing amounts in other areas by $18.5 million. A fiscal year 1996 House Appropriations Committee report expressed concern that the bureau appeared not to have developed options and alternative plans to address issues of accuracy and cost. In addition, sufficient progress had not been made on issues the committee had highlighted many times—the number of questions on the long form and reimbursement from other agencies for the inclusion of such questions to ensure that each question was important. The bureau obligated the entire amount of its fiscal year 1997 budget of $86.4 million. Planning continued in eight frameworks as indicated in table 7. For fiscal year 1997, the bureau experienced a 68 percent increase in its funding received of $86.4 million over the $51.3 million for fiscal year 1996. About 63 percent of the fiscal year 1997 funding was obligated for personnel costs as a result of a 36 percent increase in FTE staff from 653 in fiscal year 1996 to 891 in fiscal year 1997 to work on decennial planning and development issues. Services, including consultants, accounted for about 25 percent of the obligations with the remaining 12 percent for space, supplies, travel, and other costs. The bureau viewed fiscal year 1997 as pivotal, since this was the year when research and testing activities culminated in operational activities and marked the end of the planning and development phase of the 2000 Census. For the fiscal year, four frameworks incurred about 85 percent of funding received and obligated as follows. Framework 3 - Field data collection and support systems incurred $20.9 million for activities under precensus day operations and support systems, and postcensus day operations. Projects included: $4.1 million for geographic patterns including questionnaire delivery methodologies by area and corresponding automated control systems; $4.0 million for planning of data collection efforts including activities for truncation and/or the use of sampling for nonresponse follow-up and increased efforts to develop procedures for enumerating special populations such as the military, maritime, institutional, migrant, reservation, and those living in other than traditional housing units; $3.8 million for direction and control by 12 regional offices that would provide logistical support and direct enumeration efforts by local census offices; and $3.1 million for planning and developing personnel and administrative systems to support 2000 Census data collection and processing activities, such as types of positions, pay rates, personnel and payroll processes, and systems, space, and security requirements. Framework 5 - Automation/telecommunication support incurred $20.2 million for activities including evaluating proposals for the acquisition of automation equipment and related services, funding the development of prototype systems, and moving toward awarding contracts to implement such systems for the 2000 Census. Projects included setting up data capture systems and support to process census questionnaire responses and telecommunication systems required to provide nationwide toll-free 800 number services to answer respondent questions and to conduct interviews. 
Framework 6 - Testing, evaluation, and dress rehearsal incurred $19.8 million for the following activities: $3.7 million to begin gearing up for the 1998 dress rehearsal in order to prepare personnel to conduct the census testing efficiently and effectively; and $7.0 million to conduct activities for ICM special testing and American Indian Reservation (AIR) test census such as questionnaire delivery and mail return check-in operations, ICM computer-assisted personal visit interviews, computer and clerical matching, follow-up and after follow-up matching, and evaluation studies. Framework 2 - Data content and products incurred $12.3 million for activities related to the development of computer programs and systems for data tabulation and for the production of paper, machine-readable, and on-line data products. Projects included: $4.5 million to move from research in fiscal year 1996 to implementation in fiscal year 1997 of the Data Access and Dissemination System (DADs), including development of the requirements for Census 2000 tabulations from DADs, and development of computer programs and control systems that will format the processed Census 2000 data for use in DADs; and $2.2 million towards development of a redistricting program for Census 2000. The following are GAO’s comments on the letter dated October 16, 2002, from the Department of Commerce. 1. The objectives of our report did not include assessing the degree of success of the 2000 Census. 2. See “Agency Comments and Our Evaluation” section of this report.
GAO reviewed the funding of 2000 Census planning and development efforts and the impact it had on census operations. Total funding for the 2000 Census, referred to as the life cycle cost, covers a 13-year period from fiscal year 1991 through fiscal year 2003 and is expected to total $6.5 billion adjusted to year 2000 dollars. This amount was almost double the reported life cycle cost of the 1990 Census of $3.3 billion, also adjusted to year 2000 dollars. Considering these escalating costs, the experience of the U.S. Census Bureau in preparing for the 2000 Census offers valuable insights for the planning and development efforts now occurring for the 2010 Census. Thorough and comprehensive planning and development efforts are crucial to the ultimate efficiency and success of any large, long-term project, particularly one with the scope, magnitude, and deadlines of the U.S. decennial census. For fiscal years 1991 through 1997, $269 million was requested in the President's Budgets for 2000 Census planning and development and the program received funding of $224 million from Congress, or 83 percent of the amount requested. According to U.S. Census Bureau records, the bulk of the $86 million in funding received through the end of fiscal year 1995 was obligated for program development and evaluation methodologies, testing and dress rehearsals, and planning for the acquisition of automated data processing and telecommunications support. The U.S. Census Bureau was responsible for carrying out its mission within the budget provided and bureau management determined the specific areas in which available resources were invested. GAO could not determine what effect, if any, higher funding levels might have had on bureau operations, as this depends on actual implementation and the results of management decisions that may or may not have occurred. According to bureau officials, early planning and development efforts for the 2000 Census were adversely affected by lower funding than requested for fiscal years 1993 through 1997. They identified 10 areas where additional funding could have been beneficial. These included difficulties in retaining knowledgeable staff, scaled-back plans for testing and evaluating 1990 Census data, delays in implementing a planning database, and limited resources to update address databases. The bureau's experience in preparing for the 2000 Census underscores the importance of solid, upfront planning and adequate funding levels to carry out those plans.
You are an expert at summarizing long articles. Proceed to summarize the following text: The Social Security Act of 1935 authorized the SSA to establish a recordkeeping system to help manage the Social Security program, and resulted in the creation of the SSN. Through a process known as “enumeration,” unique numbers are created for every person as a work and retirement benefit record for the Social Security program. Today, SSNs are generally issued to most U.S. citizens and are also available to noncitizens lawfully admitted to the United States with permission to work. Lawfully admitted noncitizens may also qualify for a SSN for nonwork purposes when a federal, state, or local law requires a SSN to obtain a particular welfare benefit or service. SSA staff collect and verify information from such applicants regarding their age, identity, citizenship, and immigration status. Most of the agency’s enumeration workload involves U.S. citizens who generally receive SSNs via SSA’s birth registration process handled by hospitals. However, individuals seeking SSNs can also apply in person at any of SSA’s field locations, through the mail, or via the Internet. The uniqueness and broad applicability of the SSN have made it the identifier of choice for government agencies and private businesses, both for compliance with federal requirements and for the agencies’ and businesses’ own purposes. In addition, the boom in computer technology over the past decades has prompted private businesses and government agencies to rely on SSNs as a way to accumulate and identify information for their databases. As such, SSNs are often the identifier of choice among individuals seeking to create false identities. Law enforcement officials and others consider the proliferation of false identities to be one of the fastest growing crimes today. In 2002, the Federal Trade Commission received 380,103 consumer fraud and identity theft complaints, up from 139,007 in 2000. In 2002, consumers also reported losses from fraud of more than $343 million. In addition, identity crime accounts for over 80 percent of social security number misuse allegations according to the SSA. As we reported to you last year, federal, state, and county government agencies use SSNs. When these entities administer programs that deliver services and benefits to the public, they rely extensively on the SSNs of those receiving the benefits and services. Because SSNs are unique identifiers and do not change, the numbers provide a convenient and efficient means of managing records. They are also particularly useful for data sharing and data matching because agencies can use them to check or compare their information quickly and accurately with that from other agencies. In so doing, these agencies can better ensure that they pay benefits or provide services only to eligible individuals and can more readily recover delinquent debts individuals may owe. In addition to using SSNs to deliver services or benefits, agencies also use or share SSNs to conduct statistical research and program evaluations. Moreover, most of the government departments or agencies we surveyed use SSNs to varying extents to perform some of their responsibilities as employers, such as paying their employees and providing health and other insurance benefits. Many of the government agencies we surveyed in our work last year reported maintaining public records that contain SSNs. 
This is particularly true at the state and county level, where certain offices such as state professional licensing agencies and county recorders’ offices have traditionally been repositories for public records that may contain SSNs. These records chronicle the various life events and other activities of individuals as they interact with the government, such as birth certificates, professional licenses, and property title transfers. Generally, state law governs whether and under what circumstances these records are made available to the public, and these laws vary from state to state. The records may be made available for a number of reasons, including the presumption that citizens need key information to ensure that government is accountable to the people. Certain records maintained by federal, state, and county courts are also routinely made available to the public. In principle, these records are open to aid in preserving the integrity of the judicial process and to enhance public trust and confidence in that process. At the federal level, access to court documents generally has its grounding in common law and constitutional principles. In some cases, public access is also required by statute, as is the case for papers filed in a bankruptcy proceeding. As with federal courts, requirements regarding access to state and local court records may have a state common law or constitutional basis or may be based on state laws. Although public records have traditionally been housed in government offices and court buildings, to improve customer service, some state and local government entities are considering placing more public records on the Internet. Because such actions would create new opportunities for gathering SSNs from public records on a broad scale, we are beginning work for this Subcommittee to examine the extent to which SSNs in public records are already accessible via the Internet. In our current work, we found that some private sector entities also rely extensively on the SSN. Businesses often request an individual’s SSN in exchange for goods or services. For example, some businesses use the SSN as a key identifier to assess credit risk, track patient care among multiple providers, locate bankruptcy assets, and provide background checks on new employees. In some cases, businesses require individuals to submit their SSNs to comply with federal laws such as the tax code. Currently, there is no federal law that generally prohibits businesses from requiring a person’s SSN as a condition of providing goods and services. If an individual refuses to give his or her SSN to a company or organization, that company or organization can refuse to provide the goods or services. To build on our previous work examining certain private sector entities’ use of SSNs, we have focused our initial private sector work on information resellers and consumer reporting agencies (CRAs). Some of these entities have come to rely on the SSN as an identifier to accumulate information about individuals, which helps them determine the identity of an individual for purposes such as employment screening, credit information, and criminal histories. This is particularly true of entities, known as information resellers, who amass personal information, including SSNs. Information resellers often compile information from various public and private sources. 
These entities provide their products and services to a variety of customers, although the larger ones generally limit their services to customers that establish accounts with them, such as law firms and financial institutions. Other information resellers often make their information available through the Internet to persons paying a fee to access it. CRAs are also large private sector users of SSNs. These entities often rely on SSNs, as well as individuals’ names and addresses, to build and maintain credit histories. Businesses routinely report consumers’ financial transactions, such as charges, loans, and credit repayments to CRAs. CRAs use SSNs to determine consumers’ identities and ensure that incoming consumer account data is matched correctly with information already on file. Certain laws such as the Fair Credit Reporting Act, the Gramm-Leach-Bliley Act, and the Driver’s Privacy Protection Act have helped to limit the use of personal information, including SSNs, by information resellers and CRAs. These laws limit the disclosure of information by these entities to specific circumstances. In our discussions with some of the larger information resellers and CRAs, we were told that they take specific actions to adhere to these laws, such as establishing contracts with their clients specifying that the information obtained will be used only for accepted purposes under the law. The extensive public and private sector uses of SSNs and the availability of public records and other information, especially via the Internet, have allowed individuals’ personal information to be aggregated into multiple databases or centralized locations. In the course of our work, we have identified numerous examples where public and private databases have been compromised and personal data, including SSNs, stolen. In some instances, the display of SSNs in public records and easily accessible Web sites created opportunities for identity thieves. In other instances, databases not readily available to outsiders have had their security breached by employees with access to key information. For example, in our current work, we identified a case where two individuals obtained the names and SSNs of 325 high-ranking U.S. military officers from a public Web site, then used those names and identities to apply for instant credit at a leading computer company. Although criminals have not accessed all public and private databases, such cases illustrate that these databases are vulnerable to criminal misuse. Because SSA is the issuer and custodian of SSN data, SSA has a unique role in helping to prevent the proliferation of false identities. Following the events of September 11, 2001, SSA began taking steps to increase management attention on enumeration and formed a task force to address weaknesses in the enumeration process. As a result of this effort, SSA has developed major new initiatives to prevent the inappropriate assignment of SSNs to noncitizens. However, our preliminary findings to date identified some continued vulnerabilities in the enumeration process, including SSA’s process for issuing replacement Social Security cards and assigning SSNs to children under age one. SSA is also increasingly called upon by states to verify the identity of individuals seeking driver licenses. We found that fewer than half the states have used SSA’s service and the extent to which they regularly use the service varies widely. 
Factors such as costs, problems with system reliability, and state priorities have affected states’ use of SSA’s verification service. We also identified a key weakness in the service that exposes some states to inadvertently issuing licenses to individuals using the SSNs of deceased individuals. We plan to issue reports on these issues in September that will likely contain recommendations to improve SSA’s enumeration process and its SSN verification service. SSA has increased document verifications and developed new initiatives to prevent the inappropriate assignment of SSNs to noncitizens, who account for the bulk of all initial SSNs issued by SSA’s 1,333 field offices. Despite SSA’s progress, some weaknesses remain. SSA has increased document verifications by requiring independent verification of the documents and immigration status of all noncitizen applicants with the issuing agency—namely, DHS and the Department of State (State Department)—prior to issuing the SSN. However, many field office staff we interviewed are relying heavily on DHS’s verification service, while neglecting standard, in-house practices for visually inspecting and verifying identity documents. We also found that while SSA has made improvements to its automated system for assigning SSNs, the system is not designed to prevent the issuance of an SSN if field staff bypass essential verification steps. SSA also has begun requiring foreign students to show proof of their full-time enrollment, and a number of field office staff told us they may verify this information if the documentation appears suspect. However, SSA does not require this verification step, nor does the agency have access to a systematic means to independently verify students’ status. Consequently, SSNs for noncitizen students may still be improperly issued. SSA has also undertaken other new initiatives to shift the burden of processing noncitizen applications from its field offices. SSA recently piloted a specialized center in Brooklyn, New York, which focuses exclusively on enumeration and utilizes the expertise of DHS document examiners and SSA Office of Inspector General (OIG) investigators. However, the future of this pilot project and DHS’s participation has not yet been determined. Meanwhile, in late 2002, SSA began a phased implementation of a long-term process to issue SSNs to noncitizens at the point of entry into the United States, called “Enumeration at Entry” (EAE). EAE offers the advantage of using State Department and DHS expertise to authenticate information provided by applicants for subsequent transmission to SSA, which then issues the SSN. Currently, EAE is limited to immigrants age 18 and older who have the option of applying for an SSN at one of the 127 State Department posts worldwide that issue immigrant visas. SSA has experienced problems with obtaining clean records from both the State Department and DHS, but plans to continue expanding the program over time to include other noncitizen groups, such as students and temporary visitors. SSA also intends to evaluate the initial phase of EAE in conjunction with the State Department and DHS. While SSA has embarked on these new initiatives, it has not tightened controls in two key areas of its enumeration process that could be exploited by individuals seeking fraudulent SSNs. One area is the assignment of SSNs to children under age one. 
Prior work by SSA’s Inspector General identified the assignment of SSNs to children as an area prone to fraud because SSA did not independently verify the authenticity of various state birth certificates. Despite the training and guidance provided to field office employees, the OIG found that the quality of many counterfeit documents was often too good to detect simply by visual inspection. Last year, SSA revised its policies to require that field staff obtain independent third-party verification of the birth records for U.S. born individuals age one and older from the state or local bureau of vital statistics prior to issuing an SSN card. However, SSA left in place its policy for children under age one and continues to require only a visual inspection of documents, such as birth records. SSA’s policies relating to enumerating children under age one expose the agency to fraud. During our fieldwork, we found an example of a noncitizen who submitted a counterfeit birth certificate in support of an SSN application for a fictitious U.S. born child under age one. In this case, the SSA field office employee identified the counterfeit state birth certificate by comparing it with an authentic one. However, SSA staff acknowledged that if a counterfeit out-of-state birth certificate had been used, SSA would likely have issued the SSN because of staff unfamiliarity with the specific features of the numerous state birth certificates. Further, we were able to prove the ease with which individuals can obtain SSNs by exploiting SSA’s current processes. Working in an undercover capacity, our investigators were able to obtain two SSNs. By posing as parents of newborns, they obtained the first SSN by applying in person at an SSA field office using a counterfeit birth certificate and baptismal certificate. Using similar documents, our investigators obtained a second SSN by submitting all material via the mail. In both cases, SSA staff verified our counterfeit documents as being valid. SSA officials told us that they are reevaluating their policy for enumerating children under age one. However, they noted that parents often need an SSN for their child soon after birth for various reasons, such as for income tax purposes. They acknowledged that a challenge facing the agency is to strike a better balance between serving the needs of the public and ensuring SSN integrity. In addition to the assignment of SSNs to children under the age of one, SSA’s policy for replacing Social Security cards also increases the potential for misuse of SSNs. SSA’s policy allows individuals to obtain up to 52 replacement cards per year. Of the 18 million cards issued by SSA in fiscal year 2002, 12.4 million, or 69 percent, were replacement cards. More than 1 million of these cards were issued to noncitizens. While SSA requires noncitizens applying for a replacement card to provide the same identity and immigration information as if they were applying for an original SSN, SSA’s evidence requirements for citizens are much less stringent. Citizens applying for a replacement card need not prove their citizenship; they may use as proof of identity such documents as a driver’s license, passport, employee identification card, school identification card, church membership or confirmation record, life insurance policy, or health insurance card. 
The ability to obtain numerous replacement SSN cards with less documentation creates a condition for requestors to obtain SSNs for a wide range of illicit uses, including selling them to noncitizens. These cards can be sold to individuals seeking to hide or create a new identity, perhaps for the purpose of some illicit activity. SSA told us the agency is considering limiting the number of replacement cards with certain exceptions such as for name changes, administrative errors, and hardships. However, they cautioned that while support exists for this change within the agency, some advocacy groups oppose such a limit. Field staff we interviewed told us that despite their reservations regarding individuals seeking excessive numbers of replacement cards, they were required under SSA policy to issue the cards. Many of the field office staff and managers we spoke to acknowledged that the current policy weakens the integrity of SSA’s enumeration process. The events of September 11, 2001, focused attention on the importance of identifying people who use false identity information or documents, particularly in the driver licensing process. Driver licenses are a widely accepted form of identification that individuals frequently use to obtain services or benefits from federal and state agencies, open a bank account, request credit, board an airplane, and carry on other important activities of daily living. For this reason, driver licensing agencies are points at which individuals may attempt to fraudulently obtain a license using a false name, SSN, or other documents such as birth certificates to secure this key credential. Given that most states collect SSNs during the licensing process, SSA is uniquely positioned to help states verify the identity information provided by applicants. To this end, SSA has a verification service in place that allows state driver licensing agencies to verify the SSN, name, and date of birth of customers with SSA’s master file of SSN owners. States can transmit requests for SSN verification in two ways. One is by sending multiple requests together, called the “batch” method, to which SSA reports it generally responds within 48 hours. The other way is to send an individual request on-line, to which SSA responds immediately. Twenty-five states have used the batch or on-line method to verify SSNs with SSA and the extent to which they use the service on a regular basis varies. About three-fourths of the states that rely on SSA’s verification service used the on-line method or a combination of the on-line and batch method, while the remaining states used the batch method exclusively. Over the last several years, batch states estimated submitting over 84 million batch requests to SSA compared to 13 million requests submitted by on-line users. States’ use of SSA’s on-line service has increased steadily over the last several years. However, the extent of use has varied significantly, with 5 states submitting over 70 percent of all on-line verification requests and one state submitting about one-third of the total. Various factors, such as costs, problems with system reliability, and state priorities affect states’ decisions regarding use of SSA’s verification service. In addition to the per-transaction fees that SSA charges, states may incur additional costs to set up and use SSA’s service, including the cost for computer programming, equipment, staffing, training, and so forth. 
Moreover, states’ decisions about whether to use SSA’s service, or the extent to which to use it, are also driven by internal policies, priorities, and other concerns. For example, some of the states we visited have policies requiring their driver licensing agencies to verify all customers’ SSNs. Other states may limit their use of the on-line method to certain targeted populations, such as where fraud is suspected or for initial licenses, but not for renewals of in-state licenses. The nonverifying states we contacted expressed reluctance to use SSA’s verification service based on performance problems they had heard were encountered by other states. Some states cited concerns about frequent outages and slowness of the on-line system. Other states mentioned that the extra time to verify and resolve SSN problems could increase customer waiting times because a driver license would not be issued until verification was complete. Indeed, weaknesses in SSA’s design and management of its SSN on-line verification service have limited its usefulness and contributed to capacity and performance problems. SSA used an available infrastructure to set up the system and encountered capacity problems that continued and worsened after the pilot phase. The capacity problems inherent in the design of the on-line system have affected state use of SSA’s verification service. Officials in one state told us that they have been forced to scale back their use of the system because SSA told them that their volume of transactions was overloading the system. In addition, because of issues related to performance and reliability, no new states have used the service since the summer of 2002. At the time of our review, 10 states had signed agreements with SSA and were waiting to use the on-line system and 17 states had received funds from the Department of Transportation for the purpose of verifying SSNs with SSA. It is uncertain how many of the 17 states will ultimately opt to use SSA’s on-line service. However, even if they signed agreements with SSA today, they may not be able to use the service until the backlog of waiting states is addressed. More recently, SSA has made some necessary improvements to increase system capacity and to refocus its attention on the day-to-day management of the service. However, at the time of our review, the agency still had not established goals for the level of service it will provide to driver licensing agencies. In reviewing SSA’s verification service, we identified a key weakness that exposes some states to issuing licenses to applicants using the personal information of deceased individuals. Unlike the on-line service, the batch service does not match requests against SSA’s nationwide death records. As a result, the batch method will not identify and prevent the issuance of a license in cases where the SSN, name, and date of birth of a deceased individual are being used. SSA officials told us that they initially developed the batch method several years ago and they did not design the system to match SSNs against its death files. However, in developing the on-line system for state driver licensing agencies, a death match was built into the new process. At the time of our review, SSA acknowledged that it had not explicitly informed states about the limitation of the batch service. 
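To make the difference between the two verification paths concrete, the following sketch models them in simplified form. It is purely illustrative: the record layouts, function names, and data are hypothetical and do not represent SSA's actual systems or interfaces. It shows why a request built from the SSN, name, and date of birth of a deceased person passes a master-file match alone and is caught only when the death records are also checked, as in the on-line path.

```python
# Illustrative sketch only. Record layouts, names, and data are hypothetical;
# this is a model of the two verification paths described above, not SSA's system.
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass(frozen=True)
class SsnRecord:
    ssn: str
    name: str
    date_of_birth: str  # e.g., "1950-03-17"

def matches_master_file(request: SsnRecord, master_file: Dict[str, SsnRecord]) -> bool:
    # Check the submitted SSN, name, and date of birth against the master file of SSN owners.
    owner = master_file.get(request.ssn)
    return owner is not None and (owner.name, owner.date_of_birth) == (request.name, request.date_of_birth)

def verify_online(request: SsnRecord, master_file: Dict[str, SsnRecord], death_file: Set[str]) -> str:
    # On-line path: master-file match plus a check against nationwide death records.
    if not matches_master_file(request, master_file):
        return "NO MATCH"
    if request.ssn in death_file:
        return "MATCH - DECEASED"  # flags a request built on a deceased person's identity
    return "VERIFIED"

def verify_batch(requests: List[SsnRecord], master_file: Dict[str, SsnRecord]) -> List[str]:
    # Batch path as described in the report: no death-file check, so a request using a
    # deceased person's SSN, name, and date of birth still comes back verified.
    return ["VERIFIED" if matches_master_file(r, master_file) else "NO MATCH" for r in requests]

if __name__ == "__main__":
    deceased = SsnRecord("123-45-6789", "JANE DOE", "1950-03-17")
    master = {deceased.ssn: deceased}
    deaths = {deceased.ssn}
    print(verify_batch([deceased], master))         # ['VERIFIED'] -- the weakness
    print(verify_online(deceased, master, deaths))  # 'MATCH - DECEASED' -- the safeguard
```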
Our own analysis of one month of SSN transactions submitted to SSA by one state using the batch method identified at least 44 cases in which individuals used the SSN, name, and date of birth of persons listed as deceased in SSA’s records to obtain a license or an identification card. We forwarded this information to state investigators who quickly confirmed that licenses and identification cards had been issued in 41 cases and were continuing to investigate the others. To further assess states’ vulnerability in this area, our own investigators, working in an undercover capacity, were able to obtain licenses in two batch states using a counterfeit out-of-state license and other fraudulent documents and the SSNs of deceased persons. In both states, driver licensing employees accepted the documents we submitted as valid. Our investigators completed the transaction in one state and left with a new valid license. In the second state, the new permanent license arrived by mail within weeks. The ease with which they were able to obtain these licenses confirmed the vulnerability of states currently using the batch method as a means of SSN verification. Moreover, states that have used the batch method in prior years to clean up their records and verify the SSNs of millions of driver license holders may also have unwittingly left themselves open to identity theft and fraud. The use of SSNs by both public and private sector entities is likely to continue given that the SSN serves as the key identifier for most of these entities and there is currently no other widely accepted alternative. To help control such use, certain laws have helped to safeguard such personal information, including SSNs, by limiting disclosure of such information to specific purposes. To the extent that personal information is aggregated in public and private sector databases, it becomes vulnerable to misuse. In addition, to the extent that public record information becomes more available in an electronic format, it becomes more vulnerable to misuse. The ease of access the Internet affords could encourage individuals to engage in information gathering from public records on a broader scale than they could previously, when they had to visit a physical location and request or search for information on a case-by-case basis. SSA has made substantial progress in protecting the integrity of the SSN by requiring that the immigration and work status of every noncitizen applicant be verified before an SSN is issued. However, without further system improvements and assurance that field offices will comply fully with the new policies and procedures, this effort may be less effective than it could be. Further, as SSA closes off many avenues of unauthorized access to SSNs, perpetrators of fraud will likely shift their strategies to less protected areas. In particular, SSA’s policies for enumerating children and providing unlimited numbers of replacement cards may well invite such activity, unless they too are modified. State driver license agencies face a daunting task in ensuring that the identity information of those to whom they issue licenses is verified. States’ effectiveness in verifying individuals’ identities often depends on several factors, including the receipt of timely and accurate identity information from SSA. Unfortunately, design and management weaknesses associated with SSA’s verification service have limited its effectiveness. 
States that are unable to take full advantage of the service and others that are waiting for the opportunity to use it remain vulnerable to identity crimes. In addition, states that continue to rely primarily or partly on SSA’s batch verification service still risk issuing licenses to individuals using the SSNs and other identity information of deceased individuals. This remains a critical flaw in SSA’s service and in states’ efforts to strengthen the integrity of the driver license. Within the next several months, GAO is preparing to publish reports covering the work I have summarized; these reports will include recommendations aimed at ensuring the integrity of the SSN. We look forward to continuing to work with this Subcommittee on these important issues. I would be happy to respond to any questions you or other members of the Subcommittee may have. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, or Dan Bertoni, Assistant Director, Education, Workforce, and Income Security, at (202) 512-7215. Individuals making key contributions to this testimony include Andrew O’Connell, John Cooney, Tamara Cross, Paul DeSaulniers, Patrick DiBattista, Jason Holsclaw, George Ogilvie, George Scott, Jacquelyn Stewart, Robyn Stewart, and Tony Wysocki.
In 1936, the Social Security Administration (SSA) established the Social Security Number (SSN) to track worker's earnings for social security benefit purposes. However, the SSN is also used for a myriad of non-Social Security purposes. Today, the SSN is used, in part, as a verification tool for services such as child support collection, law enforcement enhancements, and issuing credit to individuals. Although these uses of SSNs are beneficial to the public, SSNs are also a key piece of information in creating false identities. Moreover, the aggregation of personal information, such as SSNs, in large corporate databases, as well as the public display of SSNs in various public records, may provide criminals the opportunity to commit identity crimes. SSA, the originator of the SSN, is responsible for ensuring SSN integrity and verifying the authenticity of identification documents used to obtain SSNs. Although Congress has passed a number of laws to protect an individual's privacy, the continued use and reliance on SSNs by private and public sector entities and the potential for misuse underscores the importance of identifying areas that can be strengthened. Accordingly, this testimony focuses on describing (1) public and private sector use and display of SSNs, and (2) SSA's role in preventing the proliferation of false identities. Public and some private sector entities rely extensively on SSNs. We reported last year that federal, state and county government agencies rely on the SSN to manage records, verify eligibility of benefit applicants, and collect outstanding debt. SSNs are also displayed on a number of public record documents that are routinely made available to the public. To improve customer service, some state and local government entities are considering placing more public records on the Internet. In addition, some private sector entities have come to rely on the SSN as an identifier, using it and other information to accumulate information about individuals. This is particularly true of entities that amass public and private data, including SSNs, for resale. Certain laws have helped to restrict the use of SSN and other information by these private sector entities to specific purposes. However, as a result of the increased use and availability of SSN information and other data, more and more personal information is being centralized into various corporate and public databases. Because SSNs are often the identifier of choice among individuals seeking to create false identities, to the extent that personal information is aggregated in public and private sector databases it becomes vulnerable to misuse. As the agency responsible for issuing SSNs and maintaining the earnings records for millions of SSN holders, SSA plays a unique role in helping to prevent the proliferation of false identities. Following the events of September 11, 2001, SSA formed a task force to address weaknesses in the enumeration process and developed major new initiatives to prevent the inappropriate assignment of SSNs to non-citizens, who represent the bulk of new SSNs issued by SSA's 1,333 field offices. SSA now requires field staff to verify the identity information and immigration status of all non-citizen applicants with the Department of Homeland Security (DHS), prior to issuing an SSN. However, other areas remain vulnerable and could be targeted by those seeking fraudulent SSNs. 
These include SSA's process for assigning Social Security numbers to children under age one and issuing replacement Social Security cards. SSA also provides a service to states to verify the SSNs of driver license applicants. Fewer than half the states have used SSA's service and the extent to which they regularly use it varies. Factors such as cost, problems with system reliability, and state priorities and policies affect states' use of SSA's service. We also identified a weakness in SSA's verification service that exposes some states to fraud by those using the SSNs of deceased persons.
You are an expert at summarizing long articles. Proceed to summarize the following text: The TacSat experiments and efforts to develop small, low-cost launch vehicles are part of a larger DOD initiative: Operationally Responsive Space (ORS). In general, ORS was created by DOD’s Office of Force Transformation (OFT) in response to the Secretary of Defense’s instruction to create a new business model for developing and employing space systems. Under ORS, DOD aims to rapidly deliver to the warfighter low-cost, short-term joint tactical capabilities defined by field commanders—capabilities that would complement and augment national space capabilities, not replace them. ORS would also serve as a test bed for the larger space program by providing a clear path for science and technology investments, enhancing institutional and individual knowledge, and providing increased access to space for testing critical research and development payloads. ORS is a considerable departure from the approach DOD has used over the past two decades to acquire the larger space systems that currently dominate its space portfolio. These global multipurpose systems, which have been designed for longer life and increased reliability, require years to develop and a significant investment of resources. The slow generational turnover—currently 15 to 25 years—does not allow for a planned rate of replacement for information technology hardware and software. In addition, the data captured through DOD’s larger space systems generally go through many levels of analysis before being relayed to the warfighter in theater. The TacSat experiments aim to quickly provide the warfighter with a capability that meets an identified need within available resources—time, funding, and technology. Limiting the TacSats’ scope allows DOD to trade off reliability and performance for speed, responsiveness, convenience, and customization. Once each TacSat satellite is launched, DOD plans to test its level of utility to the warfighter in theater. If military utility is established, according to a DOD official, DOD will assess the acquisition plan required to procure and launch numerous TacSats—forming constellations—to provide wider coverage over a specific theater. As a result, each satellite’s capability does not need to be as complex as that of DOD’s larger satellites, and the failure of a single satellite does not carry the heightened consequences it would if that satellite alone were providing total coverage. DOD currently has four TacSat experiments in different stages of development (see figure 1). According to Naval Research Laboratory officials, TacSat 2’s delay is primarily the result of overestimating the maturity of its main payload—an off-the-shelf imager that was being refurbished for space use. Officials also noted that the contracting process, which took longer than expected, used multiple and varied contracts awarded under standard federal and defense acquisition regulations. DOD is also using the TacSat experiments as a means for developing “bus” standards—the platform that provides power, attitude, temperature control, and other support to the satellite in space. Currently, DOD’s satellite buses are custom-made for each space system. According to DOD officials, establishing bus standards with modular or common components would facilitate building satellites—both small and large—more quickly and at a lower cost. 
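The modular bus idea can be illustrated with a brief sketch. The fragment below is a hypothetical illustration of the concept only; the interface, payload names, and power figures are invented and do not reflect any actual DOD bus standard. It shows how a common bus interface could let different payloads be integrated against a shared power budget without designing a custom platform for each satellite.

```python
# Hypothetical illustration of a standardized, modular bus interface.
# Names, methods, and numbers are invented; they do not reflect any DOD standard.
from abc import ABC, abstractmethod
from typing import List

class BusPayload(ABC):
    """Common interface any payload must implement to ride a standardized bus."""

    @abstractmethod
    def power_draw_watts(self) -> float: ...

    @abstractmethod
    def collect(self) -> bytes: ...

class Imager(BusPayload):
    def power_draw_watts(self) -> float:
        return 120.0

    def collect(self) -> bytes:
        return b"image-frame"

class CommsRelay(BusPayload):
    def power_draw_watts(self) -> float:
        return 60.0

    def collect(self) -> bytes:
        return b"relay-telemetry"

class StandardBus:
    """A bus supplying power, attitude control, and downlink to any compliant payload."""

    def __init__(self, power_budget_watts: float) -> None:
        self.power_budget_watts = power_budget_watts
        self.payloads: List[BusPayload] = []

    def integrate(self, payload: BusPayload) -> None:
        # Integration reduces to a budget check against the common interface,
        # rather than a redesign of the platform for each new payload.
        total = sum(p.power_draw_watts() for p in self.payloads) + payload.power_draw_watts()
        if total > self.power_budget_watts:
            raise ValueError("payload exceeds the bus power budget")
        self.payloads.append(payload)

# One bus design hosts either payload without modification.
bus = StandardBus(power_budget_watts=200.0)
bus.integrate(Imager())
bus.integrate(CommsRelay())
```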
To achieve one of the TacSat experiments’ goals—getting new capabilities to the warfighter sooner—DOD must secure a small, low-cost launch vehicle that is available on demand. Instead of waiting months or years to carry out a launch, DOD is looking to small launch vehicles that could be launched in days, if not hours, and whose cost would better match the small budgets of experiments. A 2003 Air Force study determined that DOD’s current class of launchers—the Evolved Expendable Launch Vehicle—would not be able to satisfy these requirements. DOD delivered the TacSat 1 satellite within cost and schedule targets. To develop the first TacSat, DOD effectively managed requirements, employed mature technologies, and built the satellite in the science and technology environment, all under the guidance of a leader who provided a clear vision and prompt funding for the project. DOD is also moving forward with developing additional TacSats; bus standards; and a small, low-cost launch vehicle available on demand. In May 2004, 12 months after TacSat 1 development began, the Naval Research Laboratory delivered the satellite to OFT at a cost of about $9.3 million, thereby meeting its targets of developing the satellite within 1 year and within an estimated budget of $8.5 million to $10 million. Once TacSat 1 is placed into orbit, it is expected to provide capabilities that will allow a tactical commander to directly task the satellite and receive data over DOD’s Secure Internet Protocol Router—a need identified by the warfighter. Before TacSat 1’s development began, OFT and the Naval Research Laboratory worked together to reach consensus on known warfighter requirements that would match the cost, schedule, and performance objectives for the satellite. Our past work has found that when requirements are matched with resources, goals can be met within estimated schedule and budget. To inform the requirements selection process, the Naval Research Laboratory used an informal systems engineering approach to assess relevant technologies and determine which could meet TacSat 1 mission objectives within budget and schedule. Once TacSat 1’s requirements were set, OFT did not change them. To meet its mission objectives, OFT sought a capability that would be “good enough” for the warfighter, given available resources—rather than attempting to provide a significant leap in capability. OFT and the Naval Research Laboratory agreed to limit TacSat 1’s operational life span to 1 year, which allowed the laboratory to build the satellite with lower radiation protection levels, less fuel capacity, and fewer backups than would have been necessary for a satellite designed to last 6 years or longer. The use of existing technologies for the satellite and the bus also helped to keep TacSat 1 on schedule and within cost. For example, hardware from unmanned aerial vehicles and other aircraft was modified for space flight to protect it in the space environment, and bus components were purchased from a satellite communications company. Using items on hand at the Naval Research Laboratory—such as the space ground link system transponder and select bus electronics—resulted in a savings of about $5 million. Using and modifying existing technologies provided the laboratory with better knowledge about the systems than if it had tried to develop the technologies from scratch. According to a laboratory official, the TacSat 1 experiment also achieved efficiencies by using the same software to both test the satellite in the laboratory and fly the satellite. 
Developing the TacSat within the science and technology environment also helped the experiment meet its goals. As we have stressed in our reports on systems development, the science and technology environment is more forgiving and less costly than the acquisition environment. For example, when engineers encountered a blown electronics part during TacSat 1’s full system testing, they were able to dismantle the satellite, identify the source of the problem, replace the damaged part, and rebuild the satellite—all within 2 weeks of the initial failure. According to the laboratory official, this problem would have taken months to repair in a major space acquisition program simply because there would have been stricter quality control measures, more people involved, and thus more sign-offs required at each step. Moreover, the contracting mechanism in place at the Naval Research Laboratory allows the laboratory to respond quickly to DOD requests. Specifically, the center used several existing engineering and technical support contracts that are competed, generally, at 5-year intervals, rather than competing a specific contract for TacSat 1. According to a number of DOD officials, the ultimate success of the TacSat 1 procurement was largely the result of the former OFT director, who provided the original impetus and obtained support for the experiment from high levels within DOD and the Congress; negotiated a customized mission assurance agreement with Air Force leaders to launch TacSat 1 from Vandenberg Air Force Base at a cost that was affordable given the experiment’s budget; empowered TacSat 1’s project manager at the Naval Research Laboratory to make appropriate trade-off decisions to deliver the satellite on time and within cost; and helped OFT staff develop an efficient work relationship with the Naval Research Laboratory team and provided the laboratory with prompt decisions. DOD is currently working on developing three additional TacSat experiments—along with bus standards—and a low-cost, on-demand launch vehicle. These efforts are generally in the early stages. DOD expects to launch TacSat 2—which began as an Air Force science and technology experiment and was altered to improve upon TacSat 1’s capability—in May 2007. TacSat 3, which will experiment with imaging sensors, is in the development phase. TacSat 4, which will experiment with friendly forces tracking and data communication services, is in the design phase. Table 1 shows the development cost and schedule estimates and the target launch date for each satellite. With TacSat 3, the Air Force began to formalize the process for evaluating and selecting potential capabilities for the TacSats, leveraging the experiences from the first two TacSats. The selection process, which currently takes 3 to 4 months, includes a presentation of capability gaps and shortfalls from the combatant commands and each branch of the military, and analyses of the suitability, feasibility, and transferability of the capabilities deemed the highest priority. According to DOD officials, this process allows the science and technology community to obtain early buy-in from the warfighter, thereby increasing the likelihood that requirements will remain stable and the satellite will have military utility. Obtaining warfighter involvement in this way represents a new approach for the TacSat series. See figure 3 for a more complete description of this evolving process. 
The Air Force has also begun to create plans for procuring TacSats for the warfighter should they prove to have military utility. The Air Force has developed a vision of creating TacSat reserves that could be deployed on demand, plans to establish a program office within its Space and Missile Systems Center, and plans to begin acquiring operational versions of successful TacSat concepts in 2010. DOD is also working to develop bus standards. Establishing bus standards would allow DOD to create a “plug and play” approach to building satellites—similar to the way personal computers are built. The service research labs, under the sponsorship of OFT, and the Space and Missile Systems Center are in the process of developing small bus standards, each using a different approach. The service labs expect to test some standardized components on the TacSat 3 bus and to test system standards by prototyping a TacSat 4 bus. The Space and Missile Systems Center is also proposing to develop three standardized bus models for different-weight satellites, one of which may be suitable for a TacSat. The service labs expect to transition bus standards to the Space and Missile Systems Center in fiscal year 2008, at which time the center will select a final version for procurement for future TacSats. Both DOD and private industry are working to develop small, low-cost, on-demand launch vehicles. DOD’s Defense Advanced Research Projects Agency (DARPA), along with the Air Force, established FALCON, a joint technology development program to accelerate efforts to develop a launch vehicle that meets these objectives. Through FALCON, DARPA expects to develop a vehicle that can send 1,000 pounds to low-earth orbit for less than $5 million with an operational cost basis of 20 flights per year for 10 years. FALCON is expected to flight-test hypersonic technologies and be capable of launching small satellites such as TacSats. DARPA is currently pursuing two candidates for its FALCON launch vehicle—AirLaunch, a company that expects to launch rockets that have been ejected from the back of a C-17 cargo airplane, and SpaceX, whose two-stage launch vehicle will include the second U.S.-made rocket booster engine to be developed and flown in more than 25 years, according to the company’s founder. DARPA could transition the AirLaunch concept to the Air Force after its demonstration launch in 2008. TacSat 1 is contracted to launch for about $7 million on SpaceX’s vehicle. In addition, in 2005, the Air Force began pursuing a hybrid launch vehicle to support tactically and conventionally deployed satellites. The project is known as Affordable Responsive Spacelift, or ARES, and the Air Force has obtained internal approval to build a small-scale demonstrator that would carry satellites about two to five times larger than TacSats. DOD has several challenges to overcome in pursuing a responsive tactical capability for the warfighter. Although DOD and others are working to develop small, low-cost launch vehicles for placing satellites like the TacSats into space, such a vehicle has yet to be developed, and TacSat 1 has waited nearly 2 years since its completion to be launched. Transferring knowledge from the science and technology community to the acquisition community is also a concern, given that these two communities have not collaborated well in the past. 
Further, it may be difficult to secure funding for future TacSat science and technology projects since DOD allocates the majority of its research and development money to acquisition programs. Finally, there is no departmentwide vision or strategy for implementing this new capability, and the recent loss of leadership makes it uncertain to what extent efforts to develop low-cost, responsive tactical capabilities such as TacSats will continue to be pursued. While DOD has delivered TacSat 1 on time and within budget, the satellite is not yet operational because it lacks a reliable low-cost—under $10 million—small launch vehicle to place it in orbit. TacSat 1’s original launch date was in 2004 on SpaceX’s first flight of its low-cost small launch vehicle. However, because of technical difficulties with the launch vehicle and launch facility scheduling conflicts, the TacSat 1 launch has been delayed 2 years and more than $2 million has been added to the total mission costs. SpaceX now plans to use a different small satellite for its first launch. Placing satellites in orbit at a low cost has been a formidable task for DOD for more than two decades because of elusive economies of scale. There is a strong causal relationship between satellite capabilities and launch lift requirements. As capabilities and operational life are added, satellites tend to become heavier, requiring a launch vehicle that can carry a heavier payload. With longer-lived satellites, fewer launches are needed, making per unit launch costs high. In addition, the high cost of a large launch vehicle can only be justified with an expensive, long-lived multimission satellite. Ultimately, the high cost of producing a complex satellite has created a low tolerance for risk in launching the satellite and a “one shot to get it right” mentality. Over the past 10 years, DOD and industry have attempted to develop a low-cost launch vehicle. Three launch vehicles in DOD’s inventory—the Pegasus, Taurus, and, to some extent, the Minotaur—were designed to provide space users with a low-cost means of quickly launching small payloads into low-earth orbit. DOD expected that relatively high launch rates, from both commercial and government use, would keep costs down, but the market for these launch vehicles did not materialize. For example, since its introduction in 1990, Pegasus has been launched only 36 times, an average of 3 launches per year; Taurus has been launched only 7 times since it was introduced in 1994. The average cost of these launch vehicles is $16 million to $33 million. To provide another avenue for launching small satellites, the Air Force has proposed refurbishing part of its fleet of decommissioned intercontinental ballistic missiles—450 of which have been dismantled. The cost of retrofitting the missiles and preparing them for launch is about $18 million to $23 million. However, one Air Force official questioned whether these vehicles are too large for current TacSats. Some new developers in the space industry are cautiously optimistic about the small satellite market. For example, SpaceX signed seven contracts to launch various small satellites, including TacSat 1. Despite this optimism, SpaceX’s first launch of its new vehicle has yet to occur—in part because it lacks a suitable launch facility. The launch facilities located in the United States cannot readily accommodate quick-response vehicles. 
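The launch cost and launch rate figures cited above can be made concrete with a little arithmetic. The short Python sketch below uses only numbers quoted in this report (FALCON's goal of 1,000 pounds to low-earth orbit for less than $5 million, Pegasus's 36 launches since 1990, Taurus's 7 launches since 1994, and per-launch costs of $16 million to $33 million); the reference year and variable names are assumptions made for illustration, not figures from the report.

```python
# Illustrative arithmetic using figures cited in this report. The reference
# year (2006, the end of the review period) is an assumption for the sketch.
REFERENCE_YEAR = 2006

def avg_launches_per_year(total_launches, intro_year, ref_year=REFERENCE_YEAR):
    """Average annual launch rate since a vehicle's introduction."""
    return total_launches / (ref_year - intro_year)

# Historical small launchers cited in the report. (The report cites roughly
# 3 launches per year for Pegasus; the exact average depends on the period used.)
pegasus_rate = avg_launches_per_year(36, 1990)
taurus_rate = avg_launches_per_year(7, 1994)

existing_cost_range = (16e6, 33e6)   # $16M-$33M per launch for these vehicles
falcon_cost_target = 5e6             # FALCON goal: <$5M for 1,000 lb to LEO
falcon_cost_per_pound = falcon_cost_target / 1000   # under $5,000 per pound

print(f"Pegasus: ~{pegasus_rate:.1f} launches/year; Taurus: ~{taurus_rate:.1f} launches/year")
print(f"FALCON target: <${falcon_cost_per_pound:,.0f} per pound to LEO, versus "
      f"${existing_cost_range[0]/1e6:.0f}M-${existing_cost_range[1]/1e6:.0f}M per existing small launch")
```

The comparison simply restates the report's point: at a handful of launches per year and $16 million or more per flight, the existing small launchers never approached the cost goals now being set for responsive tactical launch.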
Vandenberg Air Force Base—one of two major launch sites in the United States—has lengthy and detailed scheduling processes and strict safety measures for preparing for and executing a launch, making it difficult to launch a small satellite within a tight time frame and at a low cost. SpaceX’s launch of TacSat 1 at Vandenberg was put on hold because of the potential risks it posed to a billion-dollar satellite that was waiting to be launched from a nearby pad. In addition, the Air Force licensed the use of another nearby pad at Vandenberg to a contractor for larger-scale launches. Given the proximity of the launch pads, SpaceX’s insurance premium increased 10-fold, from about $50,000 to as much as $500,000, which added $2.3 million to TacSat 1’s total mission costs. Because of these delays, SpaceX decided to carry a different experimental satellite on its first launch and to use a launch facility on Kwajalein Atoll, in the Pacific Ocean. The potential effect of changes—such as increased premiums or the need to transport satellites to distant locations—on efforts to keep costs low and deliver capabilities to the warfighter sooner is unknown. The Air Force is beginning to examine ways to better accommodate a new generation of quick-response vehicles. For example, Air Force officials are examining the feasibility of establishing a location on Vandenberg specifically for these vehicles that is separate from the larger launch vehicle pads. Officials are also assessing the suitability of other locations, such as Kodiak Island, for quickly launching small satellites. To achieve a low-cost, on-demand tactical capability for the warfighter, the TacSat experiments will need to be transitioned into the acquisition community. We have previously reported that DOD’s acquisition community has been challenged to maximize the amount of knowledge transferred from the science and technology community, and that DOD’s science and technology and acquisition organizations need to work more effectively together to achieve desired outcomes. Many of the space programs we reviewed over the past several decades have incurred unanticipated cost and schedule increases because they began without knowing whether technologies could work as intended and invariably found themselves addressing technical problems in a more costly environment. Although DOD recently developed a space science and technology strategy to better ensure that labs’ space technology efforts transition to the acquisition community, the acquisition community continues to question whether labs adequately understand acquisition needs in terms of capabilities and time frames. As a result, the acquisition community would rather use its own contractors to maintain control over technology development. According to DOD officials, action has been taken to improve the level of collaboration and coordination on the TacSat experiments. Officials from DOD laboratories involved in TacSats and acquisition communities agree that they are working better together on the experiments than they have on past space efforts. However, in pursuing a low-cost, on-demand tactical capability, the science and technology and acquisition communities have moved forward on somewhat separate tracks, and it is unclear to what extent the work and knowledge gained by the labs will be leveraged when the TacSat experiments are transferred to the acquisition community. 
For example, the Air Force and Navy labs are working to develop bus standards for the TacSat experiments that are scheduled to be transitioned to the Space and Missile Systems Center, the Air Force’s acquisition arm, in fiscal year 2008. Yet, the Space and Missile Systems Center, working with the Aerospace Corporation, has proposed three different options for standardizing the bus. While two of the options are generally larger—and are intended for larger space assets—one of the proposed designs may be suitable for TacSats, although it will likely be costlier than a lab-generated counterpart. In addition, our past work has shown that DOD’s space programs—as well as other large DOD programs—have been unable to adequately define requirements and keep them stable, and seldom achieve a match between resources and requirements at the start of the acquisition. One factor that contributes to poorly defined and unstable requirements is that space acquisition programs have historically attempted to achieve full capability in a single step and serve a broad base of users, regardless of the design challenge or the maturity of technologies. Given this track record, some DOD officials expressed concern over Space and Missile Systems Center’s ability to adopt the TacSat approach of delivering capabilities that are good enough to meet a warfighter need within cost and schedule constraints. Air Force officials identified the center’s organizational culture of risk avoidance and the acquisition process as two of the most significant barriers to developing and deploying space systems quickly. TacSats 1 and 2 have been fully funded within DOD, and TacSats 3 and 4 were recently funded. However, funding is uncertain for TacSats beyond 3 and 4. While the Congress added funding to DOD’s 2006 budget to support TacSat efforts, such as developing bus standards, DOD did not request such funding. According to a DOD official, there would not be an effort to develop bus standards if funding had not come from the Congress. Historically, DOD’s research and development budget has been heavily weighted to system acquisitions—80 percent of this funding goes to weapon system programs, compared with 20 percent going to science and technology. In addition, science and technology funding is spread over thousands of projects, while funding for weapon system programs is spread over considerably fewer, larger programs. This funding distribution can encourage financing technology development in an acquisition program. However, as we have previously reported, developing technologies within an acquisition program typically leads to cost and schedule increases—further robbing the science and technology community and other acquisition programs of investment dollars. DOD currently has no departmentwide strategy for providing a responsive tactical capability for the warfighter. Without such a strategy, it is unknown whether and to what degree there may be gaps or overlaps in efforts. DOD efforts to develop low-cost satellite and launch capabilities are moving forward under multiple offices at different levels (see table 2). Since these efforts are occurring simultaneously, it is unclear how and if they will be used to inform one another. Moreover, there are different visions for the roles of low-cost, responsive satellites and launch vehicles in DOD’s overall space portfolio. For example, one Air Force official stated his office is looking for direction from the Congress on how to move forward rather than from somewhere within DOD. 
Further, when interviewed, other Air Force officials were not in agreement over how the Air Force’s vision for using TacSats fits in with OFT’s proposed use of this capability for DOD. In addition to the lack of a DOD-wide strategy, the recent departure of key personnel may have created a gap in leadership, making it uncertain to what extent efforts to develop tactical capabilities such as TacSats will be pursued. As we reported in November 2005, program success hinges on whether leaders can make strategic investment decisions and provide programs with the direction or vision for realizing goals and alternative ways of meeting those goals. One official involved in developing the overall architecture described the pursuit of these capabilities as a “grassroots effort,” underscoring the importance of having enthusiastic individuals involved in moving it forward. According to a number of DOD officials, the former OFT director was widely respected within and outside the agency and served as a catalyst for transformation across DOD, and was credited with championing and pursuing innovative concepts that could sustain and broaden military advantage. With the departure of the OFT director and other key advocates of the TacSat concept, service lab officials told us they are concerned about the fate of the TacSat experiments. DOD officials we spoke with acknowledged that there is no agreement on who should ultimately be responsible for deciding the direction of the TacSat experiments and other efforts to develop low-cost responsive tactical capabilities for the warfighter. DOD’s experiences developing a tactical capability for the warfighter through TacSats may be used to inform the way major space systems are acquired. Specifically, DOD’s process for developing TacSat 1 reflects best practices that larger space system programs could employ to achieve better acquisition outcomes. In addition, some DOD officials believe that these efforts—focusing on delivering capabilities to the warfighter through TacSats and small, low-cost launch vehicles—could lead to long-term benefits, including providing opportunities for major space systems to test new technologies, enhancing the skills of DOD’s space workforce, and broadening the space industrial base. Our past work has shown that commercial best practices—such as managing requirements, using mature technologies, and developing technology within the science and technology community—contribute to successful development outcomes. TacSat 1 confirms that applying these practices can enable projects to meet cost and schedule targets. While TacSat 1, as a small experimental satellite with only a few requirements, is much less complex than a major space system, we have reported that commercial best practices are applicable to major space system acquisitions and recommended that DOD implement them for such acquisitions. Despite our recommendation, DOD’s major space system acquisitions have yet to consistently apply these best practices. Manage requirements. DOD’s major space acquisition programs have typically not achieved a match between requirements and resources (technology, time, and money) at program start. Historically, these programs have attempted to satisfy all requirements in a single step, regardless of the design challenge or the maturity of technologies needed to achieve the full capability. 
As a result, these programs’ requirements have tended to be unstable—that is, requirements were changed, added, or both—which has led to the programs not meeting their performance, cost, and schedule objectives. We have found that when resources and requirements are matched before individual programs are started, programs are more likely to meet their objectives. One way to achieve this is through an evolutionary development approach, that is, pursue incremental increases in capability versus significant leaps. Use mature technologies. DOD’s major space acquisition programs typically begin product development before critical technologies are sufficiently matured, forcing the program to mature technologies after product development has begun. Our reviews of DOD and commercial technology development cases indicate that demonstrating a high level of maturity before new technologies are incorporated into product development puts those programs in a better position to succeed. Develop technology within the science and technology environment. DOD’s space acquisition programs tend to take on technology development concurrently with product development, increasing the risk that significant problems will be discovered late in development and that more time, money, and effort will be needed to fix these problems. Our reviews have shown that developing technologies separate from product development greatly minimizes this risk. DOD officials and industry representatives we spoke with also noted that some long-term benefits could result from focusing on delivering capabilities to the warfighter quickly. First, small, low-cost, responsive satellites like the TacSats could augment major space systems—provided there is a means to launch the satellites. Because TacSats do not require significant investment and are not critical to multiple missions, the consequence of failure of a TacSat is low. In contrast, major space systems typically are large, complex, and multimission, and take many years to build and deliver. If a major space satellite fails, there are significant cost and schedule consequences. Ultimately, the already long wait time for the warfighter to receive improved capabilities is extended. Second, developing small, low-cost launch vehicles could provide an avenue for testing new technologies in space. According to DOD officials, less than 20 percent of DOD’s space research and development payloads make it into space, even while relying heavily on the National Aeronautics and Space Administration’s Space Shuttle, which was most recently grounded for 2 ½ years. We recently reported that DOD’s Space Test Program, which is designed to help the science and technology community find opportunities to test in space relatively cost-effectively, has only been able to launch an average of seven experiments annually in the past 4 years. According to industry representatives and DOD officials, efforts to develop a small, low-cost launch vehicle could improve the acquisition process because testing technologies in an operational environment could lower the risk for program managers by providing mature technologies that could be integrated into their acquisition programs. Third, giving space professionals the opportunity to manage small-scale projects like TacSats from start to finish may better prepare them for managing larger, more complex space system acquisitions in the future. 
According to Navy and Air Force lab officials, managing the TacSat experiments has provided hands-on experience with the experiment from start to finish, unlike the experience provided to program managers of large systems at the Air Force Space and Missile Systems Center. Finally, building low-cost, responsive satellites and launch vehicles could create opportunities for small, innovative companies to compete for DOD contracts and thereby increase competition and broaden the space industrial base. In April 2005, over 50 small companies sent representatives to the Third Responsive Space Conference, an effort hosted by a small private launch company. An industry representative stated that a number of small companies are excited about developing TacSats and small, low-cost launch vehicles and the potential to garner future DOD contracts, but he cautioned that it would be important to maintain a steady flow of work in order to keep staff employed and preserve in-house knowledge. Other industry representatives told Air Force officials that they are receiving mixed signals from the government regarding its commitment to these efforts—there has been a lot of talk about them, but relatively little funding. In addition, another industry representative stated that requirements must be contained; otherwise, costs will increase and eventually squeeze small companies back out of the business. For more than two decades, DOD has invested heavily in space assets to provide the warfighter with critical information needed to successfully conduct military operations. Despite this investment, DOD has been challenged to deliver its major space acquisitions quickly and within estimated costs. TacSat 1—an experimental satellite—has shown that by matching user requirements with available resources, using mature technologies, and developing technologies separate from product development, new tactical capabilities can be delivered quickly and at a low cost. By establishing a capabilities selection process, the TacSat initiative has also helped to ensure that future TacSats will address high- priority warfighter needs. At the same time, the TacSats may demonstrate an alternative approach to delivering capabilities sooner—that is, using an incremental approach to providing capabilities, rather than attempting to achieve the quantum leap in capability often pursued by large space systems, which leads to late deliveries, cost increases, and a high consequence of failure. By not optimizing its investment in TacSat and small launch efforts, DOD may fail to capitalize on a valuable opportunity to improve its delivery of space capabilities. As long as disparate entities within DOD continue moving forward without a coherent vision and sustained leadership for delivering tactical capabilities, DOD will be challenged to integrate these efforts into its broader national security strategy. To help ensure that low-cost tactical capabilities continue to be developed and are delivered to the warfighter quickly, we recommend that the Secretary of Defense assign accountability for developing and implementing a departmentwide strategy for pursuing low-cost, responsive tactical capabilities—both satellite and launch—for the warfighter, and identify corresponding funding. We provided a draft of this report to DOD for review and comment. DOD concurred with our recommendation and provided technical comments, which we incorporated where appropriate. DOD’s letter is reprinted as appendix II. 
We plan to provide copies of this report to the Secretary of Defense, the Secretary of the Air Force, and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to the report are Arthur Gallegos, Maricela Cherveny, Jean Harker, Leslie Kaas Pollock, Noah B. Bleicher, and Karen Sloan. To assess the outcomes to date from the TacSat experiments and efforts to develop small, low-cost launch vehicles, we interviewed Department of Defense (DOD) officials in the Office of Force Transformation, Washington, D.C.; Air Force Space Command, Peterson Air Force Base, Colorado; Space and Missile Systems Center, Los Angeles Air Force Base, California; Air Force Research Laboratory, Kirtland Air Force Base, New Mexico, and Wright-Patterson Air Force Base, Ohio; U.S. Naval Research Laboratory, Washington, D.C.; and the Defense Advanced Research Projects Agency, Virginia, via written questions and responses. We also analyzed documents obtained from these officials. In addition, we interviewed industry representatives involved in developing large space systems and small commercial launch vehicles. To understand the challenges to DOD’s efforts and to determine whether DOD’s experiences with TacSats and small, low-cost launch vehicles could inform major space system acquisitions, we analyzed a wide body of GAO and DOD studies that discuss acquisition problems and associated challenges, including our work on best practices in weapon system development that we have conducted over the past decade. In addition to having discussions with officials at the Office of Force Transformation, the Air Force Space Command, the Space and Missile Systems Center, and the Air Force and Navy research labs, we spoke with officials from the National Security Space Office, Virginia, and the Force Structure, Resources, and Assessment Directorate of the Joint Chiefs of Staff, Washington, D.C. We conducted our review from June 2005 to March 2006 in accordance with generally accepted government auditing standards.
For more than two decades, the Department of Defense (DOD) has invested heavily in space assets to provide the warfighter with mission-critical information. Despite these investments, DOD commanders have reported shortfalls in space capabilities. To provide tactical capabilities to the warfighter sooner, DOD recently began developing TacSats--a series of small satellites intended to be built within a limited time frame and budget--and pursuing options for small, low-cost vehicles for launching small satellites. GAO was asked to (1) examine the outcomes to date of DOD's TacSat and small, low-cost launch vehicle efforts, (2) identify the challenges in pursuing these efforts, and (3) determine whether experiences with these efforts could inform DOD's major space system acquisitions. Through effective management of requirements and technologies and strong leadership, DOD was able to deliver the first TacSat satellite in 12 months and for less than $10 million. The Office of Force Transformation, TacSat 1's sponsor, set requirements early in the satellite's development process and kept them stable. DOD modified existing technologies for use in space, significantly reducing the likelihood of encountering unforeseen problems that could result in costly design changes. The satellite was also built within DOD's science and technology environment, which enabled service laboratory scientists to address problems quickly, inexpensively, and innovatively. The vision and support provided by leadership were also key to achieving the successful delivery of TacSat 1. DOD has also made progress in developing three additional TacSats and is working toward developing a low-cost launch vehicle available on demand. Despite this achievement, DOD faces several challenges in providing tactical capabilities to the warfighter sooner. First, DOD has yet to develop a low-cost, small launch vehicle available to quickly put tactical satellites, including TacSat 1, into orbit. Second, limited collaboration between the science and technology and the acquisition communities--as well as the acquisition community's tendency to expand requirements after program start--could impede efforts to quickly procure tactical capabilities. Securing funding for future TacSat experiments may also prove difficult because they are not part of an acquisition program. Finally, DOD lacks a departmentwide strategy for implementing these efforts, and because key advocates of the experiments have left DOD, it is unclear how well they will be supported in the future. Regardless of these challenges, DOD's experiences with the TacSat experiments thus far could inform its major space system acquisitions. DOD's approach to developing the TacSats--matching requirements to available resources, using proven technologies, and separating technology development from product development--reflects best commercial practices that lead to quicker delivery with less risk. According to some DOD officials, the TacSats and small, low-cost launch vehicles--once they are developed--could also provide an avenue for large space system acquisitions to prove out technologies in the space environment, something DOD has avoided because of the high cost of launching such experiments. These officials also believe that giving space professionals the opportunity to manage small-scale projects like TacSats may better prepare them for managing larger, more complex space system acquisitions. 
Finally, these officials noted that building small-scale satellite systems and launch vehicles could create opportunities for small, innovative companies to compete for DOD contracts and thereby broaden the space industrial base.
You are an expert at summarizing long articles. Proceed to summarize the following text: In 1971, the Atomic Energy Commission, NRC’s predecessor, promulgated the first regulations for fire protection at commercial nuclear power units in the United States. These regulations––referred to as General Design Criterion 3––provided basic design requirements and broad performance objectives for fire protection, but lacked implementation guidance or assessment criteria. As such, NRC generally deemed a unit’s fire protection program to be adequate if it complied with standards set by the National Fire Protection Association (NFPA)––an international organization that promotes fire prevention and safety––and received an acceptable rating from a major fire insurance company. However, at that time the fire safety requirements for commercial nuclear power units were similar to those for conventional, fossil-fueled power units. NRC and nuclear industry officials did not fully perceive that fires could threaten a nuclear unit’s ability to safely shut down until 1975, when a candle that a worker at Browns Ferry nuclear unit 1 was using to test for air leaks in the reactor building ignited electrical cables. The resulting fire burned for 7 hours and damaged more than 1,600 electrical cables, more than 600 of which were important to unit safety. Nuclear unit workers eventually used water to extinguish the fire, contrary to the existing understanding of how to put out an electrical fire. The fire damaged electrical power, control systems, and instrumentation cables and impaired cooling systems for the reactor. During the fire, operators could not monitor the unit normally. NRC’s investigation of the Browns Ferry fire revealed deficiencies in the design of fire protection features at nuclear units and in procedures for responding to a fire, particularly regarding safety concerns that were unique to nuclear units, such as the ability to protect redundant electrical cables and equipment important for the safe shutdown of a reactor. In response, NRC developed new guidance in 1976 that required units to take steps to isolate and protect at least one system of electrical cables and equipment to ensure a nuclear unit could be safely shut down in the event of a fire. NRC worked with licensees throughout the late 1970s to help them meet this guidance. In November 1980, NRC published two new sets of regulations to formalize the regulatory approach to fire safety. First, NRC required all nuclear units to have a fire protection plan that satisfies General Design Criterion 3 and that describes an overall fire protection program. Second, NRC published Appendix R, which requires nuclear units operating prior to January 1, 1979 (called “pre-1979 units”), to implement design features—such as fire walls, fire wraps, and automatic fire detection and suppression systems—to protect a redundant system of electrical cables and equipment necessary to safely shut down a nuclear unit during a fire. Among other things, Appendix R requires units operating prior to 1979 to protect one set of cables and equipment necessary for safe shutdown through one of the following means: 1. Separating the electrical cables and equipment necessary for safe shutdown by a horizontal distance of more than 20 feet from other systems, with no combustibles or fire hazards between them. In addition, fire detectors and an automatic fire suppression system (for example, a sprinkler system) must be installed in the fire area. 2. 
Protecting the electrical cables and equipment necessary for safe shutdown by using a fire barrier able to withstand a 3-hour fire, as conducted in a laboratory test (thereby receiving a 3-hour rating). 3. Enclosing the cable and equipment necessary for safe shutdown by using a fire barrier with a 1-hour rating and combining that with automatic fire detectors and an automatic fire suppression system. If a nuclear unit’s fire protection systems do not satisfy those requirements or if redundant systems required for safe shutdown could be damaged by fire suppression activities, Appendix R requires the nuclear unit to maintain an alternative or dedicated shutdown capability and its associated circuits. (These options and the alternative shutdown fallback are restated in the brief sketch that follows this passage.) Moreover, Appendix R requires all units to provide emergency lighting in all areas needed for operating safe shutdown equipment. Nuclear units that began operating on or after January 1, 1979 (called “post-1979 units”) must satisfy the broad requirements of General Design Criterion 3 but are not subject to the requirements of Appendix R. However, NRC has imposed or attached conditions similar to the requirements of Appendix R to these units’ operating licenses. When promulgating these regulations, NRC recognized that strict compliance for some older units would not significantly enhance the level of fire safety. In those cases, NRC allows nuclear units licensed before 1979 to apply for an exemption to Appendix R. An exemption depends on whether the nuclear unit can demonstrate to NRC that existing or alternative fire protection features provide safety equivalent to that imposed by the regulations. Since 1981, NRC has issued approximately 900 unit-specific exemptions to Appendix R. Nuclear units licensed after 1979 can apply for “deviations” against their licensing conditions. Many exemptions take the form of NRC-approved operator manual actions, whereby nuclear unit staff manually activate or control unit operations from outside the unit’s control room, such as manually stopping a pump that malfunctions during a fire and could affect a unit’s ability to safely shut down. NRC also allows nuclear units to institute, in accordance with their NRC-approved fire protection program, “interim compensatory measures”—temporary measures that units can take without prior approval to compensate for equipment that needs to be repaired or replaced. Interim compensatory measures often consist of roving or continuously staffed fire watches that occur while nuclear units take corrective actions. In part to simplify the licensing of nuclear units that have many exemptions, NRC recently began encouraging units to transition to a more risk-informed approach to nuclear safety in general. In 2004, NRC promulgated 10 C.F.R. 50.48(c), which allows––but does not require––nuclear units to adopt a risk-informed approach to fire protection. The risk-informed approach considers the probability of fires in conjunction with a unit’s engineering analysis and operating experience. The NRC rule allows licensees to voluntarily adopt and maintain a fire protection program that meets criteria set forth by the NFPA’s fire protection standard 805—which describes the risk-informed approach endorsed by NRC—as an alternative to meeting the requirements or unit-specific fire-protection license conditions represented by Appendix R and related rules and guidance. 
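To make the three Appendix R protection options easier to follow, the sketch below restates them as a simple compliance check based on the criteria as summarized in this report. It is a minimal illustration only, not NRC guidance; the data structure and field names are assumptions introduced for the sketch.

```python
# Minimal sketch of the Appendix R options for protecting one redundant safe
# shutdown train, as summarized in this report. The dataclass and field names
# are illustrative assumptions, not NRC terminology.
from dataclasses import dataclass

@dataclass
class FireArea:
    separation_ft: float            # horizontal distance between redundant trains
    intervening_combustibles: bool  # combustibles or fire hazards between the trains
    barrier_rating_hr: float        # fire barrier rating in hours (0 if no barrier)
    has_detectors: bool             # automatic fire detection installed
    has_auto_suppression: bool      # automatic suppression (e.g., sprinklers) installed

def protects_safe_shutdown_train(area: FireArea) -> bool:
    """Return True if the area satisfies one of the three Appendix R options."""
    option_1 = (area.separation_ft > 20
                and not area.intervening_combustibles
                and area.has_detectors
                and area.has_auto_suppression)
    option_2 = area.barrier_rating_hr >= 3
    option_3 = (area.barrier_rating_hr >= 1
                and area.has_detectors
                and area.has_auto_suppression)
    return option_1 or option_2 or option_3

# If none of the options is met (or suppression activities could damage the
# redundant trains), the unit must instead maintain an alternative or
# dedicated shutdown capability and its associated circuits.
area = FireArea(separation_ft=25, intervening_combustibles=False,
                barrier_rating_hr=0, has_detectors=True, has_auto_suppression=True)
assert protects_safe_shutdown_train(area)
```

The sketch is only a restatement of the decision logic described above; it does not capture the exemption and deviation processes discussed in the surrounding text.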
Nuclear units that choose to adopt the risk-informed approach must submit a license amendment request asking NRC to approve the unit’s adoption of the new risk-informed regulatory approach. NRC is overseeing a pilot program at two nuclear unit locations and expects to release its evaluation report on these programs by March 2009. NRC officials told us that none of the 125 fires at 54 sites that nuclear unit operators reported from January 1995 to December 2007 has posed significant risk to a commercial unit’s ability to safely shut down. No fires since the 1975 Browns Ferry fire have threatened a nuclear unit’s ability to safely shut down. Most of the 125 fires occurred outside areas that are considered important for safe shutdown of the unit or happened during refueling outages when nuclear units were already shut down. Nuclear units categorized 13 of the 125 reported fires as “alerts” under NRC’s Emergency Action Level rating system, meaning that the reported situation involved an actual or potential substantial degradation of unit safety, but none of the fires actually threatened the safe shutdown of the unit. NRC further characterizes alerts as providing early and prompt notification of minor events that could lead to more serious consequences. As shown in table 1, the primary reported cause of these fires was electrical. Nuclear units classified the remaining 112 reported fires in categories that do not imply a threat to safe shutdown. Specifically, 73 were characterized as “unusual events”––a category that is less safety-significant than “alerts”––and 39 fires as “non-emergencies.” No reported fire event rose to the level of “site area emergency” or “general emergency”—the two most severe ratings in the Emergency Action Level system. As shown in table 2 below, about 41 percent of the 125 reported fires were electrical fires, 14 percent were maintenance-related, 7 percent were caused by oil-based lubricants or insulation, and the remaining 38 percent either had no reported causes or the causes were listed as “other,” including brush fires, cafeteria grease fires, and lightning. We also gathered information on fire events that had occurred at nuclear unit sites we visited. NRC’s data on the location and circumstances surrounding fire events was consistent with the statements of unit officials whom we contacted at selected nuclear units. Although unit officials told us that some recent fires necessitated the response of off-site fire departments to supplement the units’ on-site firefighting capabilities, they confirmed that none of the fires adversely affected the units’ ability to safely shut down. Additionally, officials at two units told us that, although fires affected the units’ auxiliary power supply, the events caused both units to “trip”—an automatic power-down as a precaution in emergencies. NRC has not fully resolved several long-standing issues that affect the commercial nuclear industry’s compliance with existing NRC fire regulations. These issues include (1) nuclear units’ use of operator manual actions; (2) nuclear units’ long-term use of interim compensatory measures; (3) uncertainties regarding the effectiveness of fire wraps for protecting electrical cables necessary for the safe shutdown of a nuclear unit; and (4) the regulatory treatment of fire-induced multiple spurious actuations of equipment that could prevent the safe shutdown of a nuclear unit. 
Moreover, NRC lacks a central system of records that would enhance its ability to oversee and address the use of operator manual actions and extended interim compensatory measures, among other related issues. According to an NRC Commissioner, the current “patchwork of requirements” is characterized by too many exemptions, as well as by unapproved or undocumented operator manual actions. He said the current regulatory situation was not the ideal, transparent, or safest way to deal with the issue of fire safety. NRC’s oversight of fire safety is complicated by nuclear units’ use of operator manual actions that NRC has not explicitly approved. NRC’s initial Appendix R regulations required that nuclear units protect at least one redundant system—or “train”—of equipment and electrical cables required for a unit’s safe shutdown through the use of fire protection measures, such as 1-hour or 3-hour fire barriers, 20 feet of separation between redundant systems, and automatic fire detection and suppression systems. The regulations do not list operator manual actions as a means of protecting a redundant system from fire. However, according to NRC officials and NRC’s published guidance, units licensed before January 1979 can receive approval for a specific operator manual action by applying for a formal exemption to the regulations. For example, unit officials at one site told us they rely on 584 operator manual actions that are approved by 15 NRC exemptions for safe shutdown. (NRC allows units to submit multiple operator manual actions under one exemption.) Units licensed after January 1979 may use operator manual actions for fire protection if these actions are permitted by the unit’s license and if the unit can demonstrate that the actions will not adversely affect safe shutdown. NRC and nuclear unit officials told us that units have been using operator manual actions since Appendix R became effective in 1981. These officials added that a majority of nuclear units that use operator manual actions started using them beginning in the mid-1990s in response to the failure of Thermo-Lag––a widely used fire wrap––to meet fire endurance testing. A lack of clear understanding between NRC and industry over the permissible use of operator manual actions in lieu of passive measures emerged over the years. For example, officials at several of the sites we visited produced documentation––some dating from the 1980s––showing NRC’s documented approval of some, but not all, operator manual actions. In some other cases, unit operators told us that NRC officials verbally approved certain operator manual actions but did not document their approval in writing. In some other instances, without explicit NRC approval, unit officials applied operator manual actions that NRC had previously approved for similar situations. NRC officials explained that NRC inspectors may not have cited units for violations for these operator manual actions because they believed the actions were safe; however, NRC’s position is that these actions do not comply with NRC’s fire regulations. Moreover, in fire inspections initiated in 2000 of nuclear units’ safe shutdown capabilities, NRC found that units were continuing to use operator manual actions without exemptions in lieu of protecting safe shutdown capabilities through the required passive measures. 
For example, management officials for some nuclear units authorized staff to manually turn a valve to operate a pump if it failed due to fire damage rather than protecting the cables that operate the valve automatically. Unit officials at one site stated that they rely on more than 20 operator manual actions that must be implemented within 25 minutes for safe shutdown in the event of a fire. In March 2005, NRC published a proposal to revise Appendix R to allow feasible and reliable operator manual actions if units maintained or installed automatic fire detection and suppression systems. The agency stated that this would reduce the regulatory burden by decreasing the need for licensees to prepare exemption requests and the need for NRC to review and approve them. However, industry officials stated, among other things, that the requirement for suppression would be costly without a clear safety enhancement and, therefore, would likely not reduce the number of exemption requests. Officials at one unit told us that this requirement, in conjunction with other proposed NRC rules, could cost as much as $12 million at their unit, and they believe that the rule would have caused the industry to submit a substantial number of exemption requests to NRC. Due in part to these concerns, NRC withdrew the proposed rule in March 2006. NRC officials reaffirmed the agency’s position that nuclear units using unapproved or undocumented operator manual actions are not in compliance with regulations. In published guidance sent to all operating nuclear units in 2006, NRC stated that this has been its position since Appendix R became effective in 1981. The guidance further stated that NRC has continued to communicate this position to licensees via various public presentations, proposed rulemaking, and industrywide communications. In June 2006, NRC directed nuclear units to complete corrective actions for these operator manual actions by March 2009, either by applying for licensing exemptions for undocumented or unapproved operator manual actions or by making design modifications to the unit to eliminate the need for operator manual actions. Staff at most nuclear units we visited said they would resolve this issue either by transitioning to the new risk-informed approach or by applying to NRC for licensing exemptions because making modifications would be resource-intensive. In March 2006, NRC also stated in the Federal Register that the regulations allow licensees to use the risk-informed approach in lieu of seeking an exemption or license amendment. NRC officials told us that, at least for the short term, they have no plans to examine unapproved or undocumented operator manual actions for units that have sought exemptions to determine whether these units are compliant with regulations. They said that NRC has already received exemption requests for operator manual actions, and it expects about 25 units––mostly units licensed before 1979 that do not intend to adopt the new risk-informed approach––to submit additional exemption requests by March 2009. They estimated that about half of the 58 units that have not decided to transition to the risk-informed approach do not have compliance issues regarding operator manual actions and, therefore, will not need to submit related requests for exemptions. 
These officials anticipate that the remaining units that are not transitioning to the risk-informed approach will submit requests in the following two broad groups: (1) license amendment requests that should be short and easy to process because the technical review has already been completed, showing that the operator manual actions in place do not degrade unit safety; and (2) exemption requests that require more detailed review because the units have been using unapproved operator manual actions. Some nuclear units have used interim compensatory measures for extended periods of time—in some cases, for years—rather than perform the necessary repairs or procure the necessary replacements. As of April 2008, NRC has no firm plans for resolving this problem. For example, at one nuclear unit we visited, unit officials chose to use fire watches for over 5 years instead of replacing faulty penetration seals covering openings in structural fire barriers. Officials at several units told us that they typically use fire watches with dedicated unit personnel as interim compensatory measures whenever they have deficiencies in fire protection features. NRC regional officials confirmed that most interim compensatory measures are currently fire watches and that many of these were implemented at nuclear units after tests during the 1980s and 1990s determined that Thermo-Lag and, later, Hemyc fire wraps, used to protect safe shutdown cables from fire damage, were deficient. According to a statement released by an NRC commissioner in October 2007, interim compensatory measures are not the most transparent or safest way to deal with this issue. Moreover, NRC inspectors have reported weaknesses in certain interim compensatory measures used at some units, including an overreliance on 1-hour roving fire watches rather than making the necessary repairs. Although NRC regulations state that all deficiencies in fire protection features must be promptly identified and corrected, they do not limit how long units can rely on interim compensatory measures—such as hourly fire watches—before taking corrective actions or include a provision to compel licensees to take corrective actions. In the early 1990s, NRC issued guidance addressing the timeliness of corrective actions, stating that the agency expected units to complete all corrective actions promptly, in a manner commensurate with safety, and thus eliminate reliance on the interim compensatory measures. In 1997, NRC issued additional guidance, stating that if a nuclear unit does not resolve a corrective action at the first available opportunity or does not appropriately justify a longer completion schedule, the agency would conclude that corrective action has not been timely and would consider taking enforcement action. NRC’s current guidance for its inspectors states that a unit may implement interim compensatory measures until final corrective action is completed and that reliance on an interim compensatory measure for operability should be an important consideration in establishing the time frame for completing the corrective action. This guidance further states that conditions calling for interim compensatory measures to restore operability should be resolved quickly because such conditions indicate a greater degree of degradation or nonconformance than conditions that do not rely on interim compensatory measures. 
For example, the guidance states that NRC expects interim compensatory measures that substitute an operator manual action for automatic safety-related functions to be resolved expeditiously. Officials from several different units that we visited confirmed that NRC has not implemented a standard time frame for when corrective actions must be made regarding safe shutdown deficiencies. NRC officials further stated that interim compensatory measures could remain in place at some units until they fully transition to the risk-informed approach to fire protection. They stated that this was because many of the interim compensatory measures are in place for Appendix R issues that are not risk significant, and nuclear units will be able to eliminate them after they implement the risk-informed approach. NRC has not resolved uncertainty regarding fire wraps used at some nuclear units for protecting cables critical for safe shutdown. NRC’s regulations state that fire wraps protecting shutdown-related systems must have a fire rating of either 1 or 3 hours. NRC guidance further states that licensees should evaluate fire wrap testing results and related data to ensure they apply to the conditions under which they intend to install the fire wraps. If not all possible configurations can be tested, an engineering analysis must be performed to demonstrate that cables would be protected adequately during and after exposure to fire. NRC officials told us that the agency prefers passive fire protection, such as fire barriers—including fire wraps—because such protection is more reliable than other forms of fire protection, such as human actions. Following the 1975 fire at Browns Ferry, manufacturers of fire wraps performed or sponsored fire endurance tests to establish that their fire wraps met either the 1-hour or 3-hour rating period required by NRC regulations. However, NRC became concerned about fire wraps in the late 1980s when Thermo-Lag—a fire wrap material commonly used in units at the time—failed performance tests to meet its intended 1-hour and 3-hour ratings, even though it had originally passed the manufacturer’s fire qualification testing. In 1992, NRC’s Inspector General found that NRC and nuclear licensees had accepted qualification test results for Thermo-Lag that were later determined to be falsified. From 1991 to 1995, NRC issued a series of information notices on performance test failures and installation deficiencies related to Thermo-Lag fire wrap systems. As a result, in the early 1990s, NRC issued several generic communications informing industry of the test results and requested that licensees implement appropriate interim compensatory measures and develop plans to resolve any noncompliance. One such communication included the expectation that licensees would review other fire wrap materials and systems and consider actions to avoid problems similar to those identified with Thermo-Lag. Deficiencies emerged in other fire wrap materials starting in the early 1990s, and NRC suggested that industry conduct additional testing. It took NRC over 10 years to initiate and complete its program of large-scale testing of Hemyc—another commonly used fire wrap—and then direct units to take corrective actions after small-scale test results first indicated that Hemyc might not be suitable as a 1-hour fire wrap. 
In 1993, NRC conducted pilot-scale fire tests on several fire wrap materials, but because the tests were simplified and small-scale models were used, NRC applied test results for screening purposes only. These tests involved various fire wraps assembled in different configurations. The test results indicated unacceptable performance in approximately one-third of the assemblies tested, and NRC reported that the results for Hemyc were inconclusive, although NRC’s Inspector General recently reported that Hemyc had failed this testing. In 1999 and 2000, several NRC inspection findings raised concerns about the performance of Hemyc and MT—another fire wrap—including (1) whether test acceptance criteria for insurance purposes are valid for fire barrier endurance tests and (2) the performance of fire wraps when those wraps are used in untested configurations. In 2001, NRC initiated testing for typical Hemyc and MT installations used in units in the United States, and the test results indicated that the Hemyc configuration did not pass the 1-hour criterion and that the MT configuration did not pass the 3-hour criterion. In 2005, NRC held a public meeting with licensees to discuss these test results and how to achieve compliance. In 2006, NRC published guidance stating that fire wraps installed in configurations that are not capable of providing the designed level of protection are considered nonconforming installations and that licensees that use Hemyc and MT—previously accepted fire wraps—may not be conforming with their licenses. This guidance further stated that if licensees identify nonconforming conditions, they may take the following corrective actions: (1) replace the failed fire wraps with an appropriately rated fire wrap material, (2) upgrade the failed fire barrier to a rated barrier, (3) reroute cables or instrumentation lines through another fire area, or (4) voluntarily transition to the risk-informed approach to fire protection. According to NRC’s Inspector General, during testimony before Congress in 1993 on the deficiencies of Thermo-Lag, the then-NRC Chairman committed NRC to assess all fire wraps to determine what would be needed in order to meet NRC requirements. The testimony also contained an attachment from an NRC task force that made the following two recommendations: (1) NRC should sponsor new tests to evaluate the fire endurance characteristics of other fire wraps and (2) NRC should review the original fire qualification test reports from fire wrap manufacturers. Although NRC maintains that it has satisfied this commitment, the NRC Inspector General reported in January 2008 that the agency had yet to complete these assessments. NRC officials told us that licensees are required to conduct endurance tests on fire wraps used at nuclear units; however, the NRC Inspector General noted that, to date, no test has been conducted certifying Hemyc as a 1- or 3-hour fire wrap. Licensees’ proposed resolutions for this problem ranged from making replacements with another fire wrap material to requesting license exemptions. In addition, although NRC advised licensees that corrective actions associated with Hemyc and MT are subject to future inspection, the Inspector General noted that NRC has not yet scheduled or budgeted for inspections of licensees’ proposed resolutions. The Inspector General’s report indicated that several different fire wraps that failed endurance tests are still installed at units across the country, but NRC does not maintain current records of these installations. 
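The expectations described above, that a credited fire wrap must demonstrate its 1-hour or 3-hour rating for the configuration in which it is installed (or be supported by an engineering analysis), and the four corrective actions listed in the 2006 guidance, can be summarized as a simple conformance check. The sketch below is illustrative only; the function and field names are assumptions and do not represent NRC guidance or any licensee's actual process.

```python
# Illustrative conformance check for a fire wrap installation, based on the
# requirements and corrective actions summarized in this report. All names
# are assumptions introduced for the sketch.
CORRECTIVE_ACTIONS = (
    "replace with an appropriately rated fire wrap material",
    "upgrade the failed fire barrier to a rated barrier",
    "reroute cables or instrumentation lines through another fire area",
    "voluntarily transition to the risk-informed (NFPA 805) approach",
)

def wrap_is_conforming(required_rating_hr, demonstrated_rating_hr,
                       configuration_tested, engineering_analysis_done):
    """A wrap conforms if a tested (or analyzed) configuration demonstrates
    the required 1-hour or 3-hour rating."""
    rating_ok = demonstrated_rating_hr >= required_rating_hr
    basis_ok = configuration_tested or engineering_analysis_done
    return rating_ok and basis_ok

# Example: a wrap credited for 1 hour whose tested configuration demonstrated
# only 0.5 hours is nonconforming, triggering one of the actions above.
if not wrap_is_conforming(required_rating_hr=1, demonstrated_rating_hr=0.5,
                          configuration_tested=True,
                          engineering_analysis_done=False):
    print("Nonconforming installation; candidate corrective actions:")
    for action in CORRECTIVE_ACTIONS:
        print(" -", action)
```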
Until issues regarding the effectiveness of fire wraps are resolved, utilities may not be able to use the wraps to their full potential and may instead rely on other measures, including operator manual actions.

NRC has not finalized guidance on how nuclear units should protect against short circuits that could cause safety-related equipment to start or malfunction spuriously (instances called spurious actuations). In the early 1980s, NRC issued guidance clarifying the requirements in its regulations for safeguarding against spurious actuations that could adversely affect a nuclear unit’s ability to safely shut down. However, NRC approved planning for spurious actuations occurring only one at a time or in isolation. In the late 1990s, nuclear units identified problems related to multiple spurious actuations occurring simultaneously. Due to uncertainty over this issue, in 1998 NRC exempted units from enforcement actions related to spurious actuations, and in 2000 the agency temporarily suspended the electrical circuit analysis portion of its fire inspections at nuclear units. Cable fire testing performed by industry in 2001 demonstrated that multiple spurious actuations occurring simultaneously, or in rapid succession without sufficient time to mitigate the consequences, may have a relatively high probability of occurring under certain circumstances, including fire damage. Following the 2001 testing, NRC notified units that it expects them to plan for protecting electrical systems against failures due to fire damage, including multiple spurious actuations in both safety-related systems and associated nonsafety systems. NRC resumed electrical inspections in 2005 and proposed that licensees review their fire protection programs to confirm compliance with NRC’s stated regulatory position on this issue and report their findings in writing. The proposal suggested that noncompliant units could come into compliance by (1) reperforming their circuit analyses and making necessary design modifications, (2) performing a risk-informed evaluation, or (3) adopting the overall risk-informed approach to fire protection advocated by NRC. In 2006, however, NRC decided not to issue the proposal, stating that further thought and care were needed to ensure that the resolution of this issue would have a technically sound and traceable regulatory footprint and would provide permanent closure.

The nuclear industry has issued statements disagreeing with NRC’s proposed regulatory approach for multiple spurious actuations. Industry officials noted that NRC approved licenses for many units that require operators to plan only for spurious actuations from a fire event occurring one at a time or in isolation and that NRC’s current approach amounts to a new regulatory position on this issue. Furthermore, the industry asserts that units need only plan for protecting against spurious actuations occurring one at a time or in isolation because, in industry’s view, multiple spurious actuations are highly improbable and should not be considered in safety analyses. Industry officials told us that the 2001 test results were generated under worst-case scenarios, which operating experience has shown may not represent actual conditions at nuclear units. These officials further told us that NRC’s requirements are impossible to achieve.
In December 2007, the nuclear industry proposed an approach for evaluating the effects on circuits of two or more spurious actuations occurring simultaneously, but NRC had not officially commented on the proposal as of May 2008. NRC has stated that the draft versions of the proposal it has reviewed do not achieve regulatory compliance. As of May 2008, despite numerous meetings and communications with industry, NRC has not endorsed guidance or developed a timeline for resolving disagreements with industry about how to plan for multiple spurious actuations of safety-related equipment due to fire damage. However, NRC officials told us they have recently developed a closure plan for this issue that they intend to propose to NRC’s Commissioners for approval in June 2008. NRC officials told us that after this plan is approved, their planned next steps are to determine (1) the analysis tools, such as probabilistic risk assessments or fire models, that units can use to analyze multiple spurious actuations and (2) a time frame for ending NRC’s ongoing exemption of units from enforcement actions related to spurious actuations.

NRC has no comprehensive database of the operator manual actions or interim compensatory measures implemented at nuclear units since its regulations were first promulgated in 1981, or of the hundreds of related licensing exemptions. NRC does not require units to report operator manual actions upon which they rely for safe shutdown. Although NRC reports operator manual actions in the inspection reports it generates through its triennial fire inspections, it does not track these operator manual actions industrywide, nor does it compile them on a unit-by-unit basis. NRC does not maintain a central database of interim compensatory measures being used in place of permanent fire protection features at units for any duration of time. In addition, NRC regional officials told us that triennial fire inspectors do not typically track the status of interim compensatory measures used for fire protection or which units are using them. However, units record maintenance-related issues in their corrective action programs, including those issues requiring the implementation of interim compensatory measures. As a result, data are available to track interim compensatory measures that last for any period of time as well as to analyze their safety significance. NRC resident inspectors told us that they review these corrective action programs on a daily basis and that they are always aware of the interim compensatory measures in place at their units. They reported that this information is sometimes reviewed by NRC regional offices but rarely by headquarters officials. NRC officials explained that the agency tracked the use of exemptions—including some operator manual actions—through 2001 but then stopped because the number of exemptions requested by units decreased. This information is available partly in electronic form through NRC’s public documents system and partly in microfiche format. These officials explained that part of the agency’s inspection process is to test whether licensees have copies of their license exemptions and, thus, are familiar with their own licensing basis. Inspectors have the ability to confirm an exemption, but once the inspectors are in the field, they often rely on the licensee’s documentation.
According to these officials, NRC has no central repository for all the exemptions for a unit, but agency inspectors can easily validate a licensee’s exemption documentation by looking it up in the agency’s public documents system. They said that they conduct the triennial inspections over 2 weeks at the unit because they realize licensees may not be able to locate documentation immediately. They notify licensees what documents they need during the first week onsite so the licensees can have time to prepare them for NRC’s return trip. NRC regional officials told us that it is difficult to inspect fire safety due to the complicated licensing basis and inability to track documents. An NRC commissioner told us that nuclear power units have adopted many different fire safety practices with undocumented approval status. The commissioner further stated that NRC does not have good documentation of which units are using interim compensatory measures or operator manual actions for fire protection and that it needs a centralized database to track these issues. The commissioner stated that the lack of a centralized database does not necessarily indicate that safety has been compromised. However, without a database that contains information about the existence, length, nature, and safety significance of interim compensatory measures, operator manual actions, and exemptions in general, NRC may not have a way to easily track which units have had significant numbers of extended interim compensatory measures and possibly unapproved operator manual actions. Moreover, the database could help NRC make informed decisions about how to resolve these long-standing issues. Also, the database could help NRC inspectors more easily determine whether specific operator manual actions or extended interim compensatory measures have, in fact, been approved through exemptions.

Officials at 46 nuclear units have announced their intention to adopt the risk-informed approach to fire safety. Officials from NRC, industry, and units we visited that plan to adopt the risk-informed approach stated that they expect the new approach will make units safer by reducing reliance on unreliable operator manual actions and help identify areas of the unit where multiple spurious actuations could occur. Academic and industry experts believe that the risk-informed approach could provide safety benefits, but they stated that NRC must address inherent complexities and unknowns related to the development of probabilistic risk assessments used in the risk-informed approach. Furthermore, the shortage of skilled personnel and concerns about the potential cost of conducting risk analyses could slow the transition process and limit the number of units that ultimately make the transition to the new approach. As of May 2008, 46 nuclear units at 29 sites have announced that they will transition to the risk-informed approach endorsed by NRC (see fig. 1). To facilitate the transition process for the large number of units that will change to the new approach within the next 5 years, NRC is overseeing a pilot program involving three nuclear units at the Oconee Nuclear Power Plant in South Carolina and one unit at the Shearon Harris Nuclear Power Plant in North Carolina, and NRC expects to release its evaluation of these units’ license amendment requests supporting their transition to the risk-informed approach by March 2009.
At that point, 22 nuclear units will have submitted their license amendment requests for NRC’s review, followed by other units in a staggered fashion. NRC and transitioning unit officials we spoke with expected that transitioning to the new approach could simplify nuclear units’ licensing bases by significantly reducing the number of future exemptions at each unit. Furthermore, officials from each of the 12 units we contacted that plan to adopt the approach said that one of the main reasons for their transition is to reduce the number of exemptions, including those involving operator manual actions, that are required to ensure safe shutdown capability under NRC’s existing regulations. Specifically, these officials told us that they expected that conducting fire modeling and probabilistic risk assessments—aspects of the risk-informed approach—would allow the nuclear units to demonstrate that fire protection features in an area with shutdown-related systems would be acceptable based on the expected fire risk in that area. According to some of these officials, under these circumstances units would no longer need to use exemptions—including those involving operator manual actions—to demonstrate compliance with the regulations. Officials at 10 of the units we visited stated that, as a result, the approach could eliminate the need for some operator manual actions. For example, officials at one site that contained two nuclear units expected that by transitioning to the new risk-informed approach, the units could eliminate the need for over 1,200 operator manual actions currently in place. Other unit officials conceded that the outcomes of probabilistic risk assessments may demonstrate the need for new operator manual actions that are not required under the current regulations. These officials added that any new actions or other safety features could be applied only to those areas subject to fire risk, rather than to the entire facility, thereby allowing units to maximize resources.

According to nuclear unit officials, adopting the risk-informed approach could also help resolve concerns about multiple spurious actuations that could occur as a result of fire events. Officials from six units we visited told us that conducting the probabilistic risk assessments would allow them to identify where multiple spurious actuations are most likely to occur and which circuit systems would most likely be affected. These officials told us that limiting circuit analyses to the most critical areas would make such analyses feasible. NRC has repeatedly promoted the transition to the new risk-informed approach as a way for nuclear units to address the multiple spurious actuation issue.

According to industry officials and academic experts we consulted, the results of a probabilistic risk assessment used in the risk-informed approach could help units direct safety resources to areas where risk from accidents could be minimized or where the risk of damage to the core or a unit’s safe shutdown capability is highest; however, officials also noted that the absence of significant fire events since the 1975 Browns Ferry fire limits the relevant data on fire events at nuclear units. Specifically, these experts noted the following:

Probabilistic risk assessments require large amounts of data; therefore, the small number of fires since the Browns Ferry fire and the resulting lack of real-world data may increase the amount of uncertainty in the analysis (an illustrative sketch of this quantification appears below).

Probabilistic risk assessments are limited by the range of scenarios that practitioners include in the analysis. If a scenario is not examined, its risks cannot be considered and mitigated.

The role of human performance and error in a fire scenario—especially those scenarios involving operator manual actions—is difficult to model.

Finally, these parties stated that probabilistic risk assessments in general are difficult for a regulator to review and are not as enforceable as a prescriptive approach, in which compliance with specific requirements can be inspected and enforced.
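The following is a simplified, illustrative sketch of how the risk contribution of postulated fire scenarios is commonly quantified in a fire probabilistic risk assessment; it is not drawn from NRC guidance or from any licensee's analysis, and the notation is ours. In broad terms, the fire-induced core damage frequency (CDF) is approximated by summing, over each postulated scenario $i$, the product of several estimated parameters:

$$\mathrm{CDF}_{\text{fire}} \approx \sum_{i} \lambda_i \cdot SF_i \cdot P_{ns,i} \cdot \mathrm{CCDP}_i$$

where $\lambda_i$ is the frequency of fires starting in the area or component covered by scenario $i$ (fires per reactor-year), $SF_i$ is a severity factor representing the fraction of such fires intense enough to threaten the cables or equipment of concern, $P_{ns,i}$ is the probability that the fire is not suppressed before it damages those targets, and $\mathrm{CCDP}_i$ is the conditional core damage probability given that damage. Every term must be estimated from fire event data, suppression experience, fire modeling, or plant-specific analysis, which is why the scarcity of significant fire events since 1975 translates directly into wider uncertainty in the results.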
Numerous NRC, industry, and academic officials we spoke with expressed concern that the transition to the new risk-informed approach could be delayed by a limited number of personnel with the necessary skills and training to design, review, and inspect against probabilistic risk assessments. Several nuclear unit officials told us that the pool of fire protection engineers with expertise in these areas is already heavily burdened with developing probabilistic risk assessments for the pilot program units and other units, including the 38 units that had already begun transitioning as of October 2007. Academic experts, consultants, and industry officials told us that the current shortage of skilled personnel is due to (1) an increased demand for individuals with critical skills under the risk-informed approach and (2) a shortage of academic programs specializing in fire protection engineering. According to these experts and officials, the current number of individuals skilled in conducting probabilistic risk assessments is insufficient to handle the increased work expected to be generated by the transition to a risk-informed approach. NRC officials we spoke with expressed concern that the nuclear industry has not trained or developed sufficient personnel with the needed fire protection skills. These officials also told us that they expect that, as demand for this work increases, more engineering students will choose to go into the fire protection field. However, to date, only one university has undergraduate and graduate programs in the fire protection engineering field, and its ability to produce graduates is limited. Other officials we spoke with noted that engineers in other fields can be trained in fire protection but that this training takes a significant amount of time. Academic experts and industry officials stated that without additional skilled personnel, units would not be able to perform all of the necessary activities, especially probabilistic risk assessments, within the 3-year enforcement discretion “window” that NRC has granted each transition unit as an incentive to adopt the new approach. Most nuclear units that responded to an industry survey on this issue indicated that they expect they will need NRC to extend the discretion deadline for each unit. Delays in individual units’ transition processes could create a significant backlog in the entire transition process. NRC also faces an aging workforce and the likelihood that it will be competing with industry for engineers with skills in the fire protection area. As we reported in January 2007, the agency as a whole faces significant human capital challenges, in part because approximately 33 percent of its workforce will be eligible to retire in 2010. To address this issue, we reported that NRC identified several critical skill gaps that it must address, such as civil engineering and operator licensing.
To help fill these needed skill areas, the agency has taken steps, including supporting key university programs, to attract greater numbers of students into mission-critical skill areas and to offer scholarships to those studying in these fields. In relation to fire protection, and probabilistic risk assessments in particular, NRC officials told us that they expect to address future resource needs through the use of a multiyear budget and by contracting with the Department of Energy’s National Laboratories to help manage the process. Further, these officials stated that part of the purpose of the pilot program is to help them determine future resource needs for the transition to the risk-informed approach, and, as a result, they do not intend to finalize resource planning until the pilot programs are complete. A number of experts in the engineering field, including academics and fire engineers, stated that it will be difficult for NRC to compete with industry for the projected numbers of graduates in this field over the next few years. Also, NRC’s total workload, in addition to fire protection, is expected to increase as nuclear unit operators submit license applications to build new units, extend the lives of existing units, or increase the generating capacity of existing units. For example, NRC staff are currently reviewing license applications for units at six sites and have recently announced that operators have submitted license applications for two additional units at a seventh site. The agency expects to review or receive 12 more applications during 2008.

To date, 58 of the nation’s 104 nuclear units have not announced whether they will adopt the risk-informed approach. NRC and industry officials stated that they expected that newer units and units with relatively few exemptions from existing regulations would be less likely to transition to the new approach, while those with older licenses and extensive exemptions would make the transition. However, to date, 25 units licensed prior to 1979 have yet to announce whether they will make the transition. Officials from nontransitioning units we visited told us that concerns over NRC’s guidance and timetable have been key reasons why they have not yet announced their intent to transition. According to industry and nuclear unit officials we spoke with, the costs associated with conducting fire probabilistic risk assessments for the units may be too high to justify transitioning to the new approach. For example, some officials told us that performing the necessary analysis of circuits and fire area features in support of the probabilistic risk assessment could cost millions of dollars without substantially improving fire safety. These officials noted that both pilot sites currently expect to spend approximately $5 million to $10 million each in transition costs, including circuit analysis. Some of these officials also noted that updating probabilistic risk assessments—which units are required to do every 3 years or whenever any significant changes are made to a unit—would require units to dedicate staff to this effort on a long-term or permanent basis. Officials at transition and nontransition units stated that NRC’s guidance for developing fire models that support probabilistic risk assessments is overly conservative. In effect, these models require engineers to assume that fires will result in massive damage, burn for significant periods of time, and require greater response and mitigation efforts than less conservative models.
As such, these officials stated that the fire models provided by NRC guidance would not provide an accurate assessment of risk at a given unit. Furthermore, these officials stated that unit modifications required by the risk analysis could cost more than seeking exemptions from NRC. Some of these officials stated that they expect NRC to revise the probabilistic risk assessment guidance to facilitate the transition process in the future. NRC officials told us that nuclear units have the option to develop and conduct their own fire models rather than follow NRC’s guidance. Furthermore, in its initial review of one of the pilot unit’s probabilistic risk assessments, NRC agreed with industry that models used in the development of the probabilistic risk assessment contained some overly conservative aspects and recommended that the unit conduct additional analysis to address this. However, nuclear unit officials expressed concern that the costs of developing site-specific fire models, a process that includes numerous iterations, could be prohibitive.

Nuclear industry officials identified another area of concern in the current transition schedule, in which 22 units are expected to submit their license amendment requests for the risk-informed approach before NRC finishes assessing the license amendment requests for the pilot program units in March 2009. Although NRC has established a steering committee and a frequently asked question process to disseminate information learned in the ongoing pilot programs to other transition units, a number of nuclear unit officials expressed concern about beginning the transition process before the transition pilot programs are complete and lessons learned from the pilot programs are available. For example, an official at one of the pilot sites noted that the success of the pilot program probably will not be known until after the first triennial safety inspection conducted by NRC, which will occur after March 2009. The transition project manager for two nonpilot transition units expressed his opinion that, due to uncertainties regarding the work units must perform in order to comply with the risk-informed standard, no unit should commit itself to transitioning to the new approach until 2 years after the completion of the pilot programs.

NRC’s ability to regulate fire safety at nuclear power units has been adversely affected by several long-standing issues. To its credit, NRC has required that nuclear units come into compliance with requirements related to the use of unapproved operator manual actions by March 2009. However, NRC has not effectively resolved the long-term use of interim compensatory measures or the possibility of multiple spurious actuations. Especially critical, in our opinion, is the need for NRC to test and resolve the effectiveness of fire wraps at nuclear units, because units have instituted many manual actions and compensatory measures in response to fire wraps that were found lacking in effectiveness in various tests. Compounding these issues, NRC has no central database of exemptions, operator manual actions, and extended interim compensatory measures. Such a system would allow it to track trends in compliance, devise solutions to compliance issues, and help provide important information to NRC’s inspection activities.
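As an illustration of the kind of information such a system could capture, the following is a minimal, hypothetical sketch in Python; it is not a design NRC has developed or endorsed, and the record fields, names, and threshold query are our assumptions about what inspectors and headquarters staff might need.

from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class MeasureType(Enum):
    EXEMPTION = "exemption"
    OPERATOR_MANUAL_ACTION = "operator manual action"
    INTERIM_COMPENSATORY_MEASURE = "interim compensatory measure"

@dataclass
class FireProtectionRecord:
    """One tracked departure from permanent, passive fire protection (hypothetical schema)."""
    unit: str                              # unit identifier (hypothetical)
    fire_area: str                         # fire area or zone where the measure applies
    measure_type: MeasureType
    description: str
    start_date: date
    nrc_approved: bool                     # whether an exemption or other NRC approval exists
    safety_significant: bool               # screening result, e.g., from the corrective action program
    resolved_date: Optional[date] = None   # None while the measure remains in place

def extended_open_items(records: list[FireProtectionRecord],
                        as_of: date, threshold_days: int) -> list[FireProtectionRecord]:
    """Return unresolved measures that have been in place longer than a chosen threshold."""
    return [r for r in records
            if r.resolved_date is None and (as_of - r.start_date).days > threshold_days]

A structure along these lines would let NRC answer, with a single query, which units are relying on extended, safety-significant interim compensatory measures or on operator manual actions that have never been approved through exemptions.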
Unless NRC deals effectively with these issues, units will likely continue to postpone making necessary repairs and replacements, choosing instead to rely on unapproved or undocumented manual actions as well as compensatory measures that, in some cases, continue for years. According to NRC, nuclear fire safety can be considered to be degraded when reliance on passive measures is supplanted by manual actions or compensatory measures. By taking prompt action to address the unapproved use of operator manual actions, the long-term use of interim compensatory measures, the effectiveness of fire wraps, and multiple spurious actuations, NRC would provide greater assurance to the public that nuclear units are operated in a way that promotes fire safety. Despite the transition of 46 units to a new risk-informed approach, for which the implementation time frames are uncertain, the majority of the nation’s nuclear units will remain under the existing regulatory approach, and the long-standing issues will continue to apply directly to them.

To address long-standing issues that have affected NRC’s regulation of fire safety at the nation’s commercial nuclear power units, we recommend that the NRC Commissioners direct NRC staff to take the following four actions:

Develop a central database for tracking the status of exemptions, compensatory measures, and manual actions in place nationwide and at individual commercial nuclear units.

Address safety concerns related to the extended use of interim compensatory measures by (1) defining how long an interim compensatory measure can be used and identifying the interim compensatory measures in place at nuclear units that exceed that threshold; (2) assessing the safety significance of such extended compensatory measures and defining how long a safety-significant interim compensatory measure can be used before NRC requires the unit operator to make the necessary repairs or replacements or request an exemption or deviation from its fire safety requirements; and (3) developing a plan and deadlines for units to resolve those compensatory measures.

Address long-standing concerns about the effectiveness of fire wraps at commercial nuclear units by analyzing the effectiveness of existing fire wraps and undertaking efforts to ensure that fire endurance tests have been conducted to qualify fire wraps as NRC-approved 1- or 3-hour fire barriers.

Address long-standing concerns about multiple spurious actuations by committing to a specific date for developing the guidelines that units should meet to safeguard against multiple spurious actuations.

We provided a draft of this report to the Commissioners of the Nuclear Regulatory Commission for their review and comment. In commenting on the draft, NRC found that the report was accurate, complete, and handled sensitive information appropriately, and it stated that it intends to give GAO’s findings and conclusions serious consideration. However, in its response, NRC did not provide comments on our recommendations. NRC’s comments are reprinted in appendix II. We are sending copies of this report to the Commissioners of the Nuclear Regulatory Commission, the Nuclear Regulatory Commission’s Office of the Inspector General, and interested congressional committees. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To examine the number, causes, and reported safety significance of fire incidents at nuclear reactor units since 1995, we analyzed Nuclear Regulatory Commission (NRC) data on fires occurring at operating commercial nuclear reactor units from January 1995 to December 2007. NRC requires units to report fire events meeting certain criteria, including fires lasting longer than 15 minutes or those threatening safety. To assess the reliability of the data, we (1) interviewed NRC officials about the steps they take to ensure the accuracy of the data; (2) confirmed details about selected fire events, NRC inspection findings, and local emergency responders with unit management officials and NRC resident inspectors during site visits to nuclear power units; (3) reviewed NRC inspection reports related to fire protection; and (4) checked the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of this report.

To examine what is known about nuclear reactor units’ compliance with NRC’s deterministic fire protection regulations, we reviewed the relevant fire protection regulations and guidance from NRC and industry. We also met with and reviewed documents provided by officials from NRC, industry, academia, and public interest groups. In particular, we interviewed officials from NRC’s Fire Protection Branch, Office of Enforcement, four regional offices, Office of the Inspector General, and Advisory Committee on Reactor Safeguards. In addition, we interviewed officials from the Nuclear Energy Institute, National Fire Protection Association, nuclear industry consultants, and nuclear insurance companies. We conducted site visits to nuclear power units, where we met with unit management officials and NRC resident inspectors. During these site visits, we discussed and received documentation on the use of operator manual actions, interim compensatory measures, and fire wraps, and we obtained views on multiple spurious actuations and their impact on safe shutdown. We also reviewed and discussed each unit’s corrective action plan. Finally, we observed multiple NRC public meetings and various collaborations with industry concerning issues related to compliance with NRC’s deterministic fire protection regulations.

To examine the status of the nuclear industry’s implementation of the risk-informed approach to fire safety advocated by NRC, we met with and reviewed documents provided by officials from NRC, industry, and public interest groups, as well as academic officials with research experience in fire safety and risk analysis. In particular, we interviewed officials from NRC’s Fire Protection Branch, Office of Enforcement, four regional offices, Office of the Inspector General, and Advisory Committee on Reactor Safeguards. We also interviewed officials from the Nuclear Energy Institute, National Fire Protection Association, nuclear industry consultants, and nuclear insurance companies. We conducted site visits to nuclear power units, where we met with unit management officials and NRC resident inspectors.
During these site visits, we discussed and received documentation on the risk-informed approach to fire safety, including resource planning and analysis justifying decisions on whether or not to transition to NFPA-805. We also observed multiple NRC public meetings and collaborations with industry concerning issues related to the risk-informed approach to fire safety. Finally, we reviewed relevant fire protection regulations and guidance from NRC and industry.

In addressing each of our three objectives, we conducted visits to sites containing one or more commercial nuclear reactor units. These visits allowed us to obtain in-depth knowledge about fire protection at each site. We selected a nonprobability sample of sites to visit because certain factors—including custom designs that differ from unit to unit, hundreds of licensing exemptions and deviations in place at units nationwide, and the geographic dispersal of units across 31 states—complicate collecting data and reporting generalizations about the entire population of units. We chose 10 sites (totaling 20 operating nuclear reactor units out of a national total of 104 operating nuclear units) that provided coverage of each of NRC’s four regional offices and that represented varying levels of unit fire safety performance, unit licensing characteristics, reactor types, and NRC oversight. At the time of our visits, 5 of the 10 sites (totaling 10 of the 20 nuclear reactor units we visited) had notified NRC that they intend to transition to the new risk-informed approach to fire safety. Over the course of our work, we visited the following sites: (1) D.C. Cook (2 units), located near Benton Harbor, Michigan; (2) Diablo Canyon (2 units), located near San Luis Obispo, California; (3) Dresden (2 units), located near Morris, Illinois; (4) Indian Point (2 units), located near New York, New York; (5) La Salle (2 units), located near Ottawa, Illinois; (6) Nine Mile Point (2 units), located near Oswego, New York; (7) Oconee (3 units), located near Greenville, South Carolina; (8) San Onofre (2 units), located near San Clemente, California; (9) Shearon Harris (1 unit), located near Raleigh, North Carolina; and (10) Vogtle (2 units), located near Augusta, Georgia.

We selected the nonprobability sample from the entire population of commercial nuclear power units currently operating in the United States. In order to capture variations that could play a role in how these units address fire safety, we designed our site visit selection criteria to represent the following: (1) geographic diversity; (2) units licensed to operate before and after 1979; (3) sites choosing to remain under the deterministic regulations and those transitioning to the risk-informed approach; (4) pressurized and boiling water reactor types; (5) a variety of safety problems in which inspection findings or performance indicators of higher risk significance (white, yellow, or red) were issued; (6) units that have been subjected to at least some level of increased oversight since regular fire inspections were initiated in 2000; and (7) sites with various numbers of fires reportable to NRC since 1995. We received feedback on our selection criteria from nuclear insurance company officials, nuclear industry consultants, NRC officials, and academic officials with research experience in fire safety and risk analysis. We interviewed NRC resident inspectors and unit management officials at each site to learn about the fire protection program at the site.
We also observed fire protection features at each site, including safe-shutdown equipment and areas of the units where operator manual actions, interim compensatory measures, and fire wraps are used for fire safety. Finally, we observed part of an NRC triennial fire inspection at one site.

We conducted this performance audit from September 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Ernie Hazera (Assistant Director), Cindy Gilbert, Chad M. Gorman, Mehrzad Nadji, Omari Norman, Alison O’Neill, Steve Rossman, and Jena Sinkfield made key contributions to this report.

Nuclear Energy: NRC Has Made Progress in Implementing Its Reactor Oversight and Licensing Processes but Continues to Face Challenges. GAO-08-114T. Washington, D.C.: October 3, 2007.

Nuclear Energy: NRC’s Workforce and Processes for New Reactor Licensing Are Generally in Place, but Uncertainties Remain as Industry Begins to Submit Applications. GAO-07-1129. Washington, D.C.: September 21, 2007.

Human Capital: Retirements and Anticipated New Reactor Applications Will Challenge NRC’s Workforce. GAO-07-105. Washington, D.C.: January 17, 2007.

Nuclear Regulatory Commission: Oversight of Nuclear Power Plant Safety Has Improved, but Refinements Are Needed. GAO-06-1029. Washington, D.C.: September 27, 2006.

Nuclear Regulatory Commission: Preliminary Observations on Its Process to Oversee the Safe Operation of Nuclear Power Plants. GAO-06-888T. Washington, D.C.: June 19, 2006.

Nuclear Regulatory Commission: Preliminary Observations on Its Oversight to Ensure the Safe Operation of Nuclear Power Plants. GAO-06-886T. Washington, D.C.: June 15, 2006.

Nuclear Regulatory Commission: Challenges Facing NRC in Effectively Carrying Out Its Mission. GAO-05-754T. Washington, D.C.: May 26, 2005.

Nuclear Regulation: Challenges Confronting NRC in a Changing Regulatory Environment. GAO-01-707T. Washington, D.C.: May 8, 2001.

Major Management Challenges and Performance Risks: Nuclear Regulatory Commission. GAO-01-259. Washington, D.C.: January 2001.

Fire Protection: Barriers to Effective Implementation of NRC’s Safety Oversight Process. GAO/RCED-00-39. Washington, D.C.: April 19, 2000.

Nuclear Regulation: Regulatory and Cultural Changes Challenge NRC. GAO/T-RCED-00-115. Washington, D.C.: March 9, 2000.

Nuclear Regulatory Commission: Strategy Needed to Develop a Risk-Informed Safety Approach. GAO/T-RCED-99-071. Washington, D.C.: February 4, 1999.
After a 1975 fire at the Browns Ferry nuclear plant in Alabama threatened the unit's ability to shut down safely, the Nuclear Regulatory Commission (NRC) issued prescriptive fire safety rules for commercial nuclear units. However, nuclear units with different designs and different ages have had difficulty meeting these rules and have sought exemptions to them. In 2004, NRC began to encourage the nation's 104 nuclear units to transition to a less prescriptive, risk-informed approach that will analyze the fire risks of individual nuclear units. GAO was asked to examine (1) the number and causes of fire incidents at nuclear units since 1995, (2) compliance with NRC fire safety regulations, and (3) the transition to the new approach. GAO visited 10 of the 65 nuclear sites nationwide, reviewed NRC reports and related documentation about fire events at nuclear units, and interviewed NRC and industry officials to examine compliance with existing fire protection rules and the transition to the new approach. According to NRC, all 125 fires at 54 of the nation's 65 nuclear sites from January 1995 through December 2007 were classified as being of limited safety significance. According to NRC, many of these fires were in areas that do not affect shutdown operations or occurred during refueling outages, when nuclear units are already shut down. NRC's characterization of the location, significance, and circumstances of those fire events was consistent with records GAO reviewed and statements of utility and industry officials GAO contacted. NRC has not resolved several long-standing issues that affect the nuclear industry's compliance with existing NRC fire regulations, and NRC lacks a comprehensive database on the status of compliance. These long-standing issues include (1) nuclear units' reliance on manual actions by unit workers to ensure fire safety (for example, a unit worker manually turns a valve to operate a water pump) rather than "passive" measures, such as fire barriers and automatic fire detection and suppression; (2) workers' use of "interim compensatory measures" (primarily fire watches) to ensure fire safety for extended periods of time, rather than making repairs; (3) uncertainty regarding the effectiveness of fire wraps used to protect electrical cables necessary for the safe shutdown of a nuclear unit; and (4) mitigating the impacts of short circuits that can cause simultaneous, or near-simultaneous, malfunctions of safety-related equipment (called "multiple spurious actuations") and hence complicate the safe shutdown of nuclear units. Compounding these issues is that NRC has no centralized database on the use of exemptions from regulations, manual actions, or compensatory measures used for long periods of time that would facilitate the study of compliance trends or help NRC's field inspectors in examining unit compliance. Primarily to simplify units' complex licensing, NRC is encouraging nuclear units to transition to a risk-informed approach. As of April 2008, some 46 units had stated they would adopt the new approach. However, the transition effort faces significant human capital, cost, and methodological challenges. According to NRC, as well as academics and the nuclear industry, a lack of people with fire modeling, risk assessment, and plant-specific expertise could slow the transition process. They also expressed concern about the potentially high costs of the new approach relative to uncertain benefits. 
For example, according to nuclear unit officials, the costs to perform the necessary fire analyses and risk assessments could be millions of dollars per unit. Units, they said, may also need to make costly new modifications as a result of these analyses.