Establishing, Monitoring, and Refining an Evaluation Policy at the Department of State

Pakistani flood-affected people look toward an army helicopter dropping relief supplies in the heavily flooded area of Rajanpur, in central Pakistan.


When the Department of State began implementing its first department-wide policy on evaluation in 2012, it did not do so in a vacuum. The Department engaged with key congressional and partner stakeholders on how best to advance our evaluation efforts. This included participating in early discussions on what would become the Foreign Aid Transparency and Accountability Act (FATAA) and briefing Congress on our evaluation work: how many evaluations the bureaus were doing, what kinds of evaluations they were, and how they were used.

In 2012, about 20 percent of the Department of State's 52 bureaus and independent offices had extensive experience with evaluating their programs. Among the 24 bureaus that worked with foreign assistance funds, about 25 percent had already done evaluations because of the nature of their work. As we built capacity for evaluation during the first two years of policy implementation, we also tackled broader interagency questions about the use of big data sets and about what types of evaluation designs provided the best data.

Because State Department bureaus do a broad range of programming, and many had not conducted evaluations, we did not have large data sets to help set a baseline from which to evaluate our programs. Nor, with the exception of the President’s Emergency Plan for AIDS Relief (PEPFAR), had the programs selected for evaluation been developed with control groups or baseline data.  Most of the evaluations we planned and designed used a mix of qualitative and quantitative data, triangulating data points to increase reliability.  As we continued capacity building for evaluation planning, design, and execution, we began to see some trends.

First, our initial policy had been modeled on USAID's evaluation policy and did not sufficiently account for differences in how the State Department's work is planned and executed. Most of our programs and projects are run from Washington, DC, whereas most USAID programs are run from the field. USAID programs cover a broad range of development and humanitarian assistance issues. State Department programs focus on diplomatic engagement over a similar range of topics, including public diplomacy, trade, conflict stabilization, counterterrorism, economics and business, democracy, passports and visas, and international labor issues.

We updated the policy and guidance in 2015 to reflect the varied programs, projects, and activities performed by State Department bureaus. We also began to take stock of how difficult it was to evaluate some State Department programs, especially diplomatic initiatives with longer-range expectations for results. For other foreign assistance programming, more planning, design, and monitoring were sometimes needed before evaluation could begin, and program goals and objectives were sometimes not well defined. In response, we began work on a toolkit for program design and performance management in 2015.

As this effort continued, we also expanded our evaluation policy to nest evaluation appropriately within the spectrum of performance management and planning activities that must come before it.  The policy was completed in late 2017 and fits seamlessly within the State Department’s Managing for Results framework. 

Concurrently with establishing the new policy, State Department personnel supported OMB's development of the monitoring and evaluation guidelines required under FATAA for agencies that administer foreign assistance. Personnel from the State Department, USAID, HHS, and MCC, among others, worked with OMB to create a framework that all agencies could find useful. OMB carefully vetted the guidelines with agencies, resulting in their publication in January 2018.

Because of the timing of the work on the guidelines, the State Department was able to align its new policy closely with them.  We look forward to monitoring implementation of the new program and project design, monitoring, and evaluation policy and building a stronger library of evaluations used to improve programs and inform decisions.


About the Author: Gordon Weynand serves as Managing Director of Planning, Performance, & Systems in the Office of U.S. Foreign Assistance Resources at the U.S. Department of State.

Editor's Note: This entry also appears in the U.S. Department of State's publication on Medium.com.