Federal Invest in What Works Index
(100 points possible)
Did the agency have one or more senior staff members with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY17? (Example: a Chief Evaluation Officer)
Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY17?
Did the agency invest at least 1% of program funds in evaluations in FY17? (Examples: Impact studies; implementation studies; rapid-cycle evaluations; evaluation technical assistance and capacity-building)
Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY17? (Example: Performance stat systems)
Did the agency collect, analyze, share, and use high-quality administrative and survey data, consistent with strong privacy protections, to improve (or help other entities improve) federal, state, and local programs in FY17? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies)
Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions, and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY17? (Example: What Works Clearinghouse)
Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY17? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with strong evaluation requirements)
Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY17? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)
Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY17? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)
In FY17, did the agency shift funds away from or within any practice, program, or policy that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; proposing the elimination of ineffective programs through annual budget requests)
* These scores are based on information provided by the 8 federal departments and agencies included in this index. This background information, along with a description of how RFA developed these scores, is available at http://results4america.org/tool/index/
1) Since MCC only administers competitive grant programs, its total possible score was 20 for Question #8 and 0 for Question #9.
2) Since USAID only administers competitive grant programs, its total possible score was 20 for Question #8 and 0 for Question #9.
Results for America’s Federal Invest in What Works Index (2017) highlights the extent to which the Administration for Children and Families (within HHS); the Corporation for National and Community Service; the Substance Abuse and Mental Health Services Administration (within HHS); the Millennium Challenge Corporation; the U.S. Agency for International Development; the U.S. Department of Education; the U.S. Department of Housing and Urban Development; and the U.S. Department of Labor are building the infrastructure necessary to use data, evidence, and evaluation in budget, policy, and management decisions. These eight agencies oversee more than $220 billion in federal investments in FY17.