Evaluation is the process of collecting data to determine how well your program is succeeding and whether changes need to be made. Evaluation data might be collected during implementation, in which case the collection process is often called monitoring. Monitoring is useful for identifying problems so that a program can be improved in mid-course. In contrast, comprehensive evaluations are typically done at the conclusion of a program, or at the conclusion of distinct phases of a program.
In addition to your own desire to create the most effective program possible, another strong incentive for evaluating is that many funders require well-conducted evaluations. Although much of your evaluation may be carried out after your program is completed, do not wait until the program's completion to plan your evaluation. Successful evaluation is best planned at the same time you are planning your program's interventions; evaluation planning should be an integral part of your program planning.
If you have worked through the previous Moving to the Future chapters, the concept of evaluation should be familiar to you. In chapter 2 you thought through the logistics of evaluating the outcome and process objectives you wrote, and you developed a preliminary evaluation work plan at the same time you developed your program work plan in chapter 3. The chapter you are now reading will help you complete your evaluation in two steps:
- Review and update the provisional evaluation work plan you developed in chapter 3.
- Follow our steps for carrying out the completed evaluation work plan.
Prepare for Evaluation
This short section directs you to find the evaluation worksheets and preliminary evaluation work plan that you developed in chapters 2 and 3, or to work through the relevant sections of chapters 2 and 3 if you have not already done so. The section also helps you identify and fill in gaps so as to finalize your evaluation work plan.
Carry Out Evaluation
This section contains a step-by-step prescription for carrying out your evaluation work plan, including suggestions for how to identify and involve evaluation partners.
Frequently Asked Questions about Evaluation
Why should we evaluate our plan? The answer is simple: to ensure effectiveness, accountability, and improvement. Evaluation is valuable because it:
- helps program managers decide whether to change or continue a program or specific activities within a program
- helps team members communicate and collaborate
- helps program managers assess the impact of programs and policies on community health
- lets funding sources know that their money is being used efficiently
- allows agencies to decide where to spend staff time
- lets program managers be accountable to funders, volunteers, staff, and boards, and
- helps community members appreciate and understand your work.
One final reason to evaluate your plan is taken from an evaluation document of a leading national health foundation, which says that "good evaluation reflects clear thinking and responsibility" (W. K. Kellogg Foundation 2004). These two attributes are important to nearly all funding organizations.
I’ve heard of different kinds of evaluation: what kind do we need to do? There are several different types of evaluation. If you completed the worksheets in the file entitled “Writing Objectives Worksheets” in chapter 2, you are prepared to do most kinds of evaluation. Moving to the Future provides guidance on program evaluation, which is the systematic collection, analysis, and reporting of information about a program to help in decision-making (The Center for the Advancement of Community Based Public Health 2000).
Agencies and evaluation experts use different terms for different types of evaluation. Such terms may include formative, process, short-term, and long-term evaluation. The terms you use are not important, provided that you carefully plan your evaluation so it generates the data you need. The one exception to this statement is when a funder or other entity important to your program's success requires a specific type of evaluation. In this case you will have to understand their definitions and incorporate them into your evaluation plan. By way of example, below are definitions of four common types of program evaluation taken from An Evaluation Framework for Community Health Programs published by The Center for the Advancement of Community Based Public Health.
Formative evaluation collects information during a specific period of time, often the start-up or pilot phase of a project, to refine and improve implementation and solve unanticipated problems. An example of formative evaluation would be asking day care providers, before launching an intervention offering healthy snacks at day care centers, to choose from a list of healthy snacks the foods they think children would prefer most.
Process evaluation addresses questions related to how a program is implemented. It compares what was supposed to happen with what actually happened and answers questions about why the program succeeded, failed, or requires revising. Examples of process measures include participant satisfaction, demographic information on participants, whether and when the new outdoor walking trail was completed, and the total number of pounds of fruits and vegetables distributed to school children per school day.
Short-term evaluation assesses whether a program has achieved desired intermediate changes in individuals, population groups, or organizations. This type of evaluation may also be called intermediate, impact, or outcome. Example short-term evaluation measures include the number of worksites that signed up for the worksite wellness program, the number and percentage of people who increased their physical activity during the first few months of membership in the community walking club, and the number and percentage of schools in the district that replaced all the soda in their vending machines with milk, water, and 100% juice. Some resources also consider immediate changes in health status part of short-term evaluation, such as weight loss or lower blood pressure after a few months of a worksite wellness program.
Long-term evaluation examines the effects of a program on health status, usually defined in terms of morbidity (illness, injury) and mortality (death) rates. It determines the long-term effects of a program or intervention. This type may also be called outcome evaluation or impact evaluation. Examples include prevalence of high blood pressure, overweight, and obesity in the population, and disease death rates. The evaluation resources listed at the end of this chapter overview provide more information on evaluation.
What is the difference between monitoring and evaluation? In general, monitoring is the ongoing collection of data on a program's progress. You should use monitoring data to make minor modifications to a program that will increase enrollment, satisfaction, compliance, or utilization. For example, nine months after the new outdoor walking trail opened, the counters installed on the trail indicate only 10 to 20 people per day using the trail, so your team decides to change the trail's marketing plan and to organize a festival to be held along the trail. All of this would happen before the formal evaluation of the trail, which might be scheduled to take place two years after the trail's opening. When managing programs at the local level, the line between monitoring and evaluation can get blurry. Fortunately, the distinction between the two is not critical to success.
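To make the trail-counter example concrete, here is a minimal sketch of how a team might tally monitoring data to decide whether a mid-course change is warranted. This is an illustration only: the counter readings, the usage target, and the function names are hypothetical, not part of Moving to the Future or any evaluation framework it cites.

```python
# Hypothetical sketch: summarizing daily trail-counter readings collected
# during routine monitoring. All numbers below are invented for illustration.

def average_daily_users(daily_counts):
    """Return the mean number of trail users per day."""
    return sum(daily_counts) / len(daily_counts)

def needs_midcourse_change(daily_counts, target=50):
    """Flag the program for mid-course adjustment (e.g., a new marketing
    plan) when average daily usage falls below a hypothetical target."""
    return average_daily_users(daily_counts) < target

# One week of hypothetical counter readings (10-20 users per day)
week = [12, 18, 10, 15, 20, 11, 14]
print(round(average_daily_users(week), 1))  # mean users per day
print(needs_midcourse_change(week))         # True -> consider changes
```

A monitoring summary like this supports small, timely adjustments; the formal evaluation, with its fuller data collection and analysis, would still come later on its own schedule.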
Who should conduct the evaluation, our staff or an evaluation expert? Ideally both! If you have the financial resources, it is helpful to hire an evaluation expert to evaluate all or parts of your plan. If you have an evaluation expert come in to evaluate your program, it is called outside evaluation or external evaluation. Sometimes a funding source will require you to conduct an outside evaluation and will either provide the evaluator or require that some grant funding be allocated for an outside evaluation. The Community Tool Box includes a whole section on choosing external evaluators. Even if you are having an external evaluation done, you should continue to collect your own evaluation data. An external evaluator may not collect all the data that you want, and typically you can generate data more quickly than an external evaluator can for you. The information and tools in Moving to the Future are designed to help program staff evaluate the plan.
What is a Logic Model? A logic model is a flow chart of your program. It is a diagram that shows what you plan to do and what you expect will happen. It is a common component of evaluation.
If your team has worked through the Moving to the Future materials, you have generated enough material to build a logic model. There are different types of logic models and each type asks for slightly different information. Even with the same type of logic model, such as a program logic model, different organizations will ask for slightly different presentations of the same information. The foundations and government agencies requesting logic models generally include instructions and a template on how to complete the type of logic model that organization is looking for.
Because of the tremendous variability among logic models, Moving to the Future does not include a generic logic model template. Instead, we recommend that you use the logic model template provided by the organization requesting a logic model of your program.
The W.K. Kellogg Foundation Logic Model Development Guide focuses on the development and use of the program logic model: W.K. Kellogg Foundation Logic Model Development Guide.
A logic model visually links program inputs and activities to program outputs and outcomes, and shows the basis for these expectations. The logic model is an iterative tool, providing a framework for program planning, implementation, and evaluation: Logic Model Basics
Listed below are links to completed logic models from CDC:
- CDC’s Youth Media Campaign, VERB Logic Model
- State Heart Disease and Stroke Prevention Program Logic Model
Are research and evaluation different? Yes. A simple distinction between the two is that research seeks to prove and evaluation seeks to improve. Another key difference between research and evaluation is that research generally looks at what can happen by studying controlled conditions, and evaluation looks at what does happen in the real world. The information and tools in Moving to the Future help health professionals conduct program evaluation.
What's the most important thing to do with my evaluation data? Use evaluation data to improve your program.
Evaluation Resources
Below is a list of evaluation resources. The list is not complete but should give you an idea of what is available. This list includes documents that can help you and your team evaluate the programs in your plan, and it includes online resources that provide information on program evaluation and links to program evaluation tools.
An Evaluation Framework for Community Health Programs
This document presents a framework that emphasizes program evaluation as a practical and ongoing process that involves program staff and community members, as well as evaluation experts. The overall goal of the framework is to help guide and inform the evaluation process. The document is not a comprehensive manual on how to conduct program evaluation. Instead, the framework promotes a common understanding of program evaluation. An Evaluation Framework for Community Health Programs web page
Introduction to Program Evaluation for Public Health Programs: A Self-study Guide
This document is a how-to guide for planning and implementing evaluation activities. The manual is based on CDC's Framework for Program Evaluation in Public Health and is intended to assist state, local, and community managers and staff of public health programs in planning, designing, implementing, and using the results of comprehensive evaluations in a practical way. The strategy presented in this manual will help ensure that evaluations meet the diverse needs of internal and external stakeholders, including assessing and documenting program implementation, outcomes, efficiency, and cost-effectiveness of activities, and taking action based on evaluation results to increase the impact of programs. Introduction to Evaluation for Public Health Programs: A Self-study Guide web page
W. K. Kellogg Foundation Evaluation Handbook
This handbook provides a framework for thinking about evaluation and outlines a blueprint for designing and conducting evaluations, either independently or with the support of an external evaluator/consultant. It is not intended to serve as an exhaustive instructional guide for conducting evaluation. W.K. Kellogg Foundation Evaluation Handbook web page
CDC Evaluation Working Group
This is an online resource. CDC convened an Evaluation Working Group, charged with developing a framework that summarizes and organizes the basic elements of program evaluation. Use this website to learn about the CDC Evaluation Working Group and its effort to promote program evaluation in public health. The website includes evaluation resources that may help when applying the framework. CDC Evaluation Working Group website
Community Tool Box
The Community Tool Box is an online resource with over 6,000 pages of practical information to support people working to promote community health and development. The website is developed and maintained by the Work Group on Health Promotion and Community Development at the University of Kansas in Lawrence, Kansas. The Community Tool Box has 4 chapters dedicated to evaluation, which contain information on developing a plan for evaluation, methods for evaluation, and using evaluation to understand and improve the initiative. Community Tool Box website (ctb.ku.edu/index.jsp)
RE-AIM
This is an online resource whose content is generated by the Workgroup to Evaluate and Enhance the Reach and Dissemination of Health Promotion Interventions. RE-AIM is a systematic way for researchers, practitioners, and policy makers to evaluate health behavior interventions. It can be used to estimate the potential impact of interventions on public health. This website project is funded by the Robert Wood Johnson Foundation. RE-AIM website
University of Kentucky Cooperative Extension Service
This is an online resource list put together by the University of Kentucky, Cooperative Extension Service, Southern Region Program and Staff Development Committee. The list contains links to evaluation resources from several different states. University of Kentucky Cooperative Extension Service Program Development and Evaluation Resources website
University of Wisconsin Cooperative Extension This is an online resource from the Program Development and Evaluation Unit of the University of Wisconsin Cooperative Extension. The unit provides training and technical assistance that enables Cooperative Extension campus and community-based faculty and staff to plan, implement, and evaluate high quality educational programs. Their website contains several resources to help evaluate programs and services. University of Wisconsin Cooperative Extension Program Development and Evaluation Unit website