Another type of expectation relates to the type and level of results that will be used to answer the evaluation questions. For example, an evaluation of a program’s efficiency requires a shared understanding of what level of efficiency counts as excellent, good, adequate, or poor. Explicitly identifying these expectations improves transparency and provides a point of reference against which results can be compared to see whether expectations were met. Discussing, understanding, and coming to consensus on these expectations can facilitate the use of evaluation findings and create transparency around how the findings will be interpreted (43). These discussions might include the types of evidence most valued by different groups (e.g., quantitative and qualitative) and the perceived credibility of data sources. In an outcome evaluation, they also might cover which outcomes will be examined, including identifying the accountable outcome (i.e., the most distal outcome interest holders expect the program to show progress toward achieving).
Program evaluation is a critical tool that serves the dual purpose of describing impact and identifying areas for program improvement, and CDC’s Framework for Program Evaluation guides public health professionals in conducting it. From a formative perspective, we look to improve our overall program, including the workshops, website collections, and leadership program. The current objectives of the summative evaluation plan are to assess the impact of the workshops and website on faculty teaching and student learning; a second aim is to evaluate how the program contributes to the research base on effective faculty development.

On the scheduling side, setting a baseline lets teams compare planned timelines against actual progress in real time, making it easier to catch deviations early. With these features, our software enhances how program evaluation and review technique (PERT)-informed schedules are visualized and adjusted throughout the project lifecycle.
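To make the baseline comparison concrete, here is a minimal sketch of the idea; the task names, dates, and `baseline`/`actuals` structures are hypothetical illustrations, not the API of any particular scheduling tool:

```python
from datetime import date

# Hypothetical baseline: each task's planned finish date, captured when the
# schedule was baselined. Actual finish dates are filled in as work completes.
baseline = {
    "design": date(2024, 3, 1),
    "build": date(2024, 5, 15),
    "pilot": date(2024, 7, 1),
}
actuals = {
    "design": date(2024, 3, 8),   # finished a week late
    "build": date(2024, 5, 10),   # finished early
}

def schedule_variance(baseline, actuals):
    """Report, per task, how many days actual completion deviates from plan.

    Positive values mean the task slipped past its baseline date.
    """
    for task, planned in baseline.items():
        finished = actuals.get(task)
        if finished is None:
            print(f"{task}: in progress (baseline {planned})")
            continue
        slip = (finished - planned).days
        status = "late" if slip > 0 else "on/ahead of plan"
        print(f"{task}: {slip:+d} days ({status})")

schedule_variance(baseline, actuals)
```

Running this flags the design task as seven days late while the program is still mid-stream, which is exactly the early-warning behavior a baseline is meant to provide.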
Evaluation Capacity
This careful consideration leads to more informed decision-making and improved program outcomes, ultimately enhancing the overall impact of your initiatives. Devon Walker described discovering a valuable grant-writing resource, emphasizing that it lays out what organizations need to understand and do in the assessment process. By following these guidelines, organizations can ensure a systematic and efficient grant program evaluation process that not only assesses their existing initiatives but also positions them advantageously for future funding opportunities.
Primary Users of Findings
By conducting evaluations, organizations can learn to manage their limited resources better, identifying areas for improvement and smarter approaches to resource allocation. Process evaluation assesses how a program is being implemented, including factors such as participation rates, the quality of delivery, and the degree to which the program is being delivered as intended. A process evaluation explores how a program or initiative progresses toward its short- and long-term goals; unlike summative evaluation, it focuses on the incremental steps along the way.
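As a concrete illustration of what a process evaluation might quantify, the sketch below computes a participation rate and a simple fidelity measure; the session records and the choice of metrics are hypothetical, not drawn from any specific program:

```python
# Hypothetical session records from a program's delivery logs.
sessions = [
    {"enrolled": 40, "attended": 31, "delivered": 5, "planned": 6},
    {"enrolled": 38, "attended": 35, "delivered": 6, "planned": 6},
    {"enrolled": 42, "attended": 22, "delivered": 4, "planned": 6},
]

# Participation rate: share of enrolled participants who actually attended.
participation = sum(s["attended"] for s in sessions) / sum(s["enrolled"] for s in sessions)

# Implementation fidelity: share of planned curriculum components delivered,
# a rough proxy for "implemented as intended."
fidelity = sum(s["delivered"] for s in sessions) / sum(s["planned"] for s in sessions)

print(f"Participation rate: {participation:.0%}")
print(f"Fidelity to planned components: {fidelity:.0%}")
```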
- The standards are intentionally broad, providing the flexibility to adapt to each evaluation’s unique circumstances, weigh the various options, and determine the best course of action.
- If stakeholders want the evaluation’s results to help improve the program or justify continued funding, they need to make sure the evaluation is completed before the program is slated to end.
- Many states were partnering with other organizations on implementation efforts to leverage resources and funds to have a greater impact than each organization individually would have had.
- Penalties to be applied, if any, should not result from discovering negative findings but from failing to use the learning to change for greater effectiveness.
- Understanding a program’s context sets the stage for meaningful, actionable, and culturally responsive evaluation (1).
Implementing an evaluation plan largely involves interpreting and understanding the findings that result from data collection and analysis, then using that information to form recommendations and act on findings (Figure 3). It is important to allocate sufficient time for synthesis and for working with interest holders on interpretation and recommendations (43). Results of data analyses are compared with the expectations identified earlier (Step 4) and interpreted within context (Step 1) to determine the practical application and implications of what has been learned. In Step 5, evaluators and interest holders work together to translate what the findings mean, identifying existing strengths, successes, and areas for improvement, including opportunities to advance health equity (Box 2). Engaging collaboratively to interpret the findings has multiple benefits, including producing a more robust understanding of the findings and their implications and enhancing interest holders’ receptivity and commitment to learning from and using the evaluation findings. Even so, despite the best planning, data collection challenges are common once an evaluation has commenced.
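For instance, if interest holders agreed in Step 4 that a given indicator counts as excellent, good, adequate, or poor at particular thresholds, the Step 5 comparison can be made explicit. The sketch below assumes such a pre-agreed scale; the thresholds and the example indicator are hypothetical placeholders:

```python
# Hypothetical expectation tiers agreed on with interest holders in Step 4,
# expressed as minimum values for each rating (checked from best to worst).
EXPECTATIONS = [
    ("excellent", 0.90),
    ("good", 0.75),
    ("adequate", 0.60),
]

def rate_against_expectations(observed: float) -> str:
    """Map an observed result onto the pre-agreed rating scale."""
    for label, floor in EXPECTATIONS:
        if observed >= floor:
            return label
    return "poor"

# Example: an observed completion rate of 0.81 is judged "good", so joint
# interpretation can focus on what kept it short of "excellent".
print(rate_against_expectations(0.81))
```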
Success stories helped highlight grantees’ key achievements, provided a contextual narrative to support quantitative analyses, and were used by both grantees and CDC to share the positive impacts of program funding with internal and external stakeholders. Key state program staff members were also invited to participate in annual key informant interviews with the evaluation team. Interviews enabled a better understanding of the facilitators of and barriers to grantees’ performance, existing organizational capacity, and perceived ability to achieve health impact. Although some evaluations proceed smoothly, barriers frequently arise that undermine their effectiveness.
Program Evaluation Compared with Research
This information was vital in providing real-time updates between reporting periods and monitoring the type and quantity of TA requested by states. The Monitoring and Evaluation Tool helped the evaluation team identify common issues across grantees and allowed for the development of proactive group TA to reduce project officer burden in addressing each request individually. Ultimately, embracing program evaluation as a core practice empowers organizations to refine their efforts, secure funding, and achieve sustainable success.
Ensuring the evaluation provides insights for funders and implementers, as well as community members who might be affected by the program, is important for making sure that all perspectives are represented in the evaluation aims. The objective in this step is to collaboratively develop an optimal, culturally responsive evaluation design that accommodates the program context and available resources, anticipates intended uses, and incorporates all relevant evaluation standards. A well-developed and articulated purpose statement and a clear set of evaluation questions can be referred to throughout the evaluation to guide decisions about how it will be conducted, analyzed, and interpreted.
People who might use the evaluation findings
Formative evaluation primarily aims to improve and refine a program while it is still in progress or under development, and it often employs qualitative methods to gather rich, contextual insights into program operations and stakeholder experiences. The Centers for Medicare & Medicaid Services, for example, subjects its Innovation Center models to rigorous evaluation. Standardized tests, commonly used in educational evaluations, assess participant performance or knowledge against pre-established benchmarks. Evaluators might have additional opportunities to share information about the evaluation throughout the implementation process as those opportunities arise; for situational awareness, they can actively ask those with whom they are collaborating about innovative ways to engage interest holders (36).
This is also an opportunity for a collaborative approach to understanding and interpreting the meanings of the findings and to hear from interest holders who might have a different perspective or interpretation. Understanding and incorporating these perspectives into the products will improve the likelihood that the results and recommendations will accurately represent the context and be accepted and used by interest holders. Whether conducting an analysis of quantitative, qualitative, or both types of data, each type of analysis has established procedures for upholding rigor and objectivity and considerations for protecting privacy and confidentiality that should be followed. Identifying and describing the multitude of analytic methods available is beyond the scope of this framework. Regardless, decisions about which analytic approach(es) to use need to be guided by the evaluation questions and characteristics of the data collected. As noted in Step 4, involving statistics experts might be necessary for analyses and interpretation, particularly for complex analyses, as incorrect or inappropriate analysis or interpretation can lead to false claims and potentially result in decreased trust among interest holders.
- As organizations navigate the complexities of evaluation, they can leverage innovative tools and strategies to not only measure their impact but also enhance their overall effectiveness in serving their communities.
- It highlights different sources of evidence that can support federal decision-making—such as program evaluations and performance measurement.
- Formative evaluation contributes by helping to prevent the misdirection or waste of resources on ineffective strategies during a program’s lifecycle.
- It includes how-to guides, diagrams, and examples to help readers understand and start implementing program evaluation in real life.
- Our expert team paves the way for collaboration, communication, and trust between your organization and the people you serve by leveraging the power of data.
PERT works well in industries such as construction, defense, software development, and research, where predicting timelines is challenging. When a project involves tight deadlines, significant risk, or many stakeholders, PERT helps plan schedules realistically and identify the critical path to keep everything on track. Users can filter for the critical path to instantly see which tasks determine the project’s duration and require close monitoring.
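Under the standard PERT three-point formula, each task’s expected duration is E = (O + 4M + P) / 6, weighting the most likely estimate M most heavily. A minimal sketch of how those estimates combine with a forward pass to surface the critical path is shown below; the task network is invented for illustration and is not any tool’s internal model:

```python
# Each task: (optimistic, most likely, pessimistic) durations and predecessors.
# Tasks are listed so every predecessor appears before its successors.
tasks = {
    "plan":   ((2, 4, 6), []),
    "design": ((3, 5, 9), ["plan"]),
    "build":  ((6, 8, 14), ["design"]),
    "test":   ((2, 3, 5), ["build"]),
    "docs":   ((1, 2, 4), ["design"]),
    "ship":   ((1, 1, 2), ["test", "docs"]),
}

def expected(o, m, p):
    """PERT three-point estimate: (O + 4M + P) / 6."""
    return (o + 4 * m + p) / 6

# Forward pass: earliest finish of each task given its predecessors.
finish, critical_pred = {}, {}
for name, ((o, m, p), preds) in tasks.items():
    start = max((finish[q] for q in preds), default=0.0)
    finish[name] = start + expected(o, m, p)
    critical_pred[name] = max(preds, key=finish.get) if preds else None

# Walk back from the last-finishing task to recover the critical path.
path, node = [], max(finish, key=finish.get)
while node:
    path.append(node)
    node = critical_pred[node]
print(" -> ".join(reversed(path)), f"(duration {finish[path[0]]:.1f})")
```

Here the walk-back prints `plan -> design -> build -> test -> ship`: the docs task has slack, so only the printed chain of tasks actually determines the project’s duration and warrants close monitoring.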
Summative Evaluation: Measuring Final Success
Viewed together, the group of logic models can comprehensively show all aspects of a program, which can be useful for program planning and the next steps in the evaluation process. Collaborating with interest holders can aid in developing a description that is comprehensive and inclusive of different perspectives while bringing clarity and program benefits beyond evaluation planning and implementation. Collaboration also can provide an opportunity for reaching agreement about what the program is doing and aims to achieve, and how the program intends to advance health equity. Evaluations conducted without agreement on key activities and outcomes might be of limited use. Program descriptions identify the outcomes the program intends to achieve and the key activities that are expected to lead to those outcomes.