Introduction to Evaluation

Evaluation often brings to mind words such as assessment, measurement, accountability, and testing. While these are all important concepts in evaluation, equally important is the use of data to improve programs. In the past, evaluation has been associated with experiences that ranged from less than positive to clearly negative. However, much has changed in the evaluation world, and evaluators are now more conscious of their responsibility to facilitate use of evaluation results. The standards for educational program evaluation developed by the Joint Committee on Standards for Educational Evaluation include Utility Standards that focus specifically on use.

The goal of this content is to make evaluation useful for parenting education programs across the country and to help those programs build their evaluation capacity. Please click on one of the links below for more information about how evaluation can be used to improve services and demonstrate outcomes.

A Brief Introduction to Evaluation

  1. Definition of Evaluation
  2. Standards and Guiding Principles for Evaluation
  3. Logic Models as a Helpful Tool for Evaluation
  4. Designing an Evaluation

1. Definition of Evaluation

Program evaluation. Program evaluation can be defined as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about programs, improve program effectiveness, and/or inform decisions about future programming” (Patton, 1997, p. 23). This definition encompasses several key concepts about evaluation.

The term “systematic” emphasizes that evaluations follow a clear plan for how information is collected. There is also a rationale for what types of data are collected and why. In other words, the evaluation has a clear purpose.

The use of the words “activities, characteristics, and outcomes” highlights that evaluation is not only about outcomes, although outcomes are often considered the primary reason for evaluation. Evaluation can encompass all aspects of programming.

Finally, the definition identifies three main purposes of program evaluation: judgment, improvement, and informing decisions. Judgment is perhaps the purpose most commonly associated with evaluation and relates to outcomes. It answers the questions: Does the program work? Should it be continued? Evaluation also supports program improvement: results can be used to figure out what is working in a program and what could work better. Finally, evaluation can help inform decisions about programming, services, and funding at all levels of a program.

Stakeholders. One of the most important aspects of evaluation is its stakeholders: anyone who has a stake in the evaluation or who may be affected by its results. It is important to identify stakeholders early in the evaluation process and to include them as much as possible throughout. Traditional stakeholders include decision-makers and policymakers, but staff, other members of the community, and the clients themselves are just as important.

Finally, evaluation is a long-term process that is best used throughout the life of the program, not just at the end. This website is intended to help programs build their capacity for evaluation activities and evaluation use.

2. Standards and Guiding Principles for Evaluation

The American Evaluation Association has developed a set of Guiding Principles for Evaluators that provide ethical guidance for the profession. The standards for educational program evaluation mentioned above are another important resource. Developing a set of evaluation guiding principles or standards for an individual program is one way to begin demystifying evaluation, and it can also help guide evaluation efforts. For the purposes of parenting education evaluation, a number of guiding principles are offered below as a starting point. They are adapted from guiding principles established for Early Childhood Family Education programs (Mueller, 1996) and from the W. K. Kellogg Foundation Evaluation Handbook. Individual programs may want to adapt these guiding principles for their own use.

  1. Evaluation is a tool to strengthen programs. When designed, implemented, and used in intentional ways, evaluation can enhance and strengthen programs in a variety of ways.
  2. Evaluation should be flexible. Even the best-designed evaluation will need to be adjusted to meet the changing needs or context of the program, and evaluation designs should allow for these changes of course along the way.
  3. Multiple methods should be used to gather information. Different evaluation questions require different types of data, so multiple methods of data collection should be used, and decisions about what data to collect should be based on the questions the program wants to answer.
  4. Evaluation should be focused on needs and use. Evaluation should provide information that will be used by the program, whether for program improvement, for informing decisions, or for some other purpose. It should address the real information needs of the program.
  5. Evaluation is a participatory process. Everyone can have a role in evaluation, from the program participants to the funders. Evaluation can and should involve as many stakeholders as possible throughout the entire process, from deciding the purpose of the evaluation to collecting data to making decisions based on the data.
  6. Evaluation is sensitive to staff and participants and supports diversity. Although evaluation is participatory in nature, it should not overburden staff or participants. Team members should be adequately compensated, trained, and supported. Diverse perspectives and backgrounds should be welcomed and included throughout the process.
  7. Evaluation is an ongoing process. Evaluation is more than simply an event that happens at the end of a program or of a year. Ideally, it is an integral part of a program and is incorporated into the program from the beginning. It is a management and learning tool that not only provides information about outcomes, but can inform the design and implementation of a program as well.
  8. Evaluation can be used to prove and improve. There are two main types of evaluation: summative and formative. Summative evaluation is typically conducted at the end of a project or program, often for accountability purposes. Formative evaluation focuses on program improvement and is often conducted for program developers and implementers. Evaluation is often associated primarily with the summative type, seen as a means to prove what works and to provide accountability. However, evaluation that provides information to improve programs is just as important as evaluation focused on outcomes.

3. Logic Models as a Helpful Tool for Evaluation

Evaluation can be used to prove and improve. Although programs operate within a political context of accountability and the need to demonstrate outcomes, it is equally important to gather information that will help improve programs. This is particularly true for parenting education, because it is essential that programs understand what works best for whom under what circumstances. The challenge becomes balancing the need for outcome data with information that is useful for developing and implementing programs to best serve the changing needs of parents and families. Logic models are one way to do this.

What are Logic Models?

A logic model helps describe to staff, participants, and other stakeholders how a program expects to create change. It is essentially a roadmap of the program, showing how the program is expected to work (W. K. Kellogg Foundation, 1998).
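
For example, a simplified logic model for a hypothetical parenting workshop series (the program details below are invented for illustration, following the basic inputs-to-outcomes chain used in the W. K. Kellogg Foundation guides) might look like this:

  • Inputs: funding, trained parent educators, a curriculum, and meeting space
  • Activities: weekly parenting classes and take-home practice exercises
  • Outputs: number of classes held and number of parents attending
  • Short-term outcomes: parents gain knowledge of child development and positive discipline
  • Intermediate outcomes: parents apply new communication and discipline strategies at home
  • Long-term outcomes: stronger parent-child relationships and improved child well-being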

Logic models come in many forms, but two main types focus on either the outcomes and activities of a program or the theories underlying it.

  • Outcome and/or Activities Logic Models. Outcome logic models are particularly well suited for programs like parenting education that focus on long-term outcomes that are hard to measure. They can also show how short-term outcomes lead to intermediate and long-term outcomes. Often, these logic models include the activities of a program and show how those activities are linked together to form a complete picture of program implementation (W. K. Kellogg Foundation, 1998).
  • Theory Logic Models. Logic models can also be based on the theoretical underpinnings of a program, or a program’s theory of change. All programs are based on some theory or combination of theories; parenting education programs, for example, draw on family systems theory, child development theories, adult learning theories, and others. These theories provide the rationale for program activities and the keys to understanding why certain activities are expected to lead to certain outcomes (Patton, 1997; W. K. Kellogg Foundation, 2004).

It is important to note that logic model development is a participatory process; the best logic models are developed with input from a wide variety of stakeholders. Patton (1997) and the W. K. Kellogg Foundation Logic Model Development Guide (2004) are recommended for further reading about logic models and how to construct a logic model for an individual program.

Why Create a Logic Model?

Logic models have a number of uses, including clarifying program implementation and informing evaluation activities (Patton, 1997; W. K. Kellogg Foundation, 2004).

Clarifying program implementation

  • Logic models can facilitate thinking, planning, and communicating about program objectives and actual accomplishments.
  • Logic models can clarify program design. They show the links between activities and outcomes and can help bring intentionality to program activities. Often, creating a logic model can lead to modifications and improvements in the program.
  • Logic models can also promote communication by helping staff explain and demonstrate the complexity of a program and how its activities are linked to outcomes.
  • Replication of programs can also be enhanced by developing logic models.

Informing evaluation activities

  • In program evaluation, logic models can document interim outcomes, inform evaluation decisions, and chart complex outcomes.
  • Logic models show short-term and intermediate outcomes, which are often easier to evaluate than long-term outcomes. Because a logic model shows how short-term and intermediate outcomes are linked to long-term outcomes, it can provide evidence that a program is on track toward achieving its long-term outcomes, even if those outcomes are not evaluated directly.
  • Logic models also inform evaluation decisions. They provide a systematic way of deciding what to evaluate, and how, in order to see whether the program’s assumptions, theories, and activities are working as expected.
  • Logic models also provide a way to measure each set of events to see what happens and what works for whom under what circumstances.

4. Designing an Evaluation

There are four main steps in designing an evaluation: determining the purpose (or intended use), developing questions, creating a budget, and choosing the data collection methods (W. K. Kellogg Foundation, 1998). These steps need not be done in this exact order, but it is often best to determine the purpose of the evaluation first; a brief illustration pulling the four steps together follows the list below. Throughout the evaluation, stakeholders should be involved, and decisions should be based on the information needs and intended use of the evaluation. Evaluation should be incorporated into all aspects of a program, at all stages of the program’s development. Developing a program’s capacity for evaluation is an ongoing, continuous process.

  • Determine the purpose. The purpose of the evaluation is perhaps one of the most important decisions to make. When thinking about the purpose:
    • Involve stakeholders. Who are the stakeholders, and what kinds of information do they want?
    • Consider use. How will the evaluation data be used? What decisions will be informed by the evaluation? Will the evaluation data be used to improve the program? Is the program trying to determine whether or not its goals were achieved? Is the evaluation primarily for accountability purposes?
    • As mentioned above, a logic model [link to logic model section of webpage] can be extremely helpful in determining the purpose of the evaluation. The Five-Tiered Approach to Evaluation, discussed in the next section, can also be useful for this.
  • Brainstorm questions. After the purpose is determined, begin to think about the specific questions the evaluation should address. These questions stem from the purpose of the evaluation and are linked to the goals and objectives of the program. Brainstorm questions, generate concerns, and then select specific questions based on the evaluation purpose and the specific program context (Worthen, Sanders, & Fitzpatrick, 1997).
  • Create a budget. Evaluation is often thought of as an added expense that distracts from the program’s primary purpose of serving participants. However, evaluation can be an investment that pays large dividends in enhancing service delivery and ensuring outcomes. Evaluators generally recommend that 5-10 percent of a program’s budget be set aside for evaluation activities. In an ideal world, evaluation would comprise 10 percent of a program budget; for parenting education programs, however, 5-7 percent is more realistic. As a purely illustrative calculation, a program with a $200,000 annual budget would set aside roughly $10,000 to $14,000 at the 5-7 percent level. It is important to remember that evaluation can be good, it can be cheap, and it can be quick; it cannot be all three. This is where understanding the evaluation’s purpose can help determine what trade-offs are acceptable.
  • Determine the data collection methods. The final component of an evaluation design is selecting data collection methods to answer the evaluation questions. Letting the questions drive method selection helps ensure that the information gathered will be useful to the program and to stakeholders. Programs often jump directly to this step, but evaluation will be most helpful if the purpose and questions come first.
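
To pull these steps together, here is a brief sketch for a hypothetical parenting class (all of the details below are invented for illustration):

  • Purpose: improve the class from session to session and report outcomes to a funder
  • Questions: Are parents satisfied with the class? Are parents using the strategies taught at home?
  • Budget: roughly 5-7 percent of the program budget, as suggested above
  • Methods: end-of-session satisfaction surveys, a short pre/post questionnaire on parenting practices, and a few follow-up interviews with parents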