This past week I sat down with Eileen Klemm to talk about evaluation and Check & Connect. I’m the director of The Evaluation Group at the Institute on Community Integration, University of Minnesota (also the headquarters for Check & Connect), and I’ve conducted a number of evaluations with schools, districts, and other organizations that have used the Check & Connect model. In addition to being one of the Check & Connect trainers, Eileen is also in a graduate program in program evaluation.
Program evaluation is so highly valued within Check & Connect that it’s included as one of the necessary steps for a well-implemented program (Christenson, Stout, & Pohl, 2012). However, evaluations can be conducted for many different purposes or audiences. I’ve noticed that some of my clients are becoming overwhelmed by data reporting and evaluation requests from local and federal agencies as well as program funders. How can you approach these activities so that evaluation can best serve you while strengthening your Check & Connect program? In this blog, the first in a planned series on program evaluation, we’d like to provide an overview of common evaluation practices, some helpful evaluation resources, and a bit of inspiration.
Motivation for Evaluation
While there may be many reasons for conducting a program evaluation—such as assessing fidelity of implementation, determining program outcomes, and providing accountability to program funders—the best evaluations begin with a genuine interest and engagement in the process of inquiry. Evaluations are far more useful and productive if the orienting questions are “What’s really happening in our program? What’s working well and what’s not? What can we do to improve?” rather than “How can we prove we met all of our outcomes? What do we need to report?”
From this approach based on curiosity and the desire to learn and improve, the first step in evaluating your implementation of Check & Connect is to identify the key questions that are most important to you and your program. This step is pivotal because it provides the structure, or map, for your evaluation. Your key questions should fit your stage of implementation (i.e., don’t try to answer a long-term outcome question if you are just beginning) and be answerable (i.e., you can reasonably access the data and information needed to answer them).
The next step is to determine the data or information you will need to identify or collect in order to answer those questions. This step can take a bit of practice: it is easy to get off track, or to collect too much or too little data to clearly answer the evaluation question at hand. For both of these steps, I’ve found it helpful to read other program evaluation reports for ideas. It’s also important to build in adequate staff time, or employ outside assistance, for the data collection and analysis steps of evaluation. Many programs I work with admit that they have plenty of data but no time to make sense of it, which is almost worse than having no data at all!
Evaluation and the Learning Organization
Finally, once the data have been collected and analyzed, take time to discuss the evaluation findings with staff and other key stakeholders. The best evaluations are those actually used by staff to make program improvements. In this way, your program evaluation is quite similar to how Check & Connect mentors use data with students to inspire change. In a recent New York Times article, “Secret Ingredient for Success,” the authors observe that this sort of learning and self-examination, which is common in successful people and organizations, “requires that we honestly challenge our beliefs and summon the courage to act on that information, which may lead to fresh ways of thinking about our lives and our goals.”
In future posts we will take a deeper look at each of the individual evaluation steps—identifying evaluation questions/the process of inquiry, determining the best data sources, analyzing the data, sharing findings with staff and key stakeholders/using information for change—and different approaches to evaluating your Check & Connect program so that it becomes the secret ingredient for your—and your students’—success. In the meantime, check out the evaluation resources listed below, and please share with us: What questions have guided the evaluation of Check & Connect at your site?
The American Evaluation Association web site has a wealth of resources. The resources in the “How to” section of the Public evaluation eLibrary may be especially helpful.
Christenson, S. L., Stout, K. E., & Pohl, A. J. (2012). Check & Connect: Implementing with fidelity. Minneapolis, MN: University of Minnesota, Institute on Community Integration.
About the Authors: This article was co-authored by Mary McEathron, Ph.D., and Eileen Klemm, M.A. Dr. McEathron is the director of The Evaluation Group at the Institute on Community Integration and has extensive training and experience in evaluation design and methods. Her research interests focus on evaluation use and evaluation’s role in bridging academic and applied worlds. Ms. Klemm is the project coordinator for Check & Connect, presenting training nationally, facilitating the Check & Connect Coordinators’ Community of Practice, and providing leadership in the overall training and support of the Check & Connect model.
© 2013 Regents of the University of Minnesota. All rights reserved.
The University of Minnesota is an equal opportunity educator and employer.