My whole career has been an evaluation project juggling act. At times, I was the only evaluation consultant for an entire university campus. Logic models were a game changer for me. When a client showed up in my office three months after our initial consult, I could pull out our logic model and we could catch up on the plan in less than a minute.
Now, on project management teams, I am the self-appointed “Logic Model Queen.” I’m sure my teammates roll their eyes behind my back when I start drawing the iconic logic model structure on the conference room whiteboard. That’s okay. I always win them back by the end of the project, because logic models are excellent project management tools.
Creating the Logic Model
Let’s review how a project team develops a logic model. Using the template shown below, the group first establishes desired outcomes (results) for the project. Team members then work “backwards” to check whether the planned activities could logically lead to those outcomes. In the Resources column, they list everything needed to conduct the activities. Once the columns are filled, the team members can reflect on the assumptions underlying the plan and identify known challenges. From there, the team establishes methods to assess program implementation (process evaluation) and results (outcomes evaluation).
Centerpiece of Project Management
Once the logic model is drafted, don’t bury it in some dusty folder in cyberspace. Your team should review it at each meeting. Lack of meeting time is no excuse: a logic model review can be done in five minutes, preferably at the top of the meeting. The general question the team should consider is this: Does our logic model still reflect reality? If the answer is basically yes, the team can then use the following logic-model-inspired questions to review the project more thoroughly.
Change Shows Learning
So let’s say your team considers the question Does our logic model still reflect reality? and the answer is “no.” Now what do you do? First, be comforted by the fact that this is not an unusual occurrence. Good project management almost always leads to change. Here are some reasons that programs must be adapted midstream:
Your expectations were too high. Often, in the middle of programs, we realize the change we hoped for is going to take more time than expected. It is not unusual to discover “things take longer than they do.” (That’s a direct quote from one of my first evaluator-bosses.) In such cases, the most comforting words in the English language are pilot project. Let go of your expectations, revise your outcomes and, if necessary, alter your evaluation methods to capture evidence of modified outcomes.
You learn new things about the community you’re serving. Let’s face it, needs and community assessments give you only a tiny piece of the information you need to run a successful project. Only by working directly with participants and stakeholders do you truly learn about their desires, needs, and values. As the great poet Maya Angelou said, do the best you can, and when you know better, do better. If you can, adjust your program activities and outcomes as much as possible to better meet your stakeholders’ wants or needs. Otherwise, record what you learn so you know what to do differently next time.
Unexpected changes arise for you, your community, or your partners. We all work in complex systems that can change with little notice, disrupting even the best-laid plans. Regular discussion of your logic model will help your team assess and respond to shifting terrain, and those discussions will document the challenges for final reports.
I have written before that I think logic models are the duct tape of the evaluation world. I hope this post convinced you to use them early and often during project management discussions. You also might like reading about how to use them for proposal writing, conducting program reality checks, or even for planning your child’s birthday party.
If you want training on logic models, you might be interested in the half-day workshop Logic Models for Program Evaluation and Planning, being offered at the American Evaluation Association’s 2017 conference in Washington, DC. (It’s number 64 on this list of AEA 2017 workshops.) You do not have to register for the conference to enroll in the workshop.