
“Five Things I’ve Learned,” from an Evaluation Veteran

Posted by Cindy Olney on March 24th, 2017 | Posted in: Blog


 

Cindy Olney in her home office

Kalyna Durbak, the NEO’s novice evaluator, recently posted the five things she has learned about evaluation since joining our staff. I thought I would steal, er, borrow Kalyna’s “five things” topic and write about the most important lessons I’ve learned in my 25+ years in the evaluation field.

My first experience with program evaluation was in the 1980s, as a graduate assistant in the University of Arizona School of Medicine’s evaluation office. Excel was just emerging as the cool new tool for data crunching. SPSS ran on room-sized mainframes, and punch cards were fading fast from the computing world. Social Security numbers were routinely collected and stored along with other information about our research or evaluation participants. Our desktop computers ran on DOS. The Internet had not even begun wreaking havoc.

Yep, I’m old. The field has evolved over time and the work is more meaningful than ever. Here are five things I know now that I wish I had known when I started.

#1 Evaluation is different from research: Evaluation and research have distinctly different end goals. The aim of research is to add to general knowledge and understanding. Evaluation, on the other hand, is designed to improve the value of something specific (programs, products, personnel, services) and to guide decision-making. Evaluation borrows many techniques from research methodology because those methods are a means to accurate, credible information. But technical accuracy of data means nothing if it cannot be applied to program improvement or decision-making.

#2 Evaluation is not the most important kid in the room: Evaluation, unchecked, can be resource-intensive in both money and time. Every dollar and hour spent on evaluation is a dollar and an hour subtracted from the resources available to produce or enhance a program or service. Project plans should focus first on service or program design and delivery, with proportional funding allocated to evaluation. Evaluation studies should not be judged by the same criteria used for research. Rather, the goal is to collect usable information in the most cost-effective manner possible.

#3 What gets measured gets done: Evaluation is a management tool that’s worth the investment. Project teams are most successful when they begin with the end in mind, and evaluation plans force discussion about desired results (outcomes) early on. (Thank you, Stephen Covey, for helping evaluators advocate for their early involvement in projects.) You must articulate what you want to accomplish before you can measure it. You need a good action plan, logically linked to desired outcomes, before you can design a process assessment. Even if your resources limit you to the most rudimentary of evaluation methods, the mere process of committing outcomes, activities, and measures to paper (in a logic model, please!) allows a team to take one giant step toward program success.

#4 Value is in the eyes of the stakeholders: While research asks “What happened?”, evaluation asks “What happened, how important is it, and, knowing what we know, what do we do?” That’s why an evaluation report that merely collects dust on a shelf is a travesty. The evaluation process is not complete until stakeholders have interpreted the information and contributed their perspectives on how to act on the findings. In essence, I am talking about rendering judgment: what do the findings say about the value of the program? That value judgment should, in turn, inform decisions about the future of the program. While factual findings should be objective, judgments are not. Value is in the eyes of the people invested in the success of your program, aka stakeholders. Assessments of value may vary and even conflict among stakeholder groups. For example, a public library health literacy program has several types of stakeholders. Library users will judge the program based on its usefulness to their lives. City government officials will judge it based on how many taxpayers express satisfaction with it. Public librarians will value the program if it aligns with their library’s mission and brings visibility to their organization. Evaluation is not complete until these multiple perspectives on value are explored and integrated into program decision-making.

#5 Everything I need to know about evaluation reporting I learned in kindergarten: Kindergarten was the first and possibly the last place I got to learn through play. In grad school, I learned to write 25-50 page research and evaluation reports. In my career, I discovered that people read the executive summary (if I was lucky), then stopped. Evaluations are supposed to lead to learning about your programs, but no one thinks there’s anything fun about a 50-page report. Thankfully, evaluators have developed quite a few engaging ways to involve stakeholders in analyzing and using evaluation findings. For example, data dashboards allow stakeholders to interact with data visualizations and answer their own evaluation questions. Data parties provide a social setting to share coffee, snacks, and data interpretations. Innovations in evaluation reporting emerge every year. It’s a great time to be an evaluator! More bling, less writing, and it’s all for the greater good.

So, there you have it: my five things. These five lessons have served me well, and I suspect they will continue to do so until bigger and better evaluation ideas come along. What about you? Share your insights in the comments section below.

This project is funded by the National Library of Medicine (NLM), National Institutes of Health (NIH) under cooperative agreement number UG4LM012343 with the University of Washington.
