Posted by nnlmneo on February 10th, 2017
Are you an introvert? Then I have an evaluation method for you: document review. You can usually do this method from the comfort of your own office. No scary interactions with strangers.
Truth is, my use of existing data in evaluation seldom rises to the definition of true document review. I usually read through relevant documents to understand a program’s history or context. However, a recent blog post by Linda Cabral in the AEA365 blog reminded me that document review is a real evaluation method that is conducted systematically. Cabral provides tips and a resource for doing document review correctly. For today’s post, I decided to plan a document review that the NEO might conduct someday, describing how I would use Cabral’s guidelines. I also checked out the CDC’s Evaluation Brief, Data Collection Methods for Evaluation: Document Review, which Cabral recommended.
Here’s some project background. The NEO leads and supports evaluation efforts of the National Network of Libraries of Medicine (NNLM), which promotes access to and use of health information resources developed by the NIH’s National Library of Medicine. Eight health sciences libraries (called Regional Medical Libraries or RMLs) manage a program in which they provide modest amounts of funding to other organizations to conduct health information outreach in their regions. The organizations receiving these funds (known as subawardees) write proposals that include brief descriptions (1-2 paragraphs) about their projects. These descriptions, along with other information about the subaward projects, are entered into the NLM’s Outreach Projects Database (OPD).
The OPD has a wealth of information, so I need an evaluation question to help me focus my document review. I settle on this question: How do our subawardees collaborate with other organizations to promote NLM products? Partnerships and collaborations are a cornerstone of NNLM. They are the “network” in our name. Yet simply listing the diverse types of organizations involved in our work does not satisfactorily capture the nature of our collaborations. Possibly the subaward program descriptions in our OPD can add depth to our understanding of this aspect of the NNLM.
Now that I’ve identified my primary evaluation question, here’s how I would apply Cabral’s guidelines in the actual study.
Catalogue the information available to you: For my project, I would first review the fields on the OPD’s data entry pages to see what information is entered for each project. I obviously want to use the descriptive paragraphs. However, it helps to peruse the other project details. For example, it might be interesting to see if different types of organization (such as libraries and community-based organizations) form different types of collaborations. This step may cause me to add evaluation questions to my study.
I also would employ some type of sampling, because the OPD contains over 4500 project descriptions from as far back as 2001. It is neither feasible nor necessary to review all of them. There are many sampling choices, both random and purposeful. (Check out this article by Palinkas et al. for purposeful sampling strategies.) I’m most interested in current award projects, so I likely would choose projects conducted in the past 2-3 years.
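To make the sampling step concrete, here is a minimal sketch in Python with pandas. The file name and the project_year column are assumptions for illustration only; the real OPD export will have its own field names.

```python
import pandas as pd

# Load the OPD export (file and column names are hypothetical).
opd = pd.read_excel("opd_export.xlsx")

# Keep only projects from the past few years.
recent = opd[opd["project_year"] >= 2014]

# Draw a simple random sample of the recent projects.
# A purposeful strategy would instead filter rows on substantive criteria.
sample = recent.sample(n=200, random_state=42)
print(f"Sampled {len(sample)} of {len(recent)} recent projects")
```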
Develop a data collection form: A data collection form is the tool that allows you to record abstracted or summarized information from the full documents. Fortunately, the OPD system downloads data into an Excel-readable spreadsheet, which is the basis for my form. I would first delete columns in this spreadsheet that contain information not relevant to my study, such as the mailing address and phone number of the subaward contact person.
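As a sketch of how that pared-down form could be built (the column names below are hypothetical, not the actual OPD fields), the spreadsheet can be trimmed with pandas and given empty columns for reviewers to fill in while abstracting each description:

```python
import pandas as pd

# Column and file names are illustrative; the real OPD export will differ.
opd = pd.read_excel("opd_export.xlsx")

# Drop fields that are not relevant to the evaluation question,
# keeping the descriptive paragraph and the project metadata of interest.
form = opd.drop(columns=["contact_mailing_address", "contact_phone"], errors="ignore")

# Add empty columns the reviewers will complete during the document review.
form["collaboration_type"] = ""
form["reviewer_notes"] = ""

form.to_excel("document_review_form.xlsx", index=False)
```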
Get a co-evaluator: I would volunteer a NEO colleague to partner with me, to increase the objectivity of the analysis. Document review almost always involves coding of qualitative data. If you use qualitative analysis for your study, a partner improves the trustworthiness of conclusions drawn from the data. If you are converting information into quantifiable (countable) data, a co-evaluator allows you to assess and correct human error in your coding process. If you do not have a partner for your entire project, try to find someone who can work with you on a subset of the data so you can calibrate your coding against someone else’s.
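One standard way to check that calibration, though not one named in the post, is an inter-rater agreement statistic such as Cohen's kappa. A minimal sketch, assuming two reviewers have coded the same ten project descriptions (the codes below are made up for illustration):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned to the same ten descriptions by two reviewers.
coder_a = ["partner", "referral", "partner", "none", "partner",
           "referral", "none", "partner", "partner", "referral"]
coder_b = ["partner", "referral", "none", "none", "partner",
           "referral", "none", "partner", "referral", "referral"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement
```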
Ensure consistency among teammates involved in the analysis: “Abstracting data,” for my project, means identifying themes in the project descriptions. Here’s a step-by-step description of the process I would use:
For a more explicit description of coding qualitative data, check out the NEO publication Collecting and Analyzing Evaluation Data. The qualitative analysis methods described starting on page 25 can be applied in qualitative document review.
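Once themes have been coded back into the review form, a quick tally takes only a few lines, and a cross-tabulation against organization type speaks to the earlier question about whether libraries and community-based organizations form different kinds of collaborations. Column names here are again hypothetical:

```python
import pandas as pd

# Read the completed review form; column names are illustrative.
coded = pd.read_excel("document_review_form.xlsx")

# Tally how often each collaboration theme was coded.
print(coded["collaboration_type"].value_counts())

# Cross-tabulate themes against organization type.
print(pd.crosstab(coded["organization_type"], coded["collaboration_type"]))
```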
So, got documents? Now you know how to use them to assess your programs.