The MidContinental Messenger

Benefits of Participating in a Mobile App Evaluation Project


The NN/LM MCR became aware that the cost of mobile apps was a barrier keeping our members from trying apps that could improve work performance or efficiency, or provide an easier way to locate and share health information. To address this issue, qualified Full Network member applicants were provided with a $50 app purchase card in exchange for downloading at least four apps and evaluating each one using a Mobile App Evaluation Form. Two cohorts of librarians participated in this project from May 2014 to April 2016 and submitted a total of 122 evaluations.

Participants were required to work at an NN/LM MCR Full Network member institution and be a professional librarian. The application form allowed potential participants without a Master’s-level degree in librarianship to explain why they should be considered.

Participants agreed to:

  • allocate the time required to experiment with at least four appropriate for-fee mobile apps,
  • fully report on those apps using the online App Evaluation Form,
  • and submit their reports by quarterly deadlines.

Applications were reviewed for eligibility, and approved applicants were sent either an iTunes (for iOS devices such as iPhones and iPads) or a Google Play (for Android and Windows devices) purchase card. In our first cohort (Year One), 13 members were selected to participate and were provided with purchase cards totaling $650. The second cohort (Year Two) was expanded to 19 participants, with $950 distributed via purchase cards. Both cohorts included a diverse mix of academic and hospital librarians, with at least one participant from each state in the region.

Chart showing participants

In terms of app selection, participants were advised that apps must be appropriate for their setting and must cost money, either for the initial purchase or for in-app purchases. The criteria were purposefully left broad in the hope that members would be more likely to try a variety of apps useful in their particular work environments.

The app evaluation criteria used for the project were a modified version of the app evaluation worksheet developed by faculty at the Spencer S. Eccles Health Sciences Library for their Topics in Pediatrics course (http://campusguides.lib.utah.edu/content.php?pid=105887). Former MCR Technology Coordinator Rachel Vukas adapted the worksheet into a web form on the SurveyMonkey platform. The form asked participants to provide basic app information (name, cost, platform, etc.) and a more detailed evaluation in the areas of credibility, purpose, bias, currency, and organization.
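
To give a concrete picture of what each submission captured, below is a minimal sketch of one evaluation record as a simple data structure. The field names are our own illustrative assumptions based on the areas the form covered, not the exact SurveyMonkey question wording.

    # Illustrative sketch of one app evaluation record; field names are
    # assumptions based on the areas described above, not the actual form.
    from dataclasses import dataclass

    @dataclass
    class AppEvaluation:
        name: str            # basic app information
        cost_usd: float
        platform: str        # e.g. "iOS" or "Android"
        credibility: str     # detailed evaluation areas
        purpose: str
        bias: str
        currency: str        # how recently the app was updated
        organization: str
        comments: str = ""   # optional open-ended notes

    example = AppEvaluation(
        name="Sample Medical Calculator",
        cost_usd=4.99,
        platform="iOS",
        credibility="Published by a known medical publisher",
        purpose="Bedside dose calculations",
        bias="No apparent sponsorship bias",
        currency="Updated within the last year",
        organization="Clear, easy-to-navigate layout",
    )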

After each quarterly deadline, summaries of the reviews were shared with the region in an article in the MCR’s Plains to Peaks Post newsletter. These posts were well received: in the 2016 MCR Spring Questionnaire, 23 out of 28 readers indicated that these reports increased their awareness of mobile apps. Apps were ranked on a scale that ranged from Excellent to Not Good, with the majority of apps reviewed falling into the Excellent or Very Good categories. In both cohorts, about two-thirds of the apps were focused on health or medicine and the remaining third were productivity apps. The apps covered a variety of topics, including password management, diagnostic tools, patient education, medical calculators, PDF viewers, and much more.

Due to the use of purchase cards, we were only able to collect app cost data from participant reports. In Year One, participants reported spending a total of $305 on 46 apps, an average of $6.63 per app. The highest-cost app was $24, but the average was brought down by several free apps that were reviewed despite the project specifications. There was $345 remaining on purchase cards in Year One. The Year Two cohort reported spending $625 on 76 apps, bringing the Year Two average cost to $8.22 per app. The highest-cost app was a whopping $45, but again the lowest-cost app was free. Based on this, we calculated the remaining Year Two funds to be $325. Participants were able to find excellent apps for reasonable prices.
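
For readers who want to check the figures, the averages and leftover balances above come from straightforward arithmetic; here is a quick sketch using the numbers reported in this article.

    # Reproduces the cost arithmetic above using figures from this article.
    def summarize_year(card_total, total_spent, num_apps):
        average_cost = round(total_spent / num_apps, 2)
        remaining = card_total - total_spent
        return average_cost, remaining

    print(summarize_year(650, 305, 46))  # Year One -> (6.63, 345)
    print(summarize_year(950, 625, 76))  # Year Two -> (8.22, 325)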

After each cohort submitted their final reviews, they were asked to complete a brief self-evaluation using SurveyMonkey. In Year One, participants were asked to respond to the following prompt: “Participating in this project benefited my program” on a scale of Very Positively to Not Positively. 15% of participants indicated that this benefited them very positively and 54% indicated it was a positive benefit. While no one indicated that this project was not positive for them, 31% of the participants did not respond.

In Year Two, the self-evaluation was modified and consisted of two questions using a 5-point Likert scale. When asked their level of agreement with the statement “My involvement in this project benefited or enhanced my professional development,” 26% strongly agreed, 47% agreed, 22% neither agreed nor disagreed, and 5% disagreed. When asked to indicate their agreement with the statement “I now feel more confident in my ability to evaluate mobile apps,” 26% strongly agreed, 58% agreed, 11% neither agreed nor disagreed, and 5% disagreed.

We were pleased with the overall rate of participation. Most reviews were submitted by the established deadlines (73% in Year One / 74% in Year Two). A smaller number of late reviews were submitted, usually after a request for an extension (21% in Year One / 26% in Year Two). Two participants dropped out during the final quarter of Year One, so 6% of the expected reviews were not received during that year.

After the project’s completion, we created a rubric to help determine the quality of the evaluations submitted. The rubric was based on elements we would have liked to see in an ideal completed evaluation. The evaluation form had four open-answer comment fields that asked for more information on each section completed. These sections were not required, but the information in these fields provided deeper insight and richer information about the apps reviewed. The remaining questions were all required but gave reviewers the option to select “Information Not Available” as an answer. We noted that a large number of these responses were submitted. While many were probably correct, as app information is not always readily available, there were a few questions where that response did not make much sense, such as “Are there ads?” or “When was the app last updated?” Based on this, we downgraded evaluations each time a comment field was left blank or an “Information Not Available” response was used for information that should have been easily accessible. There were a total of seven possible deductions, and evaluations were graded on a range from A (0 deductions) to G (6 deductions). As you can see in the chart below, the majority of evaluations were of exceptionally high quality.

Chart showing evaluation quality
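
To make the grading scheme concrete, here is a minimal sketch of how a deduction count maps onto the A-through-G scale described above; the way deductions are counted here is simplified and purely illustrative.

    # Maps a deduction count to a letter grade, A (0 deductions) through
    # G (6 deductions), per the rubric described above; illustrative only.
    def letter_grade(deductions):
        deductions = max(0, min(deductions, 6))  # clamp to observed range
        return chr(ord("A") + deductions)

    print(letter_grade(0))  # 'A' - an ideal evaluation
    print(letter_grade(3))  # 'D'
    print(letter_grade(6))  # 'G' - deductions on most rubric elements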

Running this project was a fun endeavor and many of the participants seemed to enjoy the work involved. The variety of apps reviewed was a welcome surprise and we were pleased to see many unanticipated app types and subjects included. Overall, we were happy with the evaluations and the amount of effort our participants put forward in this project.

Of course, there were a couple of challenges in running a project like this. We had projected that our participants would select higher-cost apps, and were disconcerted by the low average app cost and the large amount of leftover funds. While working with the participants was mostly a pleasure, it did take some effort to stay on top of them and ensure they were meeting deadlines and submitting quality evaluations for appropriate apps. In hindsight, we feel that most of this difficulty was due to the evaluation form itself. The form was too comprehensive for the information we wanted to gather, which made completing reviews a time-consuming process.

Were we to run this project again or be approached for advice from someone running a similar project, we would make the following recommendations. First, we would either lower the amount of funds provided on purchase cards or encourage participants to select higher cost apps. Second, we would revise the evaluation form to make it shorter and more concise and require responses to open-ended comment boxes. Finally, we would offer more guidance in app selection, as this was a time-consuming process for both the participants and project managers.

We hope reading about this project gives you some insight into the behind-the-scenes processes and perhaps inspires you to run your own app evaluation group. Feel free to reach out to us if you have any questions or would like more information.

– Alicia Lillich, Kansas/Technology Coordinator

– John Bramble, Utah/Research Enterprise Coordinator

The MidContinental Messenger is published quarterly by the National Network of Libraries of Medicine MidContinental Region

Spencer S. Eccles Health Sciences Library
University of Utah
10 North 1900 East, Building 589
Salt Lake City, Utah 84112-5890

Editor: Suzanne Sawyer, Project Coordinator
(801) 587-3487
suzanne.sawyer@utah.edu

This project has been funded in whole or in part with Federal funds from the Department of Health and Human Services, National Institutes of Health, National Library of Medicine, under cooperative agreement number UG4LM012344 with the University of Utah Spencer S. Eccles Health Sciences Library.

NNLM and NATIONAL NETWORK OF LIBRARIES OF MEDICINE are service marks of the US Department of Health and Human Services.