Posted by Margot M on May 6th, 2026
Posted in: Communities of Interest
Tags: artificial intelligence, hospital librarians, hospital library advisory group
This is part of a series of blog posts from hospital librarians using AI in their work. This post was submitted by Carly Schanock, Clinical Research and Education Librarian at Harvey Cushing and John Hay Whitney Medical Library (CT).
Yale recently launched a year-long Consensus AI trial and is actively seeking user feedback. For the past few months, I’ve used Consensus to supplement the scholarly databases I rely on for clinical research and systematic reviews.
User-friendly features include rephrasing questions for additional searching, providing an evidence summary, and exploring follow-up questions. The split-screen layout displays your question and evidence summary on the left, while the right side shows each result’s key takeaway, citation metrics, citation information, and flags for highly cited or rigorous journal articles.
From there, you can filter results by journal rank, minimum citation count, methodology, or field of study—including medicine, agriculture, chemistry, education, and sociology. Selecting the “Ask” feature lets you dig deeper into individual papers with targeted follow-up questions.
You can also export results directly as a .RIS file for use with your preferred citation manager. I have built exporting from Consensus AI into the lesson plan of my monthly Zotero class.
Need help getting started? Consensus offers an excellent LibGuide for Academic Research.
I chose two different types of clinical questions: a structured evidence-based question and a qualitative question about clinician well-being. I selected OpenEvidence and UpToDate for comparison because OpenEvidence is another AI platform and UpToDate is a clinical decision-support tool.
OpenEvidence allows up to three free searches per month, but providers with an NPI get unlimited searches. It does not save the prompts you have already searched, so re-running them counts as a new search. UpToDate is a subscription-only platform with no free option.
Many medical libraries provide unlimited access to UpToDate for their users. Consensus has a free tier for individuals that includes Basic Quick Search, Limited Pro Search, three Deep Searches per month, and 10 paper Snapshots per month.
Question 1: “In people with opioid addiction, does the use of Narcan help decrease overdose deaths in street medicine settings?”
Consensus translated my prompt into two targeted searches about naloxone use and community outreach programs for opioid overdose mortality. It provided a summary with in-text citations, including systematic reviews and meta-analyses. The response also includes a consensus bar based on a sample of papers from the results; its categories are yes, possibly, mixed, and no. The summary changes if you expand to “Deep” mode, which examines 50 papers instead of 20, though the overall consensus of “yes” stays the same.
OpenEvidence delivered a detailed, narrative summary with eight citations, including guidelines, systematic reviews, and umbrella reviews. The response was divided into sections such as “individual level survival”, “population-level mortality”, “guidelines support”, and “practical considerations for street medicine”. It provided in-text citations and suggested follow-up questions, as Consensus does.
UpToDate surfaced sections on opioid use disorder treatment overviews and overdose prevention. When I simplified my search to “Narcan street drug overdose,” it returned “prevention and management of side effects in patients receiving opioids for chronic pain” and “Clonidine and related imidazoline poisoning.”
Question 2: “How do you practice self-compassion while interacting with patients and their issues?”
Consensus translated my prompt into two searches: “self-compassion in clinical interactions with patients” and “practicing self-compassion for clinicians dealing with emotional labor, empathy, and patient-related stress.” Running in Pro mode, it examined 20 papers and delivered a detailed, cited summary.
OpenEvidence provided a summary but no citations, which is unusual based on past questions. When I followed up, it told me: “The previous question about self-compassion in clinical practice is a professional development and wellness topic rather than a clinical medical question. I did not search the medical literature…” However, it offered to rerun the search in a way that provided a summary with citations.
UpToDate returned one loosely relevant topic on its results page: “interactions with patients and families in palliative care.” The in-depth overview of this topic also includes graphics, and new updates go through a peer-review process, two things the other tools don’t offer. It links to related topics throughout the overview.
Comparing challenging prompts across these tools is a valuable strategy when database results are sparse, and a great way to build the knowledge needed to educate patrons.
AI Statement of Use: On April 24, 2026, I, Carly Schanock, used Claude Sonnet 4.6 via Yale’s Clarity platform to tighten the language and reduce the use of passive voice. Consensus AI, OpenEvidence, and UpToDate were used to answer prompts entered on April 23, 2026.