Posted by randerson on February 25th, 2023
Posted in: Blog, Thought Leadership
Artificial intelligence technology ChatGPT has been making waves for its ability to generate human-like responses to a variety of prompts.
The software sits at the center of both practical and ethical debates, one of which was recently held at Northwestern University Feinberg School of Medicine. “Let’s ChatGPT,” a hybrid discussion panel attended by nearly 1,000 people on campus and online, addressed these concerns as a collaborative effort between the Institute for Public Health and Medicine (IPHAM) and Northwestern’s Institute for Augmented Intelligence in Medicine (I.AIM).
The event featured a wide range of perspectives across the learning health system including Catherine Gao, MD; Mohammad Hosseini, PhD; Alexandre Carvalho, MD; Abel Kho, MD; David Liebovitz, MD; Kristi Holmes, PhD; Ngan MacDonald; and Faraz Ahmad, MD, MS. Watch the event here.
The event was hosted by Kho, director of I.AIM, director of the IPHAM Center for Health Information Partnerships, and professor of Medicine (General Internal Medicine) and Preventive Medicine (Health and Biomedical Informatics), and by Ngan MacDonald, chief of Data Operations at I.AIM.
The event was moderated by Yuan Luo, PhD, director of the Center for Collaborative AI in Healthcare (I.AIM), chief AI officer at the Northwestern University Clinical and Translational Sciences (NUCATS) Institute, and associate professor of Preventive Medicine and Pediatrics. Luo and his team provide expert guidance to the NNLM National Evaluation Center on the use of Natural Language Processing (NLP) to analyze large quantities of unstructured text.
Mohammad Hosseini, PhD, postdoctoral researcher in the Department of Preventive Medicine and ethicist for the NNLM National Evaluation Center, believes ChatGPT can serve as a beneficial partner in the compilation and distribution of experimental research, provided its use is disclosed with full clarity.
Hosseini is an associate editor for the journal Accountability in Research, which recently published an editorial, “Using AI to write scholarly publications.” In this editorial, Hosseini and colleagues provide guidance to authors on the use of AI technology such as ChatGPT in the communication of their research. “It is better to err on the side of transparency and encourage disclosure. We ask researchers to be more transparent about the context, saying what part of the text is generated, because they are ultimately accountable for the content,” says Hosseini.
Catherine Gao, MD, instructor of Medicine (Pulmonary and Critical Care), recognizes that the facilitation of further discussion is critical to navigating ChatGPT effectively.
“It is still an early, experimental technology, and I think it will take more discussions like this one, more data, and increased evaluation before medical and educational communities can make a thoughtful decision on its use in the future,” says Gao.
NEC Director Kristi Holmes, PhD, professor of Preventive Medicine, chief of Knowledge Management at I.AIM, and director of Galter Health Sciences Library, believes that ChatGPT usage reflects evolving issues related to information literacy.
“We need to carefully consider how a person finds, accesses, processes, and makes use of this new source of information. Our goal at the library is to determine how we help support and nurture good information use,” says Holmes. “We are developing training and resources to support our campus in understanding new tools so that they can be applied responsibly and in meaningful ways.”
Written by Alex Miranda