Informaticist or Informatician? A Literary Perspective.
Author(s): Bain, Andrew P, McDonald, Samuel A, Lehmann, Christoph U, Turer, Robert W
DOI: 10.1055/s-0044-1790553
To improve classification of firearm injury encounters (new vs follow-up) using machine learning (ML) and to compare our ML model to other common approaches.
Author(s): Ancona, Rachel M, Cooper, Benjamin P, Foraker, Randi, Kaser, Taylor, Adeoye, Opeolu, Mueller, Kristen L
DOI: 10.1093/jamia/ocae173
Accurately identifying clinical phenotypes from Electronic Health Records (EHRs) provides additional insights into patients' health, especially when such information is unavailable in structured data. This study evaluates the application of OpenAI's Generative Pre-trained Transformer (GPT)-4 model to identify clinical phenotypes from EHR text in non-small cell lung cancer (NSCLC) patients. The goal was to identify disease stages, treatments, and progression using GPT-4 and to compare its performance against GPT-3.5-turbo, Flan-T5-xl, Flan-T5-xxl [...]
Author(s): Bhattarai, Kriti, Oh, Inez Y, Sierra, Jonathan Moran, Tang, Jonathan, Payne, Philip R O, Abrams, Zach, Lai, Albert M
DOI: 10.1093/jamiaopen/ooae060
The aim of this study was to assess the completeness and readability of generative pre-trained transformer-4 (GPT-4)-generated discharge instructions at prespecified reading levels for common pediatric emergency room complaints.
Author(s): Gimeno, Alex, Krause, Kevin, D'Souza, Starina, Walsh, Colin G
DOI: 10.1093/jamiaopen/ooae050
Although supervised machine learning is popular for information extraction from clinical notes, creating large annotated datasets requires extensive domain expertise and is time-consuming. Meanwhile, large language models (LLMs) have demonstrated promising transfer learning capability. In this study, we explored whether recent LLMs could reduce the need for large-scale data annotations.
Author(s): Sushil, Madhumita, Zack, Travis, Mandair, Divneet, Zheng, Zhiwei, Wali, Ahmed, Yu, Yan-Ning, Quan, Yuwei, Lituiev, Dmytro, Butte, Atul J
DOI: 10.1093/jamia/ocae146
To evaluate the efficacy of ChatGPT 4 (GPT-4) in delivering genetic information about BRCA1, HFE, and MLH1, building on previous findings with ChatGPT 3.5 (GPT-3.5), and to assess the utility, limitations, and ethical implications of using ChatGPT in medical settings.
Author(s): McGrath, Scott P, Kozel, Beth A, Gracefo, Sara, Sutherland, Nykole, Danford, Christopher J, Walton, Nephi
DOI: 10.1093/jamia/ocae128
To allow health professionals to monitor and anticipate demands for emergency care in the Île-de-France region of France.
Author(s): Hanf, Matthieu, Salle, Léopoldine, Mas, Charline, Ghribi, Saif Eddine, Huitorel, Mathias, Mebarki, Nabia, Larid, Sonia, Mazué, Jane-Lore, Wargon, Mathias
DOI: 10.1093/jamia/ocae151
Firearm violence constitutes a public health crisis in the United States, but the comprehensive data infrastructure needed to study this problem is lacking. To address this challenge, we used natural language processing (NLP) to classify court record documents from alleged violent crimes as firearm-related or non-firearm-related.
Author(s): Kafka, Julie M, Schleimer, Julia P, Toomet, Ott, Chen, Kaidi, Ellyson, Alice, Rowhani-Rahbar, Ali
DOI: 10.1093/jamia/ocae082
To examine whether comfort with the use of ChatGPT in society differs from comfort with other societal uses of AI, and to identify whether this comfort and other patient characteristics, such as trust, privacy concerns, respect, and tech-savviness, are associated with the expected benefit of using ChatGPT to improve health.
Author(s): Platt, Jodyn, Nong, Paige, Smiddy, Renée, Hamasha, Reema, Carmona Clavijo, Gloria, Richardson, Joshua, Kardia, Sharon L R
DOI: 10.1093/jamia/ocae164
To assess the performance of large language models (LLMs) for zero-shot disambiguation of acronyms in clinical narratives.
Author(s): Kugic, Amila, Schulz, Stefan, Kreuzthaler, Markus
DOI: 10.1093/jamia/ocae157