Large language models for biomedicine: foundations, opportunities, challenges, and best practices.
Generative large language models (LLMs) are a class of neural networks built on the transformer architecture. LLMs have successfully leveraged a combination of increased parameter counts, improvements in computational efficiency, and large pre-training datasets to perform a wide spectrum of natural language processing (NLP) tasks. Prompting with a few examples (few-shot) or no examples (zero-shot) has enabled LLMs to achieve state-of-the-art performance in a broad range of NLP applications [...]
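To make the few-shot vs. zero-shot distinction mentioned above concrete, here is a minimal sketch of how such prompts are typically assembled. The task, function name, and example clinical snippets are hypothetical illustrations, and no specific LLM API is assumed:

```python
def build_prompt(instruction, query, examples=()):
    """Assemble a prompt string: zero-shot if `examples` is empty,
    few-shot if input/output demonstration pairs are supplied."""
    parts = [instruction]
    for inp, out in examples:  # few-shot demonstrations, if any
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # the actual query to complete
    return "\n\n".join(parts)

# Zero-shot: the model sees only the instruction and the query.
zero_shot = build_prompt(
    "Extract the medication name from the clinical note.",
    "Patient started on 10 mg lisinopril daily.")

# Few-shot: two demonstrations precede the same query.
few_shot = build_prompt(
    "Extract the medication name from the clinical note.",
    "Patient started on 10 mg lisinopril daily.",
    examples=[("Continue metformin 500 mg BID.", "metformin"),
              ("Aspirin 81 mg held before surgery.", "aspirin")])
```

The only difference between the two settings is whether demonstration pairs are included in the prompt; the model's weights are not updated in either case.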
Author(s): Sahoo, Satya S, Plasek, Joseph M, Xu, Hua, Uzuner, Özlem, Cohen, Trevor, Yetisgen, Meliha, Liu, Hongfang, Meystre, Stéphane, Wang, Yanshan
DOI: 10.1093/jamia/ocae074