Large language models leverage external knowledge to extend clinical insight beyond language boundaries.
Large Language Models (LLMs) such as ChatGPT and Med-PaLM have excelled in various medical question-answering tasks. However, these English-centric models encounter challenges in non-English clinical settings, primarily due to limited clinical knowledge in the respective languages, a consequence of imbalanced training corpora. We systematically evaluate LLMs in the Chinese medical context and develop a novel in-context learning framework to enhance their performance.
Author(s): Jiageng Wu, Xian Wu, Zhaopeng Qiu, Minghui Li, Shixu Lin, Yingying Zhang, Yefeng Zheng, Changzheng Yuan, Jie Yang
DOI: 10.1093/jamia/ocae079