Working Group Webinar Library
2023 Trends in Applied NLP in Healthcare: Large Language Models, No-Code, and Responsible AI
This webinar summarizes current trends and use cases that apply state-of-the-art natural language processing in healthcare systems, pharmaceuticals, and health IT companies. David Talby covers unlocking use cases that become possible with current large language models; enabling medical domain experts to train and tune models in a no-code environment; and testing NLP models across aspects of Responsible AI in preparation for regulatory oversight. Talby also shares free and open-source tools that you can use today in these three areas.

Explaining Prediction Models Using Generative AI in Preventive Healthcare
Multiple studies have already demonstrated the effectiveness of electronic health record data in building Type 2 Diabetes Mellitus (T2DM) screening and prediction models. Through collaboration with family medicine physicians, nurse practitioners, patients, and computer science experts, the team at the University of Maribor aims to bring T2DM prediction models closer to healthcare experts.

Professor or Parrot: Is Medical Knowledge an Emergent Phenomenon of Large Language Models?
Join the Natural Language Processing Working Group for an energetic discussion of the article "Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models" by Tiffany Kung et al.

Achieving Equitable Impact with Biomedical NLP: Needs for Translational Research
New advances are constantly being made in biomedical and health natural language processing, but very few of them translate into measurable impact in medicine. When new AI and NLP methodologies are used in practice, they often magnify social biases or exhibit other undesirable behaviors. These failures of translation and AI-related injustices stem from a common source.

Towards Effective and Efficient Interpretation of Deep Neural Networks: Algorithms and Applications
Dr. Xia (Ben) Hu presents a systematic framework, from both modeling and application perspectives, for generating DNN interpretability, aiming to address the main technical challenges in interpretable machine learning: faithfulness, understandability, and the efficiency of interpretation.
