The purpose of this panel is to inform researchers and practitioners about the differences between task-specific models and general-purpose models, specifically between Named Entity Recognition (NER) models and Large Language Models (LLMs). NER is a core task in natural language processing. Because language is complex and constantly evolving, recognizing named entities typically requires significant training effort on top of pre-trained models. LLMs, on the other hand, do not require such task-specific training. The panel will elucidate the differences between task-specific NER models and LLMs on entity recognition tasks and compare the two approaches across a variety of use cases. In addition, the panelists will offer tips and lessons learned from applying both task-specific NER models and LLMs to large text corpora.
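To make the contrast concrete, here is a minimal sketch of the two workflows. The specific tools are illustrative assumptions, not choices made by the panel: it uses spaCy's pre-trained `en_core_web_sm` pipeline for the task-specific side and the OpenAI Python SDK for the zero-shot LLM side.

```python
# Task-specific NER: load a pre-trained pipeline. Adapting it to new
# domains or entity types typically requires labeled data and fine-tuning.
import spacy

nlp = spacy.load("en_core_web_sm")  # pre-trained English NER model
doc = nlp("Hongfang Liu joined UTHealth Houston in 2023.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Hongfang Liu" PERSON

# General-purpose LLM: the same task posed as a zero-shot prompt,
# with no task-specific training step at all.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "List the named entities (with their types) in: "
                   "'Hongfang Liu joined UTHealth Houston in 2023.'",
    }],
)
print(response.choices[0].message.content)
```

The trade-off the panel explores follows from this sketch: the NER pipeline is fast and deterministic but bound to its training data, while the LLM adapts to new entity types through prompting alone, at a higher cost per document.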
Learning Objectives
- Identify key differences between Named Entity Recognition (NER) models and Large Language Models (LLMs) in training, adaptability, and performance.
- Analyze the effectiveness of NER models and LLMs across different use cases.
- Apply best practices for selecting, fine-tuning, and deploying NER models and LLMs.
Moderator
- Elise Berliner (Oracle Life Sciences)
Speakers
- Elise Berliner (Icahn School of Medicine at Mount Sinai)
- David Talby, PhD (John Snow Labs)
- Hongfang Liu, PhD (University of Texas Health Science Center at Houston)