The 2023 submission process is now closed.
Regulators, researchers, and clinical practitioners have all been exploring how best to evaluate the different phases of the health AI deployment lifecycle. The American Medical Informatics Association (AMIA), the world's largest professional society for medical informaticists, has taken an active role in helping the community identify oversight mechanisms that ensure the safe, effective use of artificial intelligence (AI) applications in healthcare.
The AMIA AI Evaluation Showcase Series aims to select and present current use cases in the field and to build a community that can reach consensus on best practices. Together, the community and the curated best-practice examples are expected to strengthen AMIA's thought leadership in understanding and setting industry standards.
In Stage III, we invite submissions that summarize the comprehensive evaluation of a health AI implementation – from Stage I results on technical model performance (including bias detection and mitigation), to Stage II results from workflow and usability studies, to new Stage III research results that measure the impact of the AI implementation.
Authors should report on the impacts of the AI implementation based on the measures of their choice, e.g., clinical outcomes, patient-reported outcome measures (PROMs), patient-reported experience measures (PREMs), clinician experience and adoption measures, cost of care, healthcare quality, or other measures from health economic studies.
Those who did not submit for Stage I and Stage II but still wish to participate in Stage III should follow the Stage I and Stage II submission guidelines and include both the evaluation plan and the results for all three stages. The best work will be considered for publication in JAMIA Open.
If your organization has AI/ML initiatives or departments, please share this call for participation (CFP) with your colleagues involved in them. This year, we also extend our invitation to industry partners who have been working in this area to join the ecosystem. We would like to start conversations in the community and collect best-practice examples. Your help is highly appreciated.
New Entry Point
This year, we have added a new entry point at Stage III for teams that did not previously submit to the AI Showcase. Some teams may have only a plan for Stage III, since such a plan is likely to lead to a multi-year effort.
This year, we ask reviewers to weigh the merits across all three stages when considering new submissions for presentation. We have also added an evaluation question, "Evaluation Methods and Results of Stages I and II, if applicable," for new submissions, to assess the Stage I and II contributions. Stage I and II performance can be reported either by citing previously published work or by including an additional description in the submission.
New submissions should cover at least items (1) and (2) in the list below and preferably also address items (3) and (4). Priority will be given to submissions with a high-quality evaluation plan for Stage III.
1. Evaluation framework across all three stages
2. Stage III plan (or results, if available)
3. Stage I performance: either a reference (if previously published elsewhere) or a separate description
4. Stage II performance: either a reference (if previously published elsewhere) or a separate description
| Stage I - Technical Performance Studies | Stage II - Usability and Workflow Studies | Stage III - Health Impact Studies |
| --- | --- | --- |
| Submission to include a system description, results from a study of algorithm performance, and an outline of the methods of the full evaluation plan. | Submission to address usability and workflow aspects of the AI system, including prototype usability or satisfaction, algorithm explainability, implementation lessons, and/or system use in context. | Submission to summarize the comprehensive evaluation, including research from prior submissions along with new research results that measure the impact of the tool; acceptance of a Stage II submission is not required. |
| Informatics Summit, March 13-16, 2023, Seattle, Washington | Clinical Informatics Conference, May 23-25, 2023, Chicago, Illinois | AMIA Annual Symposium, November 11-15, 2023, New Orleans, Louisiana |
| Deadline passed | Deadline passed | Deadline passed |
How to Participate
Select a Team and Team Leads
Scientific teams must commit to submitting work centered around the same solution to all three conferences and should designate a lead scientist for the team. Given the different nature of each conference and the types of expertise needed for each stage, authorship is expected to vary for each submission.
Determine Type of Submission
AMIA offers three ways to participate at the Stage I and Stage II deadlines: paper, podium abstract, and poster proposals. For Stage III, we envision a final submission summarizing the entire evaluation, in paper or podium abstract format.
We will attempt to give scientists who commit to and complete comprehensive evaluations the opportunity to present at the various meetings. For all meetings, scientifically rigorous manuscripts and podium abstracts not incorporated into a scientific session will be offered a poster presentation opportunity.
We are in discussions with an informatics journal about publishing the selected papers in a special issue. Selected work will receive a publication opportunity, subject to the peer-review process.
Who Can Submit to the AI Showcase
Domestic and international contributions to the showcase are welcome from AMIA members, non-members, and students from all informatics sectors. A separate Scientific Program Committee (SPC) will judge proposals and select presentations for a series of sessions and posters at the Informatics Summit, Clinical Informatics Conference, and Annual Symposium.