How to use Generative AI and Prompt Engineering for Clinicians
Generative AI has been making waves ever since it burst into public consciousness in November 2022. The seemingly endless capabilities of tools such as ChatGPT, Copilot, and Google’s Gemini heralded predictions of a transformed world, where work and leisure would be unrecognisable.
A year and a half later, things look largely familiar, but some early adopters are finding ways in which AI can help them in their personal and professional lives. As AI enters the healthcare domain, we need to ask: Is it welcome? Is it safe? Will it make our lives easier or harder?
Join Dr Keith Grimes, GP and Digital Health Consultant, as he explains everything you wanted to know about Large Language Models but were afraid to ask. From how they are built and trained, through their strengths and weaknesses, to how they are being used in healthcare, you’ll learn how to use generative AI safely and write prompts that help you get the most out of your AI assistant.
Expert: Keith Grimes, Digital Health & Innovation Consultant, Curistica
Summary of the webinar:
This BMJ Future Health webinar focused on the role of generative AI and prompt engineering in healthcare, particularly for clinicians. It explored how AI models like large language models (LLMs) are built, their strengths, weaknesses, risks, and practical applications in healthcare.
Key Topics Covered:
1. Introduction to Generative AI & Large Language Models (LLMs)
- AI has evolved from rule-based systems to deep learning and neural networks.
- LLMs predict words based on statistical patterns, making them useful for text generation, summarization, and reasoning-like tasks (a toy sketch of next-word prediction follows this list).
- The transformer architecture (from the 2017 paper "Attention is All You Need") significantly improved AI’s language processing capabilities.
- OpenAI’s GPT models have scaled exponentially, with GPT-4 reportedly trained on around 13 trillion tokens of data.
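
To make the "predicting words from statistical patterns" point concrete, here is a minimal toy sketch. It is not a real LLM: the conditional probability table is written by hand purely for illustration, whereas a real model learns billions of parameters from text. The sampling step, however, mirrors what an LLM does at each token.

```python
import random

# Toy conditional probabilities: P(next word | previous word).
# A real LLM estimates these from enormous text corpora; this hand-written
# table exists only to illustrate the idea of next-word prediction.
next_word_probs = {
    "chest": {"pain": 0.7, "infection": 0.2, "x-ray": 0.1},
    "pain":  {"radiating": 0.4, "on": 0.3, "relieved": 0.3},
    "on":    {"exertion": 0.6, "inspiration": 0.4},
}

def generate(start: str, max_words: int = 5) -> str:
    """Repeatedly sample the next word from the probability table."""
    words = [start]
    while len(words) < max_words and words[-1] in next_word_probs:
        options = next_word_probs[words[-1]]
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("chest"))  # e.g. "chest pain on exertion"
```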
2. Strengths & Capabilities of Generative AI in Healthcare
- AI can expand, summarize, translate, and reason across vast medical datasets.
- Potential applications include:
- Clinical documentation (automating notes, discharge summaries).
- Decision support (providing medical guidance and summarizing evidence).
- Medical education (acting as a tutor for clinicians).
- Administrative tasks (reducing paperwork for healthcare professionals).
3. Risks & Challenges in Healthcare AI
- Hallucinations: AI can generate factually incorrect or misleading information that appears plausible.
- Bias & Fairness: Models mirror biases in their training data, potentially leading to misdiagnosis or inequitable treatment.
- Data Privacy & Security Risks: AI may leak sensitive patient data or infringe copyright laws when trained on unverified sources.
- Regulatory Issues: No LLM-based medical device has been approved yet due to transparency and safety concerns.
- Environmental Impact: Training AI consumes significant energy, water, and computational resources.
4. The Role of AI in Clinical Practice
- While AI is not yet approved for direct patient care, it can be used for administrative tasks and clinical decision support.
- Example Use Cases:
- Ambient Documentation: AI transcribes consultations into medical notes (e.g., Tortoise, Microsoft Copilot); a sketch of this kind of prompt follows this list.
- Clinical Decision Support: AI assists with triaging patients, generating insights, and improving workflow efficiency.
- Medical Exams & AI: AI models are now outperforming humans on multiple-choice and long-form medical exams in some studies.
5. Prompt Engineering for Clinicians
- What is Prompt Engineering?
- Crafting effective instructions to get the best responses from AI.
- Best Practices:
- Structure your prompts (define role, context, task, output format).
- Provide context & examples (e.g., medical guidelines, SNOMED CT codes).
- Use step-by-step reasoning (e.g., “Think this through step by step”).
- Refine prompts iteratively for better accuracy (a worked example follows this list).
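
Putting these best practices together, here is a hedged sketch of a structured prompt sent through the OpenAI Python SDK (v1-style client). The model name, clinical scenario, and wording are illustrative assumptions only; real clinical use would need information governance approval and human review of every output.

```python
from openai import OpenAI  # assumes the openai v1.x Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Role, context, task, and output format are spelled out explicitly,
# and the prompt asks the model to reason step by step before answering.
system_prompt = (
    "You are an evidence-focused assistant supporting a UK GP. "
    "You do not give definitive diagnoses; you summarise considerations "
    "for a qualified clinician to review."
)
user_prompt = (
    "Context: 58-year-old with type 2 diabetes and new intermittent claudication.\n"
    "Task: list the key differential diagnoses and suggested initial investigations.\n"
    "Think this through step by step, then give your answer.\n"
    "Output format: two short bulleted lists titled 'Differentials' "
    "and 'Initial investigations'."
)

response = client.chat.completions.create(
    model="gpt-4o",      # illustrative model name
    temperature=0.2,     # lower temperature for more consistent output
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

Iterative refinement then means adjusting the role, context, or output-format lines and re-running, rather than accepting the first answer.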
Q&A Highlights
- Best AI model for UK healthcare? No regulated medical LLM exists, but advanced general models (GPT-4, Gemini, Claude) are best for research & support.
- Medical-specific AI models? Google’s Med-Gemini is being tested but not widely available.
- Can AI perform surgeries? Not autonomously, but AI-assisted robotic surgery is progressing.
- AI’s role in medicine? AI enhances efficiency and decision-making but should be used with human oversight.
Conclusion & Next Steps
- AI is a powerful tool but must be used with caution, oversight, and proper validation in clinical settings.
- Upcoming BMJ Future Health Event: November 19-20, London.
- BMJ Future Health LinkedIn Group for continued discussions.
- Keith Grimes shared additional resources and invited participants to connect for further learning.
This webinar provided a comprehensive overview of generative AI in healthcare, covering its opportunities, risks, and practical applications for clinicians.