How to master prompt engineering & the future of generative AI
Join Dr Keith Grimes for an engaging session on "Mastering Prompt Engineering & The Future of Generative AI." Prompt engineering is the process of crafting and refining detailed instructions for generative artificial intelligence (generative AI) software to achieve high-quality, relevant outputs.
This session will begin with a hands-on demonstration designed to teach or sharpen your prompt engineering skills. Together we will discover techniques for crafting precise and effective prompts that optimize AI performance.
Following this, we will briefly explore the future of generative AI: emerging trends, potential innovations, and ethical considerations shaping the next wave of AI advances. This session will equip you with practical skills and forward-thinking insights essential for navigating the evolving AI landscape.
Expert: Keith Grimes, Digital Health & Innovation Consultant, Curistica
Summary of the webinar:
This BMJ Future Health webinar, moderated by Kieran Walsh, explored prompt engineering and the future of generative AI with expert speaker Keith Grimes, Digital Health and Innovation Consultant at Curistica. The session focused on how to effectively use large language models (LLMs), practical applications of prompt engineering, safety considerations, and future trends in AI.
Key Themes and Takeaways
1. Understanding Generative AI and Large Language Models (LLMs)
- Generative AI uses deep learning and neural networks to generate text, images, and other content based on patterns in data.
- Large Language Models (LLMs), such as ChatGPT, Claude, Gemini, and Perplexity, work by predicting the next word in a sequence based on massive datasets.
Strengths of LLMs:
- Capable of summarization, translation, content creation, reasoning, and coding assistance.
- Can increase productivity and improve quality of work when used correctly.
Limitations and Risks:
- Hallucinations (creating false but convincing information).
- Bias (reflecting biases in training data).
- Lack of internet access (unless specifically enabled).
- Regulatory and privacy concerns (particularly in clinical use).
2. Mastering Prompt Engineering
Why Prompt Engineering Matters:
- The way you phrase a prompt significantly impacts the model’s response quality.
- Prompt engineering helps maximize accuracy, efficiency, and relevance.
Basic Prompting Approach:
- Think before you prompt – Clearly define your goal.
- Choose the right tool – Different AI models have different capabilities.
- Start simple – Begin with a straightforward request.
- Iterate and refine – Improve responses by adding structure, context, and examples.
3. Advanced Prompt Engineering Techniques
🔹 Structuring Prompts: The CREATE Framework
A structured approach ensures clarity and improves response accuracy:
- Context – Provide background information.
- Role – Define the AI’s role (e.g., "You are an expert complaints officer").
- Expectation – Specify instructions clearly (e.g., "Write a response that is professional and polite").
- Audience – Define the target audience (e.g., patients without medical training).
- Tone – Determine sentiment (e.g., empathetic, formal, casual).
- Examples – Provide format guidance or sample responses.
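As an illustration, the CREATE elements above can be assembled into a single prompt programmatically. This is a minimal Python sketch; the function name and the example values are illustrative, not taken from the webinar.

```python
def create_prompt(context, role, expectation, audience, tone, examples):
    """Assemble a prompt following the CREATE framework:
    Context, Role, Expectation, Audience, Tone, Examples."""
    return "\n".join([
        f"Context: {context}",
        f"Role: {role}",
        f"Expectation: {expectation}",
        f"Audience: {audience}",
        f"Tone: {tone}",
        f"Examples: {examples}",
    ])

# Hypothetical complaints-handling scenario for illustration only.
prompt = create_prompt(
    context="A patient has complained about a delayed referral.",
    role="You are an expert complaints officer.",
    expectation="Write a response that is professional and polite.",
    audience="Patients without medical training.",
    tone="Empathetic and formal.",
    examples="Begin with an acknowledgement, then explain next steps.",
)
print(prompt)
```

The resulting string would then be pasted (or sent) to the LLM of your choice; keeping the six elements explicit makes each one easy to review and revise.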
🔹 Enhancing Responses with Context
- LLMs don’t know what they don’t know—providing additional data improves accuracy.
- Ways to add context:
- Paste relevant text directly into the prompt.
- Upload documents (e.g., NICE guidelines, BMA policies).
- Reference previous conversations or structured data.
🔹 Step-by-Step Thinking ("Chain of Thought")
- Asking AI to “think through step by step” improves logical reasoning and response quality.
- Example prompt for structured complaints response:
- Read the notes.
- Compare them with guidelines.
- Assess the complaint.
- Generate a response.
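The four steps above can be turned into an explicit chain-of-thought instruction. A minimal sketch, assuming nothing beyond the step wording already listed:

```python
def chain_of_thought_prompt(task, steps):
    """Ask the model to reason step by step through an ordered list."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"{task}\nThink through this step by step:\n{numbered}"

# Steps taken verbatim from the structured complaints example above.
steps = [
    "Read the notes.",
    "Compare them with guidelines.",
    "Assess the complaint.",
    "Generate a response.",
]
print(chain_of_thought_prompt("Draft a complaints response.", steps))
```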
4. Safe and Responsible Use of AI in Healthcare
- AI is not a medical device – it should not be used for direct clinical decision-making without oversight.
- Avoid sharing patient-identifiable information in AI prompts.
- Legal considerations:
- Some medical defence organizations advise against using AI for official complaint responses.
- Follow local regulatory guidelines when integrating AI into workflows.
5. Reusing and Automating Prompts
🔹 Creating Custom AI Assistants
- Instead of manually repeating prompts, users can package and reuse structured prompts:
- Save structured prompts in a document for future use.
- Use AI’s built-in memory and customization features (e.g., ChatGPT's Custom GPTs).
- Build a “Custom GPT” to streamline tasks (e.g., a complaints-handling assistant).
- Leverage enterprise AI tools (e.g., Microsoft Copilot, Claude Projects).
6. Retrieval-Augmented Generation (RAG): The Next Level of AI
- RAG improves AI accuracy by integrating real-time, external data.
- Instead of relying only on pre-trained knowledge, RAG pulls relevant information from databases, PDFs, and reports.
- Use case: Instead of manually uploading guidelines for every prompt, RAG automatically retrieves the latest medical policies and integrates them into responses.
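At its simplest, RAG retrieves the most relevant stored passage for a query and prepends it to the prompt. This toy sketch uses word overlap in place of real embedding-based vector search, and the guideline snippets are invented for illustration:

```python
def retrieve(query, documents):
    """Pick the document sharing the most words with the query
    (a stand-in for embedding-based retrieval in real RAG systems)."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def rag_prompt(query, documents):
    """Prepend the retrieved passage so the model answers from it."""
    context = retrieve(query, documents)
    return f"Using this guideline:\n{context}\n\nAnswer: {query}"

docs = [
    "Hypertension guideline: offer lifestyle advice before medication.",
    "Diabetes guideline: review HbA1c every three months.",
]
print(rag_prompt("What does the hypertension guideline advise?", docs))
```

Production systems replace the overlap score with semantic search over an indexed document store, but the shape of the final prompt is the same: retrieved context first, question second.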
7. Future Trends in Generative AI
🔹 AI is Becoming Multimodal
- LLMs are evolving beyond text to process images, videos, and voice inputs.
Examples:
- AI can analyze medical images, ECGs, and X-rays alongside text descriptions.
- Voice interfaces enable natural speech interaction.
- AI-generated videos and digital avatars are becoming more realistic.
🔹 Open Source vs. Proprietary AI Models
- Open-source AI (e.g., LLaMA 3, Mistral, Falcon 2) allows for greater transparency and customization.
- Proprietary AI (e.g., GPT-4, Claude) often provides higher accuracy and security guarantees.
🔹 Smaller, Faster AI Models
- AI models are becoming more efficient, running directly on mobile devices (e.g., Apple’s upcoming iPhone AI features).
- Trade-off: Smaller models are less powerful but more energy-efficient.
🔹 AI Copilots in Everyday Workflows
- AI assistants are being embedded into professional tools like Microsoft 365, Google Workspace, and clinical record systems.
- Integration Example: Future EHR systems may have AI copilots to help with documentation, summarization, and decision support.
8. Practical Applications and Ethical Considerations
Where AI Can Help Now:
✅ Administrative tasks (e.g., complaints handling, scheduling).
✅ Education & training (e.g., interactive case studies, knowledge retrieval).
✅ Medical writing & summarization (e.g., patient letters, research).
Where AI Needs Caution:
⚠️ Clinical decision-making – AI is not a substitute for medical expertise.
⚠️ Patient interactions – AI responses must be reviewed by professionals.
⚠️ Bias & fairness – AI models can reinforce existing healthcare inequalities.
Final Takeaways
- AI is a powerful tool but must be used responsibly.
- Prompt engineering improves accuracy, efficiency, and usability.
- AI should complement—not replace—human expertise in healthcare.
- Future AI models will integrate multimodal inputs, memory, and real-time data.
This session emphasized that mastering AI starts with understanding its strengths, weaknesses, and best practices for responsible use. 🚀