"Innovations such as artificial intelligence [can] both (1) assist you in receiving magnificent blessings and (2) diminish and suffocate your moral agency. Please do not allow the supposed accuracy, speed, and ease of modern technologies to entice you to avoid or circumvent the righteous work that invites into your life the blessings you will need."— Elder David A. Bednar, BYU Devotional, January 2024
Understanding how to wield AI effectively isn't just a tech skill — it's becoming a clinical competency. This guide will transform how you learn, study, and prepare for your calling as a healer.
Why does AI give me generic answers?
Here's the truth most people never realize: AI is a mirror. Vague questions get vague answers. Specific, context-rich questions get expert-level responses.
Compare "Explain hypertension" with "I'm a first-year medical student reviewing for a cardiovascular exam. Explain the pathophysiology of primary hypertension at a level that builds on basic physiology, then quiz me on the key mechanisms." The second prompt gives AI the context to tailor its response to your level, your purpose, and your specific scenario.
💡 The Rule: Treat AI like a brilliant colleague who just walked into the room. They're incredibly capable, but they don't know your situation yet. The more context you provide, the more useful they become.
Which tools to use and when
Not all AI is created equal. Here's what you need to know:
Use these for general studying, concept explanations, and basic questions:
| Model | Access | Best For |
|---|---|---|
| ChatGPT | chat.openai.com | General questions, explanations, study help |
| Claude | claude.ai | Nuanced analysis, longer documents, careful reasoning |
| Gemini | gemini.google.com | Google integration, multimodal tasks |
| Grok | x.com/i/grok | Real-time information, current events |
For clinical questions, case analysis, and medical decision support — use these specialized tools:
| Tool | Access | Best For |
|---|---|---|
| OpenEvidence | openevidence.com | Evidence-based clinical questions, citations included |
| Amboss | amboss.com | Medical knowledge, Step prep, clinical reasoning |
| UpToDate | uptodate.com | Clinical decision support, treatment guidelines |
💡 When to use which: General inquiries and basic study questions → use ChatGPT, Claude, Gemini, or Grok. Medical cases and clinical questions → use OpenEvidence, Amboss, or UpToDate for more detailed and precise answers with proper citations.
🌐 Cloud Models — Via This Website: You have access to state-of-the-art AI models (GPT-4o, Claude, and more) directly through the AI Chat on this website. Use these models at no cost for your studies.
🎓 Set Up Your Own Access: Consider creating free accounts with tools like OpenEvidence for personalized medical tutoring. Having your own accounts lets you build a history of interactions, get tailored recommendations, and continue learning beyond this course.
The one rule you must follow
🚨 The Rule: NEVER put patient data (PHI) directly into any AI model. Period.
This applies to all AI tools — ChatGPT, Claude, Gemini, Grok, and any other consumer AI. These tools are NOT HIPAA compliant.
💡 Bottom line: When in doubt, don't include it. Use AI to learn concepts and explore ideas — never to process real patient information. This may change as HIPAA-compliant AI tools become available in clinical settings, but for now: no patient data in AI.
Techniques for more useful AI responses
The "context window" is how much information AI can hold in memory during a conversation. Think of it as working memory — the bigger it is, the more information AI can reference at once.
How to use this strategically: after a lecture, paste your notes or slides into a single conversation, then ask the AI to summarize the key concepts, generate practice questions, and quiz you on your weak spots.
You just created a personalized study tool that knows exactly what you learned today.
AI models have knowledge cutoffs and can "hallucinate" — confidently stating incorrect information. The solution? Give it the source material.
Now the AI is synthesizing real, current information — not guessing from training data that might be outdated.
Sources to feed AI: lecture slides and notes, assigned textbook chapters, published clinical guidelines, and recent review articles.
Using AI responsibly and effectively
AI is confident. That doesn't mean it's correct.
AI rarely says "I don't know." It delivers wrong answers with the same confidence as right ones, and that is one of its most dangerous characteristics.
Your verification toolkit: cross-check claims against trusted references (UpToDate, textbooks, primary literature), ask the AI for its sources and confirm they actually exist, and compare answers across more than one model.
The more consequential the information, the more important verification becomes. Studying for an exam? Verify key facts. Real patient care? Never rely on AI alone.
"Artificial intelligence cannot replace revelation or generate truth from God. We have the responsibility to ensure the Holy Ghost can attest to the truth and authenticity of all we say and share."— Elder Gerrit W. Gong, BYU Education Week, August 2025
| Myth | Reality |
|---|---|
| "AI will give me the right answer" | AI gives you an answer. Correctness is your job to verify. |
| "Longer prompts are annoying" | AI loves context. More detail = dramatically better responses. |
| "AI is cheating" | AI is a tool, like a calculator. Using it wisely is a skill worth developing. |
| "I can just copy AI responses" | AI is a drafting partner. Your judgment must refine the output. |
| "All AI chatbots are basically the same" | Models vary wildly in capability. Choosing the right tool matters. |
| "AI knows the latest research" | AI has knowledge cutoffs. For recent info, provide sources yourself. |
Knowing when to step away is as important as knowing how to use it.
AI is powerful, but there are times when reaching for it is the wrong choice:
This should be obvious, but using AI during exams is academic dishonesty. Beyond the ethical violation, it also means you won't actually learn — and that catches up with you on the wards.
Never paste patient information into AI tools. Most are not HIPAA compliant. De-identify completely or create fictional scenarios based on the clinical pattern: for example, instead of a patient's name, visit date, and hospital, describe "a man in his late 60s presenting with chest pain and these lab values."
If you immediately reach for AI without attempting to reason through a problem yourself, you're robbing yourself of the productive struggle that builds clinical thinking. Try first, then use AI to check, expand, or clarify.
Personal statements, reflective essays, and certain assignments are meant to capture YOUR thinking and growth. Using AI undermines their purpose and your development.
Real patient care decisions, sensitive conversations, ethical dilemmas — these require human wisdom, empathy, and accountability that AI cannot provide.
The goal is to become a capable physician who can use AI as a tool — not a dependent user who can't function without it.
"AI cannot replace the gift of divine inspiration or the individual work required to receive it. Interactions with AI cannot substitute for meaningful relationships with God and others."— General Handbook, December 2025
Checklists and guidelines for responsible AI use
AAMC principles summary: Use AI to augment learning (not replace your judgment), protect privacy and confidentiality, evaluate outputs for bias and inaccuracy, and disclose when AI contributed to your work. Read the AAMC Principles for AI Use in Medical Education →
When required by your assignment, cite AI assistance clearly. Example: "ChatGPT (OpenAI) was used to outline this essay and suggest revisions; all content was verified, rewritten, and approved by the author."
The best physicians of tomorrow will be those who know how to learn, synthesize, and leverage every tool available to serve their patients. AI is an incredibly powerful tool. Use it wisely. The goal is to amplify your thinking, not outsource it.
Elder Gong has taught that these technologies are "part of the Lord hastening His work in the latter days." You have the opportunity to use AI to serve more patients, catch diagnoses faster, and extend care to those who need it most.
The Brigham Young University School of Medicine is founded on the teachings and example of Jesus Christ, the Master Healer, and guided by the doctrine and leaders of The Church of Jesus Christ of Latter-day Saints.
We provide a spiritually grounded and scientifically rigorous education for physicians who will minister to God's children throughout the world as healers, teachers, researchers, leaders, and disciples of Jesus Christ.
We approach the practice of medicine as an opportunity to prevent and alleviate human suffering and to care for all people as fellow children of a loving God.