Best Practices & Safety Guidelines

Using AI responsibly in clinical settings requires following specific guidelines to protect patients and maintain professional standards. These aren't suggestions; they're essential requirements.

🔒 Never Put PHI in Public AI Systems

This is absolutely critical: never enter protected health information (PHI) into public AI systems. When you create materials for patients, you must keep all identifying information out of your prompts.

❌ NEVER Do This

"Create exercises for Maria Rodriguez who had a stroke last month..."
Contains the patient's name and identifiable medical details

✅ Always Do This

"Create exercises for a 45-year-old patient with moderate Broca's aphasia..."
De-identified, professional description

More Safe Examples:

  • Instead of "John from accounting" → "Office worker returning to work"
  • Instead of "Room 314 patient" → "Hospital patient"
  • Instead of "Mrs. Smith's daughter" → "Family member"

You Remain the Clinical Expert

Most importantly, AI is a tool, not a replacement for your clinical judgment. Every piece of content that AI generates needs to be reviewed through your professional lens.

🔍 What to Watch For

  • Awkward or unnatural phrasing
  • Inappropriate difficulty level for patient's stage
  • Vocabulary too advanced or too simple
  • Content that doesn't align with therapy goals
  • Cultural insensitivity or stereotypes

🧠 Your Expertise Determines

  • ✅ Whether sentence complexity matches patient ability
  • ✅ If vocabulary choices are cognitively appropriate
  • ✅ Whether exercises align with therapeutic goals
  • ✅ How to modify content for individual needs
  • ✅ When to progress to more challenging material

💡 Recommended Workflow

1. Generate Options: AI creates 5-10 different exercise variations in minutes.

2. Apply Clinical Judgment: Select the 2-3 options that best match the patient's needs.

3. Modify & Refine: Adjust content based on your professional expertise.

4. Plan Progression: Adapt the remaining options for future sessions.

Understanding AI Limitations

🌀 What is "Hallucination" in AI?

In artificial intelligence, a hallucination happens when the system generates information that looks correct but is not actually true or based on real evidence. The AI does not "know" it is making a mistake. It can sound confident and give details that feel real, even though they are invented.

Examples of AI Hallucinations:

  • Creating fake research citations that sound real
  • Inventing specific statistics or percentages
  • Making up names of therapy techniques
  • Presenting inaccurate medical information as fact

How to Prevent Issues:

  • ✅ Always fact-check specific claims
  • ✅ Verify any research or statistics mentioned
  • ✅ Trust your clinical knowledge over AI assertions
  • ✅ Use AI for content structure, not medical facts

⚖️ Watching for Bias

You should also be aware that AI can reflect biases present in its training data. I've seen AI generate examples that inadvertently reinforced gender stereotypes, such as always showing men in technical roles and women in caregiving roles. Always review content for cultural sensitivity and inclusive representation.

Common Bias Patterns to Watch:

  • Gender role stereotypes in examples
  • Cultural assumptions about family structures
  • Age-related assumptions about technology use
  • Socioeconomic assumptions about interests

Being Transparent with Patients

💬 Simple Honesty Works Best

If asked, be transparent with patients about using AI in therapy planning. You don't need to make this complicated: simply let patients know that you use technology tools to help create personalized materials for their therapy.

Sample Patient Explanations:

Simple Version:

"I use computer tools to help me create materials that match your interests and goals."

Detailed Version:

"I use AI technology to help generate practice exercises that are personalized for you. I review everything to make sure it's appropriate for your therapy goals."

Family-Focused Version:

"These reading materials were created using AI to match [patient's] interests in gardening and teaching. I selected and reviewed everything to ensure it's at the right level."

Maintaining Professional Standards

📋 Documentation

  • Note when AI-assisted materials are used
  • Document modifications made to AI output
  • Track patient responses to materials
  • Maintain records of effective prompts

🤝 Collaboration

  • Share successful prompts with colleagues
  • Discuss challenges and solutions
  • Contribute to professional AI discussions
  • Stay updated on best practices

📚 Continuing Education

  • Stay informed about AI developments
  • Attend relevant professional development
  • Understand your institution's AI policies
  • Follow ASHA guidelines as they develop

🎯 Quick Reference Checklist

Before Using AI:

  • ✅ Remove all patient identifiers from prompts
  • ✅ Use general descriptions only
  • ✅ Check institutional policies
  • ✅ Ensure a secure internet connection

While Using AI:

  • ✅ Generate multiple options
  • ✅ Review content for appropriateness
  • ✅ Modify based on clinical judgment
  • ✅ Check for bias or stereotypes

After Using AI:

  • ✅ Document what materials were used
  • ✅ Note patient responses
  • ✅ Save effective prompts for future use
  • ✅ Share insights with colleagues

Ready to Get Started Safely?

Now that you understand the safety guidelines, you're ready to begin using AI as your research partner.