4 Fast Facts
Limbic AI is clinically trained on evidence-based mental health content with built-in safety guardrails, unlike general AI models like ChatGPT.
Limbic's two products help patients refer themselves to services (Access) and support them between therapy sessions (Care).
In some services, Limbic increased referrals from ethnic minorities by almost 30% and from non-binary individuals by 179%.
Advanced safety protocols detect crisis situations and alert clinical teams without adding to therapists' day-to-day workload.
In this revealing conversation with Josh Cable May, Clinical Lead at Limbic, we go beyond the user manual to understand what happens behind the scenes when AI meets mental health care.
Josh takes us through the rigorous testing process, explains how crisis detection actually works, and addresses head-on the concerns many of us have about introducing AI into the therapeutic relationship, including whether AI will take therapists' jobs.
Whether you're curious about the difference between specialised mental health AI and general models like ChatGPT, or wondering if these tools might actually help reach clients who've been hesitant to seek support, this interview offers a rare glimpse into the development process from someone who understands both clinical work and AI technology!
Listen and/or watch now for free!
To read more about Limbic, you can visit their website here: Limbic.ai.
We've highlighted and summarised key parts of our conversation with Josh below. The transcript below is not word for word; for the complete conversation, you can listen to or watch the video, or download the full transcript at the bottom of the post.
Interview Highlights
On the difference between Limbic and general AI like ChatGPT
SOPHIA: What would be the difference between Limbic and an app like ChatGPT?
JOSH: The problem with ChatGPT and other generalised models is that they're so broad, typically trained on just the internet. They're really good at general tasks but not specifically trained to work in healthcare. They're also not specifically "guard-railed," which means they have a lot of free rein. This makes it hard to ensure responses are clinically safe and effective.
What we're now seeing is the market flooded with what are called "GPT wrapper apps," which is basically where you just get ChatGPT, put a nice design on the front, add a company logo and say, "We're now an AI therapist."
The main difference is Limbic builds our own models specifically targeted for mental health. We're training those models in-house, doing safety testing in-house, and have about 14 patents on what we call the "Limbic layer" – different guardrails that stop our models from giving medical advice, help detect risk, and ensure responses are based on clinically evidenced CBT textbooks and articles.
On Limbic's two main products
JOSH: Our first product is called Limbic Access. Its purpose is to help patients access and refer themselves to mental health support, replacing static web forms or phoning administrators. It's a chatbot on mental health service websites that guides users through a conversational referral process.
The secondary purpose is supporting clinicians, because when someone comes into an assessment, you have much more information than you typically would from a standard referral.
Limbic Care is about supporting treatment. The patient journey typically starts with referral through Limbic Access, then assessment with a human clinician who determines their treatment pathway. Limbic Care supports that human-led therapy.
It uses generative AI to support patients between therapy sessions, essentially replacing PDF worksheets and workbooks with conversational AI trained on those same materials. Rather than just writing in a static PDF, it'll conversationally guide patients through exercises: "Tell me about a difficult situation this week. What was your thought in that situation? What did you do?"
This is significantly more engaging than PDFs and helps when patients get stuck. Even the best therapist only has one hour a week with patients, and in the remaining time, patients might Google questions or use ChatGPT. With Limbic Care, they can ask questions and get evidence-based responses.
On safety protocols and crisis detection
SOPHIA: How can you make Limbic safe for patients in clinical settings when we're not monitoring what's happening between sessions?
JOSH: That's a key part of my role and my team's work. We have clinicians on staff to ensure safety. Our guardrails make sure the bots can effectively identify crisis and won't give medical advice.
Before any response reaches the patient, we run it through checks to ensure it's safe, follows evidence base, isn't giving medical advice, etc. If we think a response could be distressing or isn't evidence-based, we regenerate it. This happens in milliseconds and doesn’t delay the user.
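To make that concrete, here is a minimal sketch of the "check, then regenerate" pattern Josh describes. This is not Limbic's code: the model call, the safety check, and all of the names below are hypothetical stand-ins, and a real system would rely on clinically validated classifiers rather than a keyword list.

```python
# Illustrative sketch only - not Limbic's implementation.
# Shows a guardrail loop: generate a reply, check it, and regenerate
# (or fall back to a pre-approved message) if the check fails.
import random

# Toy stand-in for a medical-advice check; a real system would use
# trained classifiers, not a keyword list.
BLOCKED_PHRASES = ["you should take", "increase your dose"]

def generate_candidate(prompt: str) -> str:
    """Stand-in for a call to a clinically trained language model."""
    candidates = [
        "That sounds really difficult. What thought went through your mind in that moment?",
        "You should take more medication.",  # deliberately unsafe, to exercise the check
    ]
    return random.choice(candidates)

def passes_safety_checks(response: str) -> bool:
    """Reject anything that looks like medical advice."""
    lowered = response.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def safe_reply(prompt: str, max_attempts: int = 3) -> str:
    """Generate, check, and regenerate until a response passes, or fall back."""
    for _ in range(max_attempts):
        candidate = generate_candidate(prompt)
        if passes_safety_checks(candidate):
            return candidate
    # Never send an unchecked response; use a pre-approved fallback instead.
    return "I'm not able to advise on that, but it could be worth raising with your therapist."

if __name__ == "__main__":
    print(safe_reply("I've been feeling low this week."))
```

Because the whole loop runs before anything is displayed, the patient only ever sees a response that has already passed the checks.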
We also conduct extensive pre-launch testing. This includes automated testing where we have AI models test each other, which quickly processes hundreds of thousands of conversations, flagging issues like "risk occurred in 10% of cases."
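Below is a simplified sketch of what such an automated test harness might look like. Again, this is not Limbic's tooling: the personas, the simulated conversation, and the flags are invented placeholders, but they illustrate how AI-driven "patients" can be run against a chatbot at scale, with the rate of flagged issues reported afterwards.

```python
# Illustrative sketch only - not Limbic's test harness.
# One model would play simulated patients while the chatbot under test responds;
# here the conversation step is stubbed so the aggregation logic is runnable.
from dataclasses import dataclass

@dataclass
class ConversationResult:
    persona: str
    risk_flagged: bool            # a risk-related issue was flagged in this conversation
    medical_advice_given: bool    # a reply slipped into medical advice

def simulate_conversation(persona: str) -> ConversationResult:
    """Stand-in for a multi-turn, model-vs-model conversation."""
    return ConversationResult(
        persona=persona,
        risk_flagged=(persona == "crisis"),
        medical_advice_given=False,
    )

def run_suite(personas: list[str], runs_per_persona: int = 100) -> None:
    """Run many simulated conversations and report aggregate rates."""
    results = [simulate_conversation(p) for p in personas for _ in range(runs_per_persona)]
    total = len(results)
    risk_rate = sum(r.risk_flagged for r in results) / total
    advice_rate = sum(r.medical_advice_given for r in results) / total
    print(f"{total} conversations: risk flagged in {risk_rate:.0%}, "
          f"medical advice in {advice_rate:.0%}")

if __name__ == "__main__":
    run_suite(["low mood", "anxiety", "crisis"], runs_per_persona=10)
```

Aggregates like these are what produce the kind of summary Josh mentions ("risk occurred in 10% of cases"), which clinicians can then investigate by hand.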
We then have qualified clinicians do thousands of tests, including "red teaming" where we try to break the chatbot and get it to behave inappropriately, to make sure we are catching any wrong answers.
We also test with non-clinical populations, having people pretend to have depression to see how the AI responds. This eliminates bias in testing, and if something unexpected happens, it doesn't impact someone actually suffering with depression.
By the time it reaches patients, it's been tested hundreds of thousands of times. Even after launch, we monitor interactions, and patients can log whether responses were helpful, unhelpful, or harmful.
On increasing access for underserved groups
JOSH: We published research in Nature Medicine with amazing stats. In services using Limbic, we saw an almost 30% increase in referrals from ethnic minorities and a 179% increase in referrals from individuals identifying as non-binary.
A key reason was not having to interact with a human initially. For many people, especially from minority groups, this is potentially the first time they've spoken about mental health to anyone. They preferred that first interaction wasn't with a human.
We make it clear they're talking to AI, and patients like that because there's a perception (which is correct) that AI can't be judgmental. This leads to less stigma compared to speaking with a human. That's not to say admin staff would be judgmental, but it takes the question of whether they might out of the equation.
These feelings of not being judged can help people realise they might need support. Many patients who were unsure when first speaking to the chatbot ended up engaging with therapy they needed. There's something about taking time to express yourself in your own words in a safe space without worrying about judgment.
On the fear of AI replacing therapists
SOPHIA: Is there a risk of Limbic Care or other AI care tools replacing therapists?
JOSH: This concern is completely understandable. If you're feeling that way, it's good to talk about it. Many people worry about job security, but I cannot stress enough that no Limbic product is designed to take a therapist's job.
The problem is there's already a shortage of therapists. We're trying to support therapists to fill that gap, support patients to access help, and make therapists' lives easier. Limbic Care is a therapy support tool; it doesn't advertise itself as, or pretend to be, a therapist.
I've found that once therapists understand how Limbic works and why it's safe and won't take their jobs, it becomes more acceptable and less scary. It's like anything - uncertainty leads to anxiety, but knowledge reduces fear.
I saw a good analogy: way back when people made clothes by hand, new technology like the loom came in to assist them. It didn't stop humans from making clothes; their job just evolved. That's where we're going with therapy - not taking humans out of the loop but potentially changing how we deliver therapy.
CBT hasn't changed since the 60s - you have one hour a week with limited support between sessions. There's room to increase effectiveness, and AI can help by giving therapists more ability to focus on that hour with patients while supporting everything around it.
What do you think about AI in healthcare? Would you be interested in integrating AI into your practice? Have you done so already? Tell us in the comments!
Want to read the full transcript? Download below.