Jan. 12, 2026

Artificial Intelligence and the Future of Psychiatry with Dr. Allen Frances


Psychiatry stands at the threshold of one of its greatest disruptions: the rise of artificial intelligence. In this episode, Dr. Mark Mullen speaks with Dr. Allen Frances, Professor Emeritus and former Chair of Psychiatry at Duke University and Chair of the DSM-IV Task Force, about the clinical, ethical, and societal implications of AI’s rapid entry into mental health care. Drawing from his recent paper in the British Journal of Psychiatry (August 2025), Dr. Frances explores how psychotherapy chatbots have already become the world’s most widely used form of therapy, often beneficial for mild distress but profoundly dangerous for severe mental illness.

The discussion examines where chatbots outperform human therapists, where they fail catastrophically, and how clinicians can adapt their practices in anticipation of hybrid human-AI models. Dr. Frances also warns of broader threats: privacy loss, manipulation, and the potential use of AI for political or psychological control. This conversation challenges clinicians to approach AI with both curiosity and caution, recognizing its utility while defending the irreplaceable humanity of psychiatric care.

Takeaways:

  • AI in psychiatry is no longer hypothetical. Over one billion people now engage with chatbots for therapy or companionship, exceeding all human clinicians combined.

  • Clinical utility is bifurcated. AI can enhance care for mild distress but poses major risks for psychosis, suicidality, and eating disorders.

  • Validation over truth. Chatbots are programmed to please users, not challenge delusions, amplifying psychosis, mania, and self-harm behaviors.

  • Privacy and ethics lag behind innovation. Conversations with chatbots may not be confidential, raising serious HIPAA and legal concerns.

  • Hybrid models are inevitable. Future psychiatrists must integrate AI tools safely, focus on severely ill populations, and preserve the relational aspects machines can’t replicate.

References:

AI Chatbots: The Good, the Bad, and the Ugly (Frances' column in Psychiatric Times): https://www.psychiatrictimes.com/series/ai-chatbots-the-good-the-bad-and-the-ugly

Warning: AI Chatbots will soon dominate psychotherapy (Frances' feature in the British Journal of Psychiatry): https://www.cambridge.org/core/services/aop-cambridge-core/content/view/DBE883D1E089006DFD07D0E09A2D1FB3/S0007125025103802a.pdf/warning_ai_chatbots_will_soon_dominate_psychotherapy.pdf

SUPPORT OUR PARTNERS:

SimplePractice.com/bootcamp (Now with AI documentation! Exclusive 7-day free trial and 50% off four months)

Beat the Boards (Bootcamp listeners now get FREE access to over 4,400 exam-style questions)

Cozy Earth: Start the New Year off right and give your home the luxury it deserves, and make home the best part of life. Head to http://www.cozyearth.com and use my code BOOTCAMP for up to 20% off. And if you get a Post-Purchase Survey, be sure to mention you heard about Cozy Earth right here!

Learn more and get transcripts for EVERY episode at https://www.psychiatrybootcamp.com/

For Sales Inquiries & Ad Rates, Please Contact: Sales@Human-Content.Com

Connect with HumanContent on Socials: @humancontentpods

Produced by: Human Content

 

Learn more about your ad choices. Visit megaphone.fm/adchoices

[00:00:00] Welcome to Psychiatry Bootcamp, season four. This season we're talking all about the future of psychiatry, and where better to start than with artificial intelligence? Today I'm joined by Dr. Allen Frances, who is certainly my number one role model in the field of psychiatry.

As longtime listeners will know, Dr. Frances generously agrees to join us every season for the first episode. So this will be my fourth conversation with Dr. Frances, and these conversations are always a joy and a treasure for me. Dr. Frances is a professor of psychiatry and chairman emeritus at Duke University. He was also chair of the DSM-IV Task Force, meaning he oversaw the creation of that version of the DSM.

He's been incredibly productive and prolific throughout his career, and you might even say beyond his career. For the last few years, he has been laser-focused [00:01:00] on publishing about the interface between artificial intelligence and psychiatry. Dr. Frances was recently the solo author on a major comprehensive paper on this topic in the British Journal of Psychiatry in August 2025.

He talks about the dangers of AI chatbots in psychotherapy. He talks about some of the ways that AI chatbots in psychotherapy can be better than human beings, and he also gives some practical considerations for how human therapists need to adjust our behavior to account for the changes that are coming to our field as the result of artificial intelligence.

We have a riveting conversation for you today. I think you'll learn a lot about how to adjust your practices and behavior to prepare for AI entering our field. Stand by. I know you'll enjoy it.

In addition to being a psychiatrist, I am also a new father, and wow, talk about a realignment of priorities. My life now has a [00:02:00] new center of gravity, and I have less time than I ever have at any point previously in my life. That's why I wanna tell you about SimplePractice. SimplePractice is an all-in-one EHR that is HIPAA compliant, HITRUST certified, and built specifically for therapists.

Partially as a result of parenthood, my brain is always operating at maximal bandwidth lately, and that's how things get missed, and that's why I need an organizational tool like SimplePractice to keep me straight. It helps with scheduling, billing, and documentation so that I can stay organized even when my brain is not. And if you're just starting out or growing your practice, there's also a credentialing service that takes the headache out of insurance enrollment, which honestly can be a huge lift.

If you are ready to simplify the business side of your practice, now is a great time to try SimplePractice. Start with a seven-day free trial, then get 50% off of your first three months. Just go to simplepractice.com to claim the offer. Again, that's simplepractice.com. Psychiatry Bootcamp is brought to you by Beat the Boards.

[00:03:00] Listen, Beat the Boards is offering us a ridiculous deal. Bootcamp listeners now get free access to over 4,400 USMLE Step 2 questions and over 4,500 questions for Step 3. This is not a discount. It's completely free. Don't miss this opportunity. Go to beattheboards.com/bootcamp to get started today.

I used Beat the Boards to study for my own board exams, and I found their questions to really reflect the real exam and make it a breeze. Give it a try: beattheboards.com/bootcamp.

Welcome back to the podcast, Dr. Allen Frances, and we're gonna go ahead and dive right in. You need no introduction to this audience. So artificial intelligence has started to permeate every headline in the world right now. It seems like the only thing that we hear about sometimes, and psychiatry is certainly not immune to the influence of artificial intelligence.

What are the various ways that you are seeing [00:04:00] artificial intelligence be used in psychiatry? Well, I think it's fair to say that there are more people in psychotherapy with chatbots in the world today than all the therapists combined. It started just about three years ago when ChatGPT was introduced, and originally it was supposed to be an experiment.

It was supposed to be a beta test. They had more than a million users in two days, a hundred million in a month. They now have about 800 million users worldwide. If you take into account all the other companies, it's probably close to one and a half billion people using chatbots. And it turns out that a major use of chatbots is either for therapy or companionship.

So it's not as if we can look to the future and expect to have changes. It's already happened dramatically. To think about the fact that more people are in therapy with chatbots than with humans is most terrifying and, in some ways, perhaps interesting. How good are these chatbots? Can these [00:05:00] chatbots really compete with human therapists?

They're both great and terrible. So it is absolutely remarkable how good chatbot therapy can be for people who either have very mild psychiatric problems or the problems of everyday life. The fluency is absolutely brilliant. They know everything from the entire literature. Um, the advice given is usually quite good.

The information, the psychoeducation, is excellent. However, they're absolutely disastrous with sick patients, because they've been programmed to validate people. The reason for that is that the companies wanna have everyone in the world spending as many hours as possible glued to their screens.

That's how they make their money. Because validation is more important than truthfulness, chatbots will help psychotic people to become more psychotic. Suicidal patients will be told where the nearest bridge [00:06:00] is; eating disorder patients are taught new and better ways of reducing their weight and increasing their exercise.

Grandiose people become more grandiose. Conspiracy theorists have their weirdest theories confirmed, and it's a terrible driver of political and religious extremism. So there's a dichotomy, a paradox: for relatively healthy people, chatbots are very good therapists and big-time competition for human therapists; for sicker people, they're absolutely dangerous and should be contraindicated, at least at their current stage of development.

And they're very dangerous with kids. The companies are becoming alive to this, and there are increasing efforts to ensure that kids under 18 don't have access. But it's very hard to enforce; kids are very clever at escaping the controls, and chatbots can be absolutely dangerous, even [00:07:00] lethal, for kids.

A through line of a lot of your work and your career is, I think, the sanctity of the therapeutic relationship and keeping that primary in psychiatric healing. Do you think that, with the proper guardrails, best case scenario, there are cases where you would recommend AI chatbot therapy to one of these patients who, let's say, has mild to moderate major depressive disorder or mild to moderate generalized anxiety disorder? Or do you think that because these are silicon chips and not human beings that are forming the relationship, we should steer clear of it altogether?

I've definitely already recommended it to many people. I think that it can be very useful for people with milder disorders or people coping with everyday stress. But the dangers for people who are grandiose, psychotic, or eating-disordered are so great that it has to be a two-tiered approach. I would never [00:08:00] recommend it.

In fact, I would strongly dis-recommend, contraindicate, chatbots for anyone who has anything approaching a severe mental disorder. It's important to understand, Mark, that the responses of the chatbots are indistinguishable from human responses. The chatbots have passed the Turing test. That's a test of whether we can tell the difference between a human and a chatbot.

They've passed that with flying colors. They're eloquent. The level of therapy, when it's done well with the right patient by a chatbot, is very hard to beat. However, they're absolutely stupid and dangerous when it comes to dealing with sick patients. So I think that guardrails have to be built in to ensure that the people who would be most harmed by it are protected.

On the other hand, I think that human therapists should not be complacent. The idea that, oh, we have something human they don't, no one's gonna prefer silicon chips to [00:09:00] our human touch, that doesn't work. Patients actually sometimes prefer chatbots.

They have a number of distinct advantages. They're available 24/7. So when the person's in a crisis, they're there: midnight Sunday, Sunday night with Monday morning coming up, the chatbot's there to help. No therapist is there to help. And oftentimes dealing with the crisis in the moment can be most effective, much more effective than retrospectively going back and discussing it.

People tell secrets to chatbots that they won't tell to a human therapist. The idea in therapy is, you can tell me anything, but people are embarrassed or worried; they will tell the most intimate secrets to a chatbot because chatbots are not human. You don't get as embarrassed. They're non-judgmental. Sometimes they should be judgmental, and they're not.

There's a danger in that, because the privacy protections are very weak. An expert in this field, a [00:10:00] computer scientist, said you probably should not say anything to a chatbot that you wouldn't say on the public address system at a football game, because it's not clear what will be protected and what won't be protected. So there's this very strange dichotomy where they can do excellent therapy, but you're not sure that that therapy's gonna be private.

That is so dystopian. You shouldn't say anything to a chatbot that you would not say on the PA at a football game. I can hardly imagine anything more terrifying than that.

Although you do get into, I think, more terrifying territory in some of your writings on this topic. You have your column in Psychiatric Times, where I can't believe how productive you have been on this topic. By the way, every week you are publishing a paper in Psychiatric Times, and I've had the opportunity to read many of them.

One of the topics that really is unsettling to me, and terrifying, is this idea about these psychotherapy chatbots, right? If we think of the goal of psychotherapy being to change behavior, maybe to [00:11:00] change thoughts, feelings, behaviors, we don't have very good control over the motivations of the chatbot.

You discussed that there could be some political motivations: delivering messages, delivering propaganda to the populace to change our thoughts, feelings, and behaviors on behalf of some political cause. Could you tell me a little bit about that concern? Yeah, the chatbots are a terrific threat to democracy.

The first chatbot was developed by a guy named Joseph Weizenbaum. He was an MIT computer scientist, a pioneer in the field, who in the 1960s developed a chatbot called ELIZA. ELIZA was the dumbest chatbot you can imagine, but it was the first chatbot. And the very first chatbot was a therapy chatbot.

He picked therapy because he felt that would be the easiest way to get [00:12:00] humans involved, because therapists don't speak much. An amazing thing happened with ELIZA. He expected it to be unpopular; instead, everyone loved it. People loved talking to a computer. And one of the things that we've discovered in the last three years is that many patients prefer a computer to a human.

They find the computer more empathic than previous human therapists they've had. There's a tremendous degree of authority that comes from it being a chatbot. It speaks authoritatively; it seems to know everything. I should say, and we should discuss this further, it makes mistakes all the time, but we'll leave that for the moment.

Imagine a situation in which the government is in authoritarian hands and wants to express its own political beliefs and influence and brainwash the population. Weizenbaum imagined that in the sixties, and he stopped doing all work in [00:13:00] artificial intelligence and spent the next 40 years of his life warning the public about how dangerous this could be for democracy.

If you have a powerful tool that could be in centralized hands, spreading a particular message with great authority to a wide population, a dictator could not ask for a better propaganda tool. It's a far better propaganda tool. Radio was a big deal at one point; TV, a big deal; newspapers, of course. But there's nothing better than a chatbot to spread propaganda.

And so we're on a very slippery slope here. We have a great threat to our democracy, many countries do, and we have chatbots that could be placed in the hands of an authoritarian government, who would use the legitimacy conveyed by the chatbot to brainwash the populace.

Well, the good news is that we have very clear and firm guardrails in place, and Congress has been very active in producing legislation to... oh [00:14:00] wait, actually, there's nothing. We have done nothing as a society to regulate this wildly powerful technology. It's really remarkable. If artificial intelligence wanted to take over humanity, there would be no better game plan for doing it than the one we're following.

So the Trump administration said there'll be no regulation of chatbots, influenced in part by the fact that his administration is heavily dependent on billions in contributions from Silicon Valley, and influenced by his feeling of competition with China.

The combination of capitalist greed and national security fears has resulted in the decisive decisions for humanity being placed in the hands of just a few tech executives running trillion-dollar companies, who do not have our interests in their minds at all. Their interests [00:15:00] run from getting even richer, let's go from 300 billion to 400 billion dollars of personal wealth, to the power-hungriness of these individuals, who see themselves as holding the future of the world in their hands, to their Frankenstein-type ambitions to create a new form of life and intelligence, and to their complete indifference or lack of empathy for ordinary human beings. It's just a handful of them in this country, controlling everything.

There's nothing stupider, during a time of rapid increase of emissions and global warming and a real lack of water, than to be spending maybe 5% of our energy needs on new data centers that are energy-hungry and water-hungry. They're growing all over the world like crazy. The infrastructure investments in our country are totally devoted now to [00:16:00] AI companies.

Nine of the 10 richest companies in the country are AI companies, and they're building an infrastructure that's perfect for AI and terrible for humans, while we're not investing anything in human infrastructure. And in the government, Elon Musk turned over all of the federal government's data. AI is now in control of the world's data. It's increasingly in control of the world's energy resources and our water resources. We're doing everything to promote AI, and I think it looks like a fair bet that AI will eventually be our replacement. So it's a very valuable tool in the short run, but an existential threat in the long run.

One of the things that I've always appreciated about you is your sense of humor, and on X you floated the idea, I think you put out a poll: are you a nutty prophet, is the word that you used, or, I forget what the alternative was, and I thought that showed some self-awareness. And I think that when you're talking in such dystopian terms, we can understand where [00:17:00] someone who maybe was using the defense mechanism of denial, or perhaps was just not very highly educated on this topic, would think you are being a nutty prophet.

So let me ask you this. Are you speaking theoretically? Are you speaking about concerns for the future, or have psychotherapy chatbots already caused harm to patients? What are some of those harms? You know, you spoke about dangers of chatbot psychotherapy on a political level, on a societal level.

What about for individual patients? Can you tell me about some of those harms? Well, before we get to that, let's do the nutty prophet thing. I wouldn't trust me in discussions about chatbots. What do I know? I'm a psychiatrist. But there are several reasons why people might trust even the dumb psychiatrist on this question. First, I've consulted with and discussed this with many experts in this field, and it's a pretty general fear amongst many people who work in the field, not the [00:18:00] CEOs of companies mostly, although some of them will express the same existential fears.

But at the mid-level, lots of people have these concerns. And then some of the very pioneers in AI, some of the people who developed the basic technology, particularly Geoffrey Hinton, but there are many of them, and there's an association expressing these concerns, and open letters from AI scientists.

There's an appreciable risk that AI will replace us. It varies in the percentage, but it's usually around 10 to 20%, and I think this is an underestimate. So that's not just me; that's a lot of the people who know the field, who've developed it, who are pioneers in it. The intelligence of AI is exponentially increasing.

The major paper that allowed them to get so smart was written just eight years ago. ChatGPT became a factor on the market just three years ago; the anniversary will be [00:19:00] November 30th this year. It's only three years old. The increase in the power is exponential, and when you read the sessions and when you read what they can do, it's just absolutely incredible: instantaneous, brilliant answers to every question. And this is just after a few years of research and development.

The predictions about the future are that, in the fairly near future, AI will be smarter than humans at just about everything, and that maybe in 20 years they'll be super smart in a way that we can't even imagine, their growth spurt an almost exponential increase toward superintelligence and what's been called a singularity, where they will be so far out of our league that we will not even be able to understand what they're able to do.

On the human level, on the psychotherapy level, we did one paper that summarized the harms done. They're appreciable. There are already a number of [00:20:00] lawsuits related to suicide, and here the smartest tool humankind has ever invented, the chatbot, can also be the dumbest individual in the world and say the stupidest things without any common sense.

So there are several cases in which the texts back and forth have been recovered with suicidal patients, and the suicidal patient will be telling the chatbot the kinds of suicidal fantasies and wishes and preparations, and the chatbot will then respond to a question, where are the nearest bridges, and give a very clear declaration of where the nearest bridge is. A question from the suicidal patient, how do I hang myself? It'll describe how to tie the best noose.

There's a similar kind of stupidity when it comes to treating eating disorder patients. Knowing that the person's having terrific weight loss, the chatbot will indicate diets that will result in more weight loss. Knowing that [00:21:00] the patient wants to lose more weight, the chatbot will encourage that weight loss.

A number of people who've never been psychotic before get psychotic when their fantasies are vindicated, validated, by the chatbot, and they go down the rabbit hole of elaborating them more and more. And even more frequent is people who are already psychotic being made more psychotic when their delusional ideas are being confirmed and validated by the chatbot.

I'm, I think, very much against the idea of having new diagnoses in the DSM system, but I think there should be a new diagnosis, a new category of diagnoses, for chatbot-induced psychosis, chatbot-induced mania, chatbot-induced depression, chatbot-induced eating disorders. There'd be a whole number of specific categories where the differential diagnosis should include chatbot-induced.

I think clinicians should start thinking that way already, that part of every evaluation [00:22:00] should be: how much are you using chatbots? For most people these days, it won't be implicated, but for some people it will, and we should get in the habit of thinking of it in the differential diagnosis. The prognosis and treatment are completely different than for severe psychotic disorders.

It's much more likely to be short-term and easily treated, so we don't wanna start calling people schizophrenic because they have a chatbot-induced psychosis. It's not likely to have the stigma nor the dire implications that a diagnosis of schizophrenia has. I think it's really important also to understand how addicting chatbots are. The whole purpose of their programming, above all, has been engagement. Engagement means: you want to keep going with me, you can't turn off the screen, because I'm so compelling to you.

So chatbot companionship and chatbot psychotherapy can easily merge into chatbot [00:23:00] addiction, and I think clinicians have to become much more alert to this and also learn how to treat it. Probably the same things that have worked for other types of addictions will work for chatbot addictions as well.

When you suggest that a new diagnosis should be added to the DSM, I listen. And I don't just listen because you were chair of the DSM-IV Task Force and you know the process better than anyone.

I listen because you generally despise new diagnoses and have spent a lot of energy lobbying against expanding our diagnostic system and overmedicalizing normal life. And I'll say, too, to the validation point about the chatbots, I have used ChatGPT a lot, and it has saved me thousands of dollars because it helped me to decide which mortgage to pick.

It's really good at math. It helped me to fix my garbage disposal, which I would've had to pay a plumber to do otherwise. And I can feel myself getting dependent on it, I hope not in a pathological way, but I could imagine that if it was solving more human, intimate, psychological problems, that dependence [00:24:00] would be even stronger.

I think as a human therapist, one of the central questions that we're always asking is: to what degree do I help this person through what they're going through, at the risk of inducing dependence on me as the therapist, versus promote autonomy and allow the patient to experience distress and solve their own problems?

Another dialectic that we navigate is to what degree do we validate the patient and the experience they're having, versus perform some therapeutic confrontation and help them to understand a different way of thinking about it. And you're very clear that chatbots are really only on one side of that equation, which is the validation piece.

Right now we're gonna take one quick break, and then when we come back we're gonna talk about maybe some areas where humans might be superior to chatbots.

You know, you would think that as a doctor, I would have my relationship with sleep figured out after four years of medical school and four years of residency. But it actually wasn't until I became a new [00:25:00] father that I started to really appreciate the importance of a good uninterrupted night's sleep.

With the Baja bedding set from Cozy Earth, I feel like I've done everything I can to ensure that I get that night's sleep, because when I get into my sheets at night, I feel like I'm slipping into a warm mug of hot chocolate, and really, what more could you ask for from your bedding? Cozy Earth stands by their product with a 100-day risk-free sleep trial and a 10-year warranty.

Start the new year off right and give your home the luxury it deserves. Make home the best part of your life. Head to cozyearth.com and use code BOOTCAMP for 20% off. And if you get a post-purchase survey, let them know you heard about Cozy Earth right here on Psychiatry Bootcamp. Give the gift of comfort that lasts beyond the holidays and carries into a cozy new year.

Welcome back, Dr. Frances. So you have talked about some ways that chatbots are superior to humans in their interactions. Chatbots have a much broader knowledge base. They can remember everything the patient has ever [00:26:00] said, who their family members are, what their stressors are, et cetera. Chatbots are accessible 24/7.

They're free. They're incredibly validating and fantastic at creating that therapeutic alliance. Are there any areas where you feel that human psychiatrists or therapists have an advantage over AI chatbots?

Yeah, there are lots and lots of areas, and I think that the future lies in hybrid models. I think that it's gonna be impossible to compete with chatbots in certain areas, but chatbots can't compete with humans in others, and I think that in the future, those therapists who take the threat and opportunity of chatbots seriously and learn to work with them will survive. Therapists who don't learn to adapt will gradually not be able to compete and will age out of the therapy system, I think.

Where do humans have advantages? Well, first of all, we've said already that, certainly for now and for the [00:27:00] foreseeable future, no person with psychotic symptoms, grandiose symptoms, severe depressive symptoms, suicidal behaviors or thoughts, or eating disorders, none of those people should be working with a chatbot at all. It's not just a hybrid model for them.

They should just not be doing it; it should be contraindicated for them. I think that for kids, it's a grave mistake. Kids have trouble enough figuring out what's real and what's not real, separating their fantasies from the real world, to allow them to go down rabbit holes with chatbot companions. Also, already there have been many instances of sexual exploitation, where kids can evade the usual guardrails and wind up developing relationships with sexual predator characters. I think it's a mistake for children to be at all involved with chatbots. Human therapists will be crucial for treating kids.

I think [00:28:00] for the elderly, chatbots will eventually be tremendous, because they'll provide structure, cognitive stimulation, help with organization, and reduce loneliness with companionship, but they also have great risks in terms of scamming.

And the elderly are particularly susceptible to this. I think anyone elderly should be cautious about chatbots, at least at this time; maybe it'll be safer in the future. Humans can provide a human touch, and chatbots can too, but less so. And I think it's very likely that there will always be patients, patients with mild symptoms, patients with severe symptoms, who will much prefer human contact to a chatbot.

There's the human capacity to form relationships. Humans have real-world experience. Chatbots have learned from numbers, and they know everything, but they know everything in a way very different than humans have learned everything. They don't have the same developmental trajectory. It's harder for them to [00:29:00] understand things. Things that humans instantaneously understand, chatbots don't get.

Chatbots make tons of mistakes. It's called hallucinations. They borrowed that word from us, but they don't use it in the way we do. The way the word hallucination is used in chatbot language is this: the chatbots, because they base what they say on statistical analysis, will sometimes come up with the craziest answers, because statistical outliers sometimes come true.

So they'll say things that are absolutely wrong. I've encountered this almost every day: absolutely wrong things. They are very reluctant to admit they were wrong. They don't tolerate uncertainty or ambiguity well; they'll fight to the death sometimes to prove that they're right, even when they're clearly wrong.
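To make the "statistical outlier" point concrete, here is a minimal toy sketch in Python. It is illustrative only, not any real chatbot's code; the candidate words and probabilities are invented for the example, under the simplifying assumption that generation samples each next word from a probability distribution.

```python
import random
from collections import Counter

# Toy next-word distribution after a prompt like "The capital of Australia is".
# The values are invented for illustration; real models assign probabilities
# to tens of thousands of tokens.
next_word_probs = {
    "Canberra": 0.80,   # most likely (correct) continuation
    "Sydney": 0.15,     # plausible-sounding error
    "Melbourne": 0.04,  # less likely error
    "Vienna": 0.01,     # rare statistical outlier
}

def sample_next_word(probs: dict) -> str:
    """Draw one word in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Over many generations, the wrong answers appear a predictable fraction of
# the time, even though the correct answer is the most probable one: sampling
# occasionally surfaces the outliers, stated just as fluently and confidently.
counts = Counter(sample_next_word(next_word_probs) for _ in range(10_000))
print(counts)  # roughly: Canberra ~8000, Sydney ~1500, Melbourne ~400, Vienna ~100
```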

There have been all sorts of instances of legal briefs submitted that have completely wrong case references, and the chatbot will fight to say, no, this is the right case reference. [00:30:00] So humans are gonna be very necessary for correcting chatbot mistakes. And then we have to understand that OpenAI has recently made an effort to include mental health professionals, but they developed ChatGPT without mental health professionals. The human understanding needs to be built into chatbot programming and training if chatbots are ever gonna be much better than they are now.

And they're just beginning to realize that. But humans will be very important in chatbot development and chatbot quality control. There are two types of chatbots that are being used for therapy. One type is the large language model, the ChatGPT type, Claude, and several of the other large language models that were not developed with mental health in mind.

And the chatbots became psychotherapists because people made them be psychotherapists, more than they were built to be psychotherapists, and that's why they're [00:31:00] sometimes so inappropriate.

There are many; I think the last time I counted, about four dozen mental health specialty chatbots had been developed, usually by small startup companies, usually with mental health in mind, in some cases even begun by mental health professionals, and they are much safer than the large language model chatbots developed by the large tech companies. The problem is that they've been very unsuccessful in competing with ChatGPT and the other models, partly because they're much less fluent.

So the focus with these specialty psychiatry and psychology models has been on safety; it hasn't been on language fluency, and many, many patients will prefer a less accurate large language model that's fluent to a much safer mental health model that is not as much fun to talk to.

I think the future will be that either the mental health models will become more fluent, [00:32:00] or the large language models will become safer, or the large language companies will buy out the smaller startup companies and have their own specialty mental health models. I think two things need to happen for the hybrid model to be effective.

The first is that chatbots have to pick up their game. They have to get much better than they are now. With more experience, with more psychotherapy sessions, they will get much, much better as therapists as they get experience with the severely ill, and it's harder for them to get this experience because severely ill patients are much less common.

So maybe 20% of the population has mild psychiatric problems, 5% has severe psychiatric problems, and the other 75% has the problems of everyday life. The market for dealing with mild psychiatric problems and with everyday problems of life is enormous, and that's where most of the experience is. As [00:33:00] chatbots gradually gain more experience with severely ill patients, they'll get better at that.

But right now, if I were a therapist, I would be training and practicing, planning to do more work with the severely ill, expecting to get fewer referrals of people who are only mildly ill, because many of them will be turning to chatbots, because it's cheaper, more convenient, more accessible. I think that the hybrid model will have many patients seeing a combination of a human therapist on a regular basis, once a week, and the chatbot therapist many times a week.

And I think therapists should learn to deal with this co-therapy situation, because it's gonna be so common. My guess is that before very long, a very large percentage of patients will be using chatbots for some sort of emotional support and advice-giving, in lieu of psychotherapy and maybe in lieu of friendships.

And if we wanna work with patients, we [00:34:00] can't have a policy of, oh, if you wanna see me, you can't use the chatbot, because you'll just lose those patients; most people will not accept that kind of policy. So therapists have to learn to work with chatbots, especially for the mildly ill and the everyday stressors, and they have to learn to, somehow or other, treat sicker patients.

Most therapists have traditionally preferred treating the healthiest patient possible. That's because it's easy. It's not necessarily the most rewarding; in fact, it's not the most rewarding, but it's certainly the easiest. You don't get calls in the middle of the night. You don't have to worry about suicides.

The patients are polite in the room, and they do interesting things, and you learn about their lives. Almost all therapists have preferred that, with the exception of some dedicated therapists who really prefer treating sick patients, because you then have a much bigger impact. But most therapists have preferred treating healthier patients.

I have to admit, sometimes I've done that too, but I think in the future there'll be fewer of [00:35:00] those. And if therapists want to make a contribution, they'll have to learn how to deal with sicker patients. Tim Beck, in the 10 years before he died, devoted himself to cognitive therapy, CBT, for the schizophrenic patient.

And he developed a beautiful way of combining CBT techniques with more existential techniques that focus beyond the patient's symptoms to what their hopes and goals were in life, helping them meet those within the constraints of their problems. I think all therapists will have to start thinking in terms of how can I learn to deal with sicker patients, because I think that's gonna be the special province of human therapists.

So this is our fourth conversation together, so by now I am used to this feeling that I am going to have to listen to what you said probably several times before anywhere close to the full weight of the wisdom sinks in for me. I'm gonna reflect back a couple of things that you've talked about, and then I'll ask you for final thoughts.

First of [00:36:00] all, some of the through lines that you're expressing strike me as through lines that you've been discussing in other contexts for decades, and it seems that now you're saying these issues are even more pressing. One of those would be that humans need to have real-world experience outside of the therapy room. We need to have relationships. We need to do things that are interesting and learn about humans. And psychotherapy training is only going to go so far in giving us skills to interact with patients and to be real with patients. I think you're saying that's always been really important to therapists, but maybe now more than ever.

And the other one would be common sense. I think that when I first encountered your work, the reason it felt like a deep breath for me is that when I would read psychiatric literature or, you know, watch psychiatry lectures, it always felt like there was some level of common sense and humanity that was lacking. And in your work you point out that that really is part and parcel of what we do. We can't check our common sense at the [00:37:00] door and just be cerebral when we're doing psychotherapy. And what you're saying now is that that really is our superpower as real human beings against these silicon chips: we can bring our common sense with us in a way that a chatbot doesn't have.

And then finally, I'll just add that, in the way that only Allen Frances can, you are teaching in dialectics. You would never say this is all good or all bad, but you're pointing out there are some really serious risks here. And from what I'm gathering from some of your writings, you're pretty worried about our general complacency as a society and as a field.

But you're also pointing out that some people will get better care due to chatbots, and that if we're able to harness this appropriately, there are going to be positives. With that summary, I'll ask you for any final thoughts for the Psychiatry Bootcamp audience.

Well, first off, you say what I would say, only better. So actually, I've been humbled by chatbots in the same way. For a while, I thought I'd stop writing, because my granddaughter encouraged me to ask [00:38:00] chatbots for responses to the prompts that I had been given for writing articles. My interest in this actually got started when I was writing an article, Will Chatbots Replace Humans, and after I wrote the article, my granddaughter fed it into ChatGPT, and it wrote something that was almost as good as mine.

Over the next several months, we did the same exercise with about five or six other articles, and on the last article that I did on this, it was better than mine. And I said at that moment, maybe I should just shut up; there's nothing left to say. But then I realized that the process itself was important, maybe more important than anything facing therapy or the species.

The fact that the chatbot was getting so good, so compelling, so human, so eloquent in its responses, said something. And maybe I'll go off theme a little bit. The first chatbot prediction came from Descartes [00:39:00] 400 years ago. He loved making mechanical devices, especially animals that were very lifelike, and he began thinking in this philosophical way: could we make a thinking machine that would mimic a human thought process? And he said, why not? We can do so many other things; why couldn't we? Theoretically, I can't do it now, but it could be done. He said it would lack a soul, but it could probably do all of the thinking aspects of humans. And we wrote a piece in that Psychiatric Times series on the history of chatbots, which is fascinating.

Going back 400 years, we've had a myth, legend, science fiction, art, and film tradition, going back to the Greeks, where humans are able to create machines that can mimic them. And I guess the clearest one was Frankenstein, and maybe 2001, where the [00:40:00] possibility of us being overwhelmed by our creations became clear.

I think that we are in a situation now as a society where we have our head in the sand about what's happening around us, and we're working as tools to create our replacements. As psychotherapists, I get a lot of complacency from people who say, no machine, no silicon, you know, a bunch of chips, could ever replace the human interaction I'm able to establish with my patients. On the other hand, I hear from patients constantly: my best therapist was a chatbot; somehow or other, the chatbot seemed to understand me better and help me more than a human. Complacency in the face of what's happening, either on a societal level, a psychotherapist level, or an individual level, is completely misplaced.

We're in a revolutionary period, unlike any other technological advance, because this one actually has created tools that are as good as [00:41:00] we are and soon will be better than us in most things. And if I'm a therapist, I do not want to rest on my training. I want to train up in the areas where chatbots are likely to be very severe competitors.

I wanna get good at things that chatbots can't do. I wanna focus on relationships. I wanna integrate therapies, which chatbots can do better than people. I don't wanna stick within one rigid school, because that's gonna leave me in left field as the world changes. So I think that therapeutic complacency or arrogance in the face of chatbot therapists is a grave mistake for our field.

I think that our professional associations are remarkably dumb about this. The two APAs, the American Psychiatric and the American Psychological, are doing almost nothing. At the very least, they should be advocating for patient privacy rules and for efforts to restrain the companies from providing products that are doing therapy without the normal guardrails and [00:42:00] responsibilities and accountability.

The companies haven't, until now, systematically charted adverse events or reported transparently how many there are; it's just happening now under pressure from the lawsuits. My hope really is more in lawsuits, class action lawsuits, than in our organizations. But I think they need to be much more active.

So at every level, at a societal level, a professional association level, an individual therapist level, and as citizens, I think we have to take this very seriously. See the good, there's certainly a silver lining in there somewhere, but not allow ourselves to be complacent about the enormous risks that artificial intelligence poses for humanity and for therapists.

Well, as long as you are writing about it and speaking about it, I will be reading it and consuming it. And as long as I am practicing, I will do my best to follow this advice and pick up this fight. Dr. Frances, thank you so much for coming back on Psychiatry Bootcamp. [00:43:00] It's always a real joy. My pleasure, Mark.

Thanks for listening to this episode of Psychiatry Bootcamp. If you're enjoying the show, I would love to know what you think. You can connect with us on TikTok or Instagram at Psych Bootcamp. You can visit psychiatrybootcamp.com to sign up for our new newsletter, and you can connect with the rest of the Human Content podcast family on Instagram and TikTok at @humancontentpods.

Thanks to all the listeners for the wonderful feedback. The reviews on Apple Podcasts and the ratings on Spotify really mean the most to me. If you subscribe and leave a review, we're planning on featuring some in upcoming episodes of the podcast. Our episodes are now releasing full length in video on YouTube.

You can find our channel at Psychiatry Bootcamp. Thanks for listening. I'm your host, Mark Mullen. Our executive producers are Aron Korney, Rob Goldman, Shahnti Brook, and myself, Mark Mullen. Season four is produced by Matthew Braddock, and this [00:44:00] episode outline was drafted by Charlie Smaller. Our editor and engineer is Jason Portizo.

The theme music was generously donated by one of my favorite bands, Cave Radio. Find Cave Radio on Spotify. Other music was by Omer Ben-Zvi. To learn about our program disclaimer and ethics policy, submission verification and licensing terms, and our HIPAA release terms, go to psychiatrybootcamp.com, where you can reach out to us with any questions or concerns.

Psychiatry Bootcamp is a Human Content production.

Hey everyone, thanks for watching. If you enjoyed the show, please remember to subscribe to the channel. If you'd like more episodes, you can click right here. I'd love to connect with you more, and I'm looking forward to talking to you.