A recent study found that OpenAI's ChatGPT passed a Turing-style test in the field of psychotherapy, with its responses rated as more empathetic than those of human therapists when offering psychological counseling advice. The finding was reported by the technology outlet The Decoder.
The research team invited 830 participants to compare responses written by ChatGPT with those written by human therapists. Participants identified the author only slightly better than chance: human therapist responses were correctly attributed 56.1% of the time, and ChatGPT responses 51.2% of the time. This suggests that many people have considerable difficulty distinguishing AI from human therapists.
Even more surprisingly, ChatGPT generally scored higher than human experts on empathy, cultural competence, and therapeutic alliance. Its replies tended to be longer, more positive in tone, and richer in nouns and adjectives, making them appear more detailed and compassionate. The study also revealed an interesting bias: when participants believed they were reading AI-generated content, they tended to give lower ratings, whereas AI responses mistakenly attributed to humans generally received higher ratings.
This is not the first study to point to the potential of artificial intelligence in counseling. Research by the University of Melbourne and the University of Western Australia likewise found that ChatGPT offered more balanced and comprehensive advice than humans on social dilemmas, with its responses preferred in 70% to 85% of cases. Even so, most participants said they would still prefer a human counselor.
As AI's potential in mental health services continues to be explored, future research should focus on how to integrate AI into mental health care responsibly and effectively, safeguarding the quality of treatment while improving people's mental well-being.