Large language models such as ChatGPT have the potential to detect mental health risks, including depression and suicide, in patients already under psychiatric treatment, according to a study published in JAMA Network Open.
The research conducted by a Korean team indicates that although these models "demonstrate potential" for detecting this type of risk, "it is essential to continue improving performance and safety before their clinical application."
The team examined the potential of large language models, artificial intelligence systems trained on vast amounts of data that are capable of understanding and generating natural language. It also analyzed embeddings (embedding models), a natural language processing technique that converts human language into mathematical vectors.
The study was based on data from 1,064 psychiatric patients between 18 and 39 years old who completed various self-assessment questionnaires and sentence completion tests.
The latter present the patient with a series of unfinished sentences to complete with the first thing that comes to mind, providing subjective information about, for example, their self-concept or their interpersonal relationships.
The data were processed by large language models such as GPT-4 and Google DeepMind's Gemini 1.0 Pro, and by text embedding models such as OpenAI's text-embedding-3-large.
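As a rough illustration of the kind of processing involved, and not the study's actual pipeline, the sketch below shows how a text embedding model turns a patient's sentence completion into a numerical vector and how a large language model can be prompted to flag possible risk. It assumes the OpenAI Python SDK and API access; the model names match those cited above, but the example text, prompt, and labels are purely hypothetical.

```python
# Hypothetical sketch, not the study's method: embed a sentence completion
# and ask an LLM for a coarse risk label. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

narrative = "When I think about the future, I feel there is nothing left for me."

# 1) Convert the free-text response into a mathematical vector (embedding).
embedding = client.embeddings.create(
    model="text-embedding-3-large",
    input=narrative,
).data[0].embedding
print(f"Embedding dimensions: {len(embedding)}")

# 2) Ask a large language model for an illustrative, non-diagnostic label.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a research assistant. Label the text as "
                    "'possible risk' or 'no clear risk'. This is not a diagnosis."},
        {"role": "user", "content": narrative},
    ],
)
print(reply.choices[0].message.content)
```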
The research indicates that these models "have demonstrated their potential in assessing mental health risks," including depression and suicide, "based on narrative data from psychiatric patients."
Commenting on the study, in which he did not participate, Alberto Ortiz, from La Paz University Hospital (Madrid), pointed out that it was conducted on patients already undergoing psychiatric treatment, so generalizing its results to risk detection in the general population "is not possible, for the moment."
Ortiz told the Science Media Centre, a scientific resource platform, that the application of AI in mental health will, in any case, have to focus on people's subjective narratives, as was done in this research.
However, he considered that “it is one thing to detect risk and screen, and quite another to treat people with psychological suffering, a task that goes beyond applying a technological solution and in which the professional's subjectivity is essential to building the therapeutic bond”.







