Depression is one of the most common mental illnesses and a leading cause of disability. It is a mood disorder characterized by persistently low mood, sadness, and loss of interest.
Can depression actually be treated successfully? The short answer is yes, but treatment is often complicated, largely because of the patient: people with depression frequently don't seek help and don't even tell others about their condition.
Because of this, treatment is delayed, the depression deepens, and patients may turn to harmful coping mechanisms such as drugs and smoking.
A new technology from MIT could help address this problem. A team of researchers at the Massachusetts Institute of Technology (MIT), consisting of Tuka Alhanai, Mohammad Ghassemi, and James Glass, has created an AI model capable of detecting depression by analyzing a patient's written and spoken responses.
The researchers describe the model as "context-free." They developed a neural-network model that can analyze text or audio from a person and assign a score indicating that person's level of depression.
“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory.
Therapists typically use a combination of tried-and-true questions and direct observation to diagnose a patient's mental health condition.
Based on the results of these assessments, the therapist determines whether a person is affected by depression and what kind of treatment is required.
The MIT model does something similar, but without the need for conditioned questions or direct observation. It doesn't need context.
The research team says the model can accurately predict whether an individual is depressed without needing any other information about the questions and answers, which is why they call it "context-free."
“If you want to deploy depression-detection models in a scalable way, you want to minimize the number of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual,” said Alhanai.
“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai added. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”
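The idea Alhanai describes can be sketched very loosely in code. The real MIT system is a neural network trained on audio and text; the toy below is only a conceptual illustration, using bigram (two-word sequence) frequencies and made-up example data to show how word patterns seen more often in one group can be used to score new text:

```python
# Deliberately simplified, stdlib-only sketch of the idea described above:
# learn which word sequences appear more often in one group's text than
# the other's, then score new text by those learned patterns.
# NOT the MIT model - a conceptual toy with hypothetical data.
from collections import Counter
from math import log

def bigrams(text):
    """Split text into consecutive word pairs."""
    words = text.lower().split()
    return list(zip(words, words[1:]))

def train(group_a_texts, group_b_texts):
    """Return a log-likelihood-ratio weight for each observed bigram."""
    a = Counter(bg for t in group_a_texts for bg in bigrams(t))
    b = Counter(bg for t in group_b_texts for bg in bigrams(t))
    vocab = set(a) | set(b)
    # Add-one smoothing so unseen bigrams don't produce log(0).
    a_total = sum(a.values()) + len(vocab)
    b_total = sum(b.values()) + len(vocab)
    return {bg: log((a[bg] + 1) / a_total) - log((b[bg] + 1) / b_total)
            for bg in vocab}

def score(model, text):
    """Positive score: patterns more typical of group A's texts."""
    return sum(model.get(bg, 0.0) for bg in bigrams(text))
```

For example, training on two tiny invented text lists and scoring a new sentence would assign a positive score when the sentence shares more word sequences with the first group than the second. A real system replaces these hand-counted statistics with learned neural representations.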
To test their AI, the researchers trained the model on a dataset of 142 interactions from the Distress Analysis Interview Corpus, which contains audio, text, and video interviews of patients with mental health issues conducted by virtual agents controlled by humans. Of the subjects in the dataset, 20% were labeled as depressed.
Each subject was scored on a depression scale from 0 to 27. Scores between 10 and 14 were considered moderate, scores between 15 and 19 were considered depressed, and scores below that range were considered not depressed.
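The scoring rule above maps directly to a small piece of code. This is only an illustrative sketch of the thresholds as the article states them; the MIT model's internal scoring may differ:

```python
# Hypothetical sketch of the 0-27 scoring thresholds described above.
def categorize_score(score: int) -> str:
    """Map a 0-27 depression score to the article's coarse categories."""
    if not 0 <= score <= 27:
        raise ValueError("score must be between 0 and 27")
    if score >= 15:
        return "depressed"
    if score >= 10:
        return "moderate"
    return "not depressed"
```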
The researchers asked participants a series of questions, and respondents were free to answer however they wanted. Responses were recorded as a mix of text and audio. In the text version, the AI was able to predict depression after about seven question-and-answer sequences.
In the audio version, however, it took about 30 sequences for the AI to make a determination. In the tests run so far, the model demonstrated a success rate of 77% and outperformed nearly all other models, which rely heavily on question-and-answer structure.
The sequencing technique helps the model look at the conversation as a whole and note differences between how people with and without depression speak over time.
“That implies that the patterns in words people use that are predictive of depression happen in a shorter time span in text than in audio,” Alhanai added.
One limitation is that the model requires far more data to predict depression from audio than from text.
This work represents a "very encouraging" pilot, says Glass, a senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The researchers now seek to discover what specific patterns the model identifies across scores of raw data.
“Right now it’s a bit of a black box,” Glass says. “These systems, however, are more believable when you have an explanation of what they’re picking up. … The next challenge is finding out what data it’s seized upon.”
In my opinion, this technology could be very useful for identifying mental distress in casual conversations in clinical offices. The method also has the potential to be developed into a tool that detects signs of depression in natural conversation, such as a mobile app that monitors a user's text and voice for mental distress and sends alerts.
The researchers are also looking to expand their model’s capabilities by testing these methods on additional data from many more subjects with other cognitive conditions, such as dementia.
“It’s not so much detecting depression, but it’s a similar concept of evaluating, from an everyday signal in speech, if someone has cognitive impairment or not,” Alhanai says.