How Artificial Intelligence is progressing in mental healthcare
Researchers at the University of Southern California have developed an algorithm that can help prevent suicides. Here’s a look at how Artificial Intelligence tools are being used to fill gaps in mental healthcare
Ahead of World Suicide Prevention Day on 10 September last year, a World Health Organization (WHO) report revealed that India had the highest suicide rate in WHO’s South-East Asia region. According to the report, Suicide In The World: Global Health Estimates, the suicide rate in India was pegged at 16.5 incidents per 100,000 people in 2016, against a global age-standardized rate of 10.5 per 100,000.
According to the report, suicide is among the top 20 leading causes of death worldwide. Over the years, Artificial Intelligence (AI) tools have been used to fill gaps in mental healthcare, be it the diagnosis or detection of the early signs of mental health issues. Now, researchers at the University of Southern California’s Viterbi School of Engineering (USC’s VSE) have developed an algorithm that can identify individuals in real-life social groups who can be trained as gatekeepers to spot suicidal tendencies.
“Gatekeeper training” is an intervention training method approved by WHO. A suicide prevention gatekeeper can be any community member. On a university campus, it could be a teacher or a coach, for instance.
The algorithm follows the theme of gatekeeper training but chooses the right people for the role through AI. “The idea is for our algorithm to help improve an intervention that is already being used for suicide prevention. It’s the most popular suicide-prevention intervention. It is often referred to with the initials QPR: question, persuade, refer,” says Phebe Vayanos, assistant professor of industrial and systems engineering and computer science at USC’s VSE.
According to an official news release on the school’s website, Vayanos, also an associate director of USC’s Center for Artificial Intelligence in Society, and PhD candidate Aida Rahmattalabi, the lead author of the study Exploring Algorithmic Fairness In Robust Graph Covering Problems, investigated the “potential of social connections such as friends, relatives, and acquaintances to help mitigate the risk of suicide”. Their paper was presented at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver in December.
“Currently, these gatekeeper interventions which are conducted on college campuses, (or in the) military…they do not really leverage social network information. They just say: Let’s train some people that interact a lot with others. For example, we could train professors on college campuses. But our colleagues from social work observed that this is not a strategic choice. I am not necessarily, for example, as a professor the one that sees others closely in their everyday environment and sees that something is wrong,” Vayanos explains in a Skype interview. “Although it is a good first step, I think what we would like to do is also train students and people that are really closely connected to others in the network,” she adds.
The only data set the algorithm uses is the real-life social network. “We don’t use personal characteristics of people other than their position in the networks or who they are connected to,” says Vayanos. “As far as I know, there is no data that can help you say whether your personal characteristics are going to make you a good gatekeeper or not…. Our hope is to collect such data when we start our first intervention. Then, we can make even more informed decisions that not only take into account the person’s position in a network but also how likely they are to be a good gatekeeper or not.”
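The paper’s title frames gatekeeper selection as a graph covering problem: pick a small set of people whose real-world connections reach as much of the network as possible. The sketch below only illustrates that covering idea on an invented toy network; it leaves out the robustness and fairness constraints that are the actual subject of the study, and is not the researchers’ method.

```python
# Illustrative sketch only: a greedy maximum-coverage heuristic for
# choosing k gatekeepers in a social network. The study solves a robust
# graph covering problem with fairness constraints; this simplified
# version ignores both and exists only to show the covering idea.

def choose_gatekeepers(adjacency, k):
    """Greedily pick k people so that their neighbourhoods cover as many
    others as possible. `adjacency` maps each person to the set of
    people they interact with in everyday life."""
    covered = set()
    gatekeepers = []
    for _ in range(k):
        # Pick the person whose neighbourhood adds the most new coverage.
        best = max(adjacency, key=lambda n: len(adjacency[n] - covered))
        if not adjacency[best] - covered:
            break  # no remaining person covers anyone new
        gatekeepers.append(best)
        covered |= adjacency[best]
    return gatekeepers, covered

# Hypothetical toy network: keys are people, values are who they see daily.
network = {
    "asha": {"ben", "cara", "dev"},
    "ben":  {"asha", "cara"},
    "cara": {"asha", "ben", "dev", "eli"},
    "dev":  {"asha", "cara"},
    "eli":  {"cara"},
}
picked, covered = choose_gatekeepers(network, k=2)
print(picked, covered)  # ['cara', 'asha'] cover the whole toy network
```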
As of now, Vayanos and her fellow researchers are hiring people who can help design an interface to show the gatekeeper choices in an attractive manner. They have also applied for IRB (institutional review board) approval to conduct an intervention on the University of Denver campus, with discussions also under way for a pilot on the USC campus. “Once it is out, it can be used anywhere. We are looking at college campuses and homeless shelters right now. But one could envisage using it in companies or doing it in a place where there is a community feeling,” adds Vayanos.
AI and its subfields are also being deployed to use social media to study predictors or early signs of mental health problems. In 2018, researchers from the World Well-Being Project, a research group at the University of Pennsylvania’s Positive Psychology Center that analyses social media language, used an algorithm to study almost half a million status updates on Facebook from around 1,200 users to pick up signs indicative of depression. According to the findings of their paper, published in the journal Proceedings Of The National Academy Of Sciences, the researchers built a prediction model using the text of the Facebook posts, post length, frequency of posting, temporal posting patterns and demographics.
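As a rough illustration of that kind of prediction model (not the Penn study’s actual pipeline, and with invented posts and labels), a text classifier over status updates might look like this in Python; the real model also folded in post length, posting frequency, temporal patterns and demographics:

```python
# Illustrative sketch, not the study's pipeline: a toy text-based
# depression-risk classifier using scikit-learn. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [  # hypothetical status updates
    "had a great weekend hiking with friends",
    "can't sleep again, everything feels pointless",
    "so excited about the new job!",
    "tired of feeling this alone all the time",
]
labels = [0, 1, 0, 1]  # 1 = flagged in this toy setup

# TF-IDF word and bigram features feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# On this tiny toy data the model should lean towards class 1 here.
print(model.predict(["feeling pointless and alone lately"]))
```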
Similarly, in 2019, researchers from the University of Pennsylvania also studied expressions of loneliness among Twitter users with the help of natural language processing. Their paper and findings were published in the peer-reviewed open-access medical journal BMJ Open. The researchers collected approximately 400 million tweets in Pennsylvania between 2012 and 2016. They identified users whose posts contained the words “lonely” or “alone” and compared them to a control group matched by age, gender and period of posting. Using natural-language processing, they “characterised the topics and diurnal patterns of users’ posts, their association with linguistic markers of mental health and if language can predict manifestations of loneliness”, the paper explains. The findings reveal that Twitter timelines of more than 6,000 users, with posts including the words “lonely” or “alone”, also included themes of difficult interpersonal relationships, psychosomatic symptoms, substance use, wanting change and unhealthy eating, among other things. The posts were also associated with linguistic indications of anger, depression and anxiety.
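The keyword-filtering step described above is straightforward to sketch. The code below, with invented tweet records and field layout, selects users whose posts contain “lonely” or “alone” and tallies their posting by hour of day, the kind of diurnal pattern the study examined; the study’s full pipeline also involved topic modelling and linguistic markers:

```python
# Illustrative sketch of the keyword-filtering and diurnal-tally steps.
# Tweet records and their layout here are invented for the example.
import re
from collections import Counter
from datetime import datetime

LONELY = re.compile(r"\b(lonely|alone)\b", re.IGNORECASE)

tweets = [  # hypothetical records: (user, ISO timestamp, text)
    ("u1", "2015-03-02T01:15:00", "feeling so alone tonight"),
    ("u2", "2015-03-02T09:30:00", "great coffee this morning"),
    ("u1", "2015-03-03T02:40:00", "lonely again, can't sleep"),
]

# Users with at least one post containing "lonely" or "alone".
lonely_users = {user for user, _, text in tweets if LONELY.search(text)}

# Hour-of-day histogram of those users' matching posts.
by_hour = Counter(
    datetime.fromisoformat(ts).hour
    for user, ts, text in tweets
    if user in lonely_users and LONELY.search(text)
)
print(lonely_users, by_hour)  # {'u1'} Counter({1: 1, 2: 1})
```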
Though Vayanos believes analysing social media for signs of mental health issues can be interesting, she doesn’t believe that being friends with someone on Twitter or Facebook necessarily positions you to identify warning signs. “Your language online may show that you are experiencing suicidal ideation…. There is definitely value in that. But we pursued a completely different direction. It turns out that when you are faced with suicidal ideations, you do various things. For example: You give away your things. You do things that only someone who is close to you and interacts with you (in the real world) would see.”
FIRST PUBLISHED 24.01.2020 | 08:35 PM IST