NEWS

AI algorithms to prevent suicide gain traction

Facebook is one of several companies exploring ways to detect online behaviours that have been linked to self-harm.
A crisis call box on the Golden Gate Bridge. The incidence of suicide is rising in the United States among people aged 15-34. Credit: Justin Sullivan/Getty
A growing number of researchers and tech companies are beginning to mine social media for warning signs of suicidal thoughts. Their efforts build on emerging evidence that the language patterns of a person's social-media posts, as well as the subconscious ways in which they interact with their smartphone, can hint at psychiatric trouble.
Businesses are just starting to test programs that automatically detect such signals. Mindstrong, an app developer in Palo Alto, California, for instance, is developing and testing machine-learning algorithms to correlate the language that people use and their behaviour, such as scrolling speed on smartphones, with symptoms of depression and other mental disorders. Next year, the company will expand its research to focus on behaviour associated with suicidal thoughts, which could eventually help health-care providers detect patients’ intentions to harm themselves more quickly. And in late November, Facebook announced that it was rolling out its own automated suicide-prevention tools in much of the world. Tech giants Apple and Google are pursuing similar ventures.
Some mental-health professionals hope such tools could help reduce the number of people who attempt suicide, which is rising in the United States, where it is the second leading cause of death among people between the ages of 15 and 34. And young people are more likely to reach out for help on social media than to see a therapist or call a crisis hotline, according to social-work researcher Scottye Cash at Ohio State University in Columbus.
But Cash and other experts are concerned about privacy and the companies’ limited transparency, especially given the lack of evidence that digital interventions work. Facebook users deserve to know how their information is being used and to what effect, says Megan Moreno, a paediatrician at the University of Wisconsin in Madison. “How well are these [tools] working? Are they causing harm, are they saving lives?”
Machine intervention
Those at risk of suicide are difficult to identify, at least in the short term, and so suicide is difficult to prevent, says Matthew Nock, a psychologist at Harvard University in Cambridge, Massachusetts. According to Nock, most people who attempt suicide deny considering it when talking to mental-health professionals1. Social media, however, provides a window into their emotions in real time. “We can bring the lab to the person,” Nock says.
Machine-learning algorithms can help researchers and counsellors discern when an emotional social-media post is a joke, an expression of normal angst, or a real suicide threat by identifying patterns a human might miss, says Bob Filbin, chief data scientist at the Crisis Text Line in New York City.
By analysing 54 million messages sent to Crisis Text Line, which enables people to talk to counsellors by text message, Filbin and his colleagues have found that people who are contemplating ending their lives rarely use the word ‘suicide’; words such as ‘ibuprofen’ or ‘bridge’ are better indicators of suicidal thoughts. With the help of such insights, Filbin says, Crisis Text Line’s counsellors can usually determine within three messages whether they should alert emergency responders to an imminent threat.
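Crisis Text Line has not published its models, but the broad approach Filbin describes, learning which words in a message signal risk, can be illustrated with a generic text classifier. The sketch below is a minimal Python example using scikit-learn; the messages, labels and scoring are invented placeholders for illustration, not the organisation's actual data or system.

```python
# A minimal, illustrative text-risk classifier. The messages and labels below
# are invented placeholders; a real system would be trained on large volumes
# of annotated conversations and carefully validated before use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = a counsellor judged the message an imminent risk.
messages = [
    "I took a whole bottle of ibuprofen an hour ago",
    "I'm standing on the bridge and I can't do this anymore",
    "Ugh, this exam is going to kill me",
    "My boyfriend dumped me and I'm so upset",
    "I have the pills laid out in front of me",
    "Worst day ever, I hate everything right now",
]
labels = [1, 1, 0, 0, 1, 0]

# Word-level TF-IDF features plus a linear classifier: the model learns which
# terms (e.g. 'ibuprofen', 'bridge', 'pills') carry the most weight.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score an incoming message; a counsellor, not the model, makes the final call.
print(model.predict_proba(["i'm on the bridge right now"])[0][1])
```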
Mindstrong’s president, Thomas Insel, says that collecting “passive” data from a person’s devices may be more informative than having them answer a questionnaire, for instance. Mindstrong’s app would be installed on a patient’s phone by a mental health-care provider and would run in the background, collecting data. Once it had developed a profile of an individual’s typical digital behaviour, it would be able to detect worrying changes, Insel says. The company has partnered with health-care companies to help users access medical care when the app spots trouble.
“I'm not too confident that anything requiring someone to open an app will be that useful in a moment of crisis,” Insel says.
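Mindstrong has not disclosed how its algorithms work. As a rough illustration of the general idea of detecting a departure from a person's own behavioural baseline, the sketch below applies a simple z-score test to a passively logged signal; the scrolling-speed values, window length and threshold are all assumptions made for this example.

```python
# Illustrative baseline-and-deviation check on a passively logged signal.
# The scroll-speed numbers, window and threshold are invented; a real product
# would combine many signals and clinically validated models.
from statistics import mean, stdev

def flag_change(daily_values, baseline_days=14, z_threshold=2.0):
    """Flag today's value if it deviates sharply from the user's own recent baseline."""
    if len(daily_values) <= baseline_days:
        return False  # not enough history yet to form a profile
    baseline = daily_values[-baseline_days - 1:-1]  # the previous N days
    today = daily_values[-1]
    spread = stdev(baseline)
    if spread == 0:
        return False
    z_score = abs(today - mean(baseline)) / spread
    return z_score > z_threshold

# Example: average scrolling speed (screens per minute) logged once a day.
history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7,
           12.0, 12.1, 11.9, 12.2, 12.3, 12.0, 12.1, 4.2]  # sudden drop today
print(flag_change(history))  # True: today's behaviour departs from the baseline
```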
Will it work?
The larger question may be how — and when — to intervene. False positives are likely to remain high, Nock says, and so doctors or companies using technology to detect suicide risk will have to decide what level of certainty is required before sending help.
There is also little evidence that resources such as suicide hotlines save lives, and interventions can backfire by making people feel more vulnerable, Moreno says. For instance, she and others have found that people sometimes block friends who report social-media posts containing possible suicide threats, making friends less likely to report them in the future2.
Facebook’s new suicide-prevention programme relies heavily on user reports, as well as on proprietary algorithms that scan posts for ‘red flags’. The programme then contacts the user or alerts a human moderator, who decides whether to warn people in the user's network, provide links to resources such as Crisis Text Line or notify emergency responders.
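Facebook has described these steps only in outline. Purely as an illustration of that reported flow, a user report or algorithmic flag followed by a human moderator's decision, the hypothetical routine below sketches the triage logic; the function names, threshold and categories are invented, not Facebook's implementation.

```python
# Hypothetical triage routine modelled on the publicly described steps: a post
# is flagged by a user report or an automated scan, then a human moderator
# chooses a response. Names, threshold and categories are illustrative only.
from enum import Enum, auto

class Action(Enum):
    SEND_RESOURCES = auto()     # e.g. links to services such as Crisis Text Line
    NOTIFY_NETWORK = auto()     # warn people in the user's network
    ALERT_RESPONDERS = auto()   # notify emergency responders
    NO_ACTION = auto()

def triage(reported_by_user, model_score, moderator_judgement):
    """Combine automated flags with a human moderator's decision."""
    flagged = reported_by_user or model_score > 0.8  # arbitrary cut-off
    if not flagged:
        return Action.NO_ACTION
    # The moderator, not the algorithm, decides how to intervene.
    if moderator_judgement == "imminent":
        return Action.ALERT_RESPONDERS
    if moderator_judgement == "at risk":
        return Action.NOTIFY_NETWORK
    return Action.SEND_RESOURCES

print(triage(reported_by_user=False, model_score=0.93, moderator_judgement="at risk"))
```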
But the company would not provide details on how the algorithms or moderators do their work, nor would it specify whether it will follow up with users to validate the algorithms or evaluate the efficacy of its interventions. In a statement, a Facebook spokesperson said that the tools were developed “in collaboration with experts” and that users cannot opt out of the service.
The company's tight-lipped approach has left some researchers concerned. “They have a responsibility to base all their decisions on evidence,” Cash says. Yet the company is providing little information that outside experts can use to judge its programme.
Nevertheless, Insel is glad that Facebook is trying. “You have to put this in the context of what we do now,” he says, “which is not working.”