Friday, July 12, 2019

The AI That Could Help Curb Youth Suicide

In suicide-prevention literature, “gatekeepers” are community members who may be able to offer help when someone expresses suicidal thoughts. It’s a loose designation, but it generally includes teachers, parents, coaches, and older co-workers—people with some form of authority and ability to intervene when they see anything troubling.
Could it also include Google? When users search certain key phrases related to suicide methods, Google’s results prominently feature the number for the National Suicide Prevention Lifeline. But the system isn’t foolproof. Google can’t edit webpages, just search results, meaning internet users looking for information about how to kill themselves could easily find it through linked pages or on forums, never having used a search engine at all. At the same time, on the 2019 internet, “run me over” is more likely to be a macabre expression of fandom than a sincere cry for help—a nuance a machine might not understand. Google’s artificial intelligence is also much less effective at detecting suicidal ideation when people search in languages other than English.
Ultimately, search results are a useful, but very broad, area in which to apply prevention strategies. After all, anyone could be looking for anything for any reason. Google’s latest foray into algorithmic suicide prevention is more targeted, for people who are already asking for help. In May, the tech giant granted $1.5 million to the Trevor Project, a California-based nonprofit that offers crisis counseling to LGBTQ teenagers via a phone line (TrevorLifeline), a texting service (TrevorText), and an instant-messaging platform (TrevorChat). The project’s leaders want to improve TrevorText and TrevorChat by using machine learning to automatically assess suicide risk. It’s all centered on the initial question that begins every session with a Trevor counselor: “What’s going on?”
“We want to make sure that, in a nonjudgmental way, we’ll talk suicide with them if it’s something that’s on their mind,” says Sam Dorison, the Trevor Project’s chief of staff. “And really let them guide the conversation. Do [they] want to talk about coming out [or] resources by LGBT communities within their community? We really let them guide the conversation through what would be most helpful to them.”
Currently, those who reach out enter a first-come, first-served queue. Trevor’s average wait time is less than five minutes, but in some cases, every second counts. Trevor’s leadership hopes that eventually, the AI will be able to identify high-risk callers from their response to that first question and connect them with human counselors immediately.
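The Trevor Project hasn’t described how such triage would actually be implemented. As a rough sketch of the idea, a waiting queue ordered by a model’s risk score rather than by arrival time alone might look like the following; every name and score here is invented.

```python
import heapq
import itertools


class TriageQueue:
    """Hypothetical sketch: serve higher-risk chats first, falling back to
    arrival order when risk scores tie. Not the Trevor Project's actual code."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves first-come, first-served among ties

    def add(self, session_id, risk_score):
        # heapq is a min-heap, so negate the score to pop the highest risk first.
        heapq.heappush(self._heap, (-risk_score, next(self._counter), session_id))

    def next_session(self):
        if not self._heap:
            return None
        _, _, session_id = heapq.heappop(self._heap)
        return session_id


queue = TriageQueue()
queue.add("chat-001", risk_score=0.20)
queue.add("chat-002", risk_score=0.85)  # flagged as higher risk by the model
print(queue.next_session())  # -> chat-002 skips ahead of chat-001
```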
The AI will be trained on two kinds of data: the very beginning of youths’ conversations with counselors, and the risk assessments counselors complete after they’ve spoken with them. The idea is that by comparing those opening messages with the risk counselors ultimately assign, the AI can learn to predict suicide risk from the earliest responses alone.
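The article doesn’t say what kind of model Trevor and Google will build. As a minimal illustration of the general approach, pairing opening messages with the risk labels counselors later assign and fitting an off-the-shelf text classifier could look like this sketch, which assumes Python and scikit-learn; every message and label below is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: (first message of a chat, counselor's eventual risk label).
first_messages = [
    "i just want to talk about coming out to my parents",
    "i don't think i can keep going much longer",
    "looking for lgbt groups near me",
    "i have a plan and i'm scared of myself",
]
risk_labels = [0, 1, 0, 1]  # 1 = counselor assessed the conversation as high risk

# Bag-of-words features plus a linear classifier: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(first_messages, risk_labels)

# Probability that a new opening message resembles past high-risk conversations.
print(model.predict_proba(["nothing is ok anymore"])[0][1])
```

In any real deployment the counselor’s judgment stays in the loop; a score like this would only decide who a human talks to first.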
“We think that if we’re able to train the model based on those first few messages and the risk assessment, that there’s a lot more things that you don’t see that a machine could pick up on and can potentially help us learn more about,” says John Callery, the director of technology for the Trevor Project. Counselors will continue to make their own assessments, Callery added, noting that Trevor’s deescalation rate is 90 percent.
Algorithms have incredible potential to recognize unseen patterns, but what’s essential to being a good gatekeeper is agency—stepping forward and intervening if something’s wrong. That may or may not be a thing we want to imbue technology with, though in some ways we already have. Public-health initiatives in Canada and the U.K. mine social-media data to predict suicide risk. Facebook uses AI to quickly flag live videos to police if algorithms detect self-harm or violence.
We query Google on everything from hangover cures to medical advice to how to get over a breakup. The results can be mixed, or even misleading, but the search bar doesn’t pass judgment.
“[Students] go home, they get online, and they can disclose any of this stuff to anybody in the whole world,” says Stephen Russell, the chair of human development and family science at the University of Texas at Austin. Russell has been conducting pioneering research on LGBTQ youth for decades and says that while troubled students “shouldn’t have to go to Google” to address these problems, training real-life gatekeepers to be open and engaged allies doesn’t always work, because of decades of stigma and bias against the queer community. “Even today I hear [administrators] say, ‘Well, we don’t have kids like that here.’ That’s been an ongoing dilemma,” he says.
Which is where the Trevor Project comes in. Eventually, the nonprofit’s leaders want an AI system that will predict what resources youths will need—housing, coming-out help, therapy—all by scanning the first few messages in a chat. Long term, they hope to evolve the AI so it can recognize patterns in metadata beyond just scanning initial messages. For example, if the AI could determine reading or education level from the messages, could it make inferences about how structural factors affect suicide risk? It seems impossible that following a tangled field of “if, then” statements could save someone’s life, but soon enough, it could.
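Predicting which resources a youth might need is, in machine-learning terms, a multi-label variant of the same idea: several yes/no predictions made from the same opening messages. A hedged sketch, again with entirely invented examples and scikit-learn as an assumed tool:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Invented toy examples: opening messages tagged with the resources
# a counselor eventually pointed the youth toward.
messages = [
    "my parents kicked me out and i have nowhere to stay",
    "how do i tell my friends i'm trans",
    "i think i need to talk to a therapist about everything",
    "i got kicked out after coming out and i'm sleeping in my car",
]
resources = [
    ["housing"],
    ["coming_out"],
    ["therapy"],
    ["housing", "coming_out"],
]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(resources)

# One binary classifier per resource type, over simple bag-of-words features.
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
model.fit(messages, y)

predicted = model.predict(["i came out and now i can't go home"])
print(binarizer.inverse_transform(predicted))
```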
