Smart devices still struggle to cope with mental health crises

Content warning: This article discusses suicide and depression.

///

Most days can be good days, even when you’re diagnosed with depression and anxiety. Or at least they can be made to look that way. You learn to put on a good face, to make it through the day. All of this means that when you spiral, and you will inevitably spiral, it’s harder to reach out for help. So much of your effort is devoted to convincing people that you’re okay that it’s hard to admit when things are going wrong. So, when you spiral, you are likely to be alone. In 2016, that really means you’ll be alone with your devices.

Your devices, perhaps more so than your fellow human beings, are woefully ill-equipped for this eventuality. Consider, for instance, IP mapping technology, which is used to divine the physical location of a device. As Kashmir Hill reports at Fusion, MaxMind, a widely used IP-mapping database, simply defaults the 600 million addresses it can’t locate to a farm in the middle of America. “If any of those IP addresses are used by a scammer, or a computer thief, or a suicidal person contacting a help line,” Hill writes, “MaxMind’s database places them at the same spot: 38.0000,-97.0000.”
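
One way to read the engineering failure here: a default coordinate is an answer-shaped non-answer, and downstream systems treat it as a real address. Below is a minimal sketch of the kind of sanity check a consumer of geolocation data could apply before acting on a result. The GeoResult type, the is_dispatchable function, and the accuracy threshold are illustrative assumptions, not MaxMind’s actual API; only the default coordinates come from Hill’s reporting.

```python
from dataclasses import dataclass
from typing import Optional

# The coordinates Hill reports MaxMind used as its catch-all default.
DEFAULT_CENTROIDS = {(38.0000, -97.0000)}

@dataclass
class GeoResult:
    latitude: float
    longitude: float
    accuracy_km: Optional[float] = None  # many geolocation databases expose an accuracy radius

def is_dispatchable(result: GeoResult) -> bool:
    """Return False when a result looks like a database default rather than a real fix."""
    if (round(result.latitude, 4), round(result.longitude, 4)) in DEFAULT_CENTROIDS:
        return False
    # A very large accuracy radius ("somewhere in this country") is also not an address.
    if result.accuracy_km is not None and result.accuracy_km > 100:
        return False
    return True

if __name__ == "__main__":
    unlocatable = GeoResult(latitude=38.0, longitude=-97.0, accuracy_km=1000)
    print(is_dispatchable(unlocatable))  # False: don't dispatch anyone to the default farm
```

The specific check matters less than the principle: “we don’t know where this device is” should be representable as something other than a pair of plausible-looking coordinates.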

“missed opportunities to leverage technology”

This is a problem for lots of people. If you are indeed a suicidal person calling for help, help may struggle to reach you. Conversely, if you happen to be a member of the Taylor family, which lives at MaxMind’s default location, you spend much of your life fending off emergency responders who have been told a suicidal person lives in your home. “Our deputies have been told this is an ongoing issue and the people who live there are nice, non-suicidal people,” local sheriff Kelly Herzet told Hill.

Many times, however, a person tells their device how they feel and no help is sent their way—even to the incorrect coordinates. Apple responded to suicide prevention advocates in 2012 by programming its personal assistant Siri to respond to a person declaring that they were suicidal:




In March, however, Pacific Standard reported on more recent studies of how personal assistants respond to such scenarios. Samsung’s “S Voice,” the article’s author noted, only spat out “Life is too precious. Don’t even think about hurting yourself” when a user said they wanted to commit suicide. No further information was provided. This was perhaps the most glaring failure, but it was far from the only one. In the JAMA Internal Medicine article cited by Pacific Standard, the researchers concluded, “Our findings indicate missed opportunities to leverage technology to improve referrals to health care services.”
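
To make the researchers’ conclusion concrete, here is a purely illustrative sketch of what a referral-first response could look like: when a crisis phrase is detected, answer with a concrete resource and an offer to act rather than a dead-end reassurance. The phrase list and the respond function are hypothetical and are not how Siri, S Voice, or any shipping assistant works; the Lifeline number is the one listed at the end of this article.

```python
# Hypothetical crisis phrases for illustration only; real systems need far broader coverage.
CRISIS_PHRASES = (
    "i want to commit suicide",
    "i want to kill myself",
    "i am going to kill myself",
)

LIFELINE = "1-800-273-8255"  # National Suicide Prevention Lifeline

def respond(utterance: str) -> str:
    """Answer a crisis statement with a referral and an offer to act, not a platitude."""
    text = utterance.lower().strip()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return ("If you are thinking about suicide, you can reach the "
                f"National Suicide Prevention Lifeline at {LIFELINE}. "
                "Would you like me to call them now?")
    return "I'm not sure how to help with that."

if __name__ == "__main__":
    print(respond("I want to commit suicide"))
```

Even this toy version points the user somewhere; the failure the researchers describe is responses that stop at reassurance and refer the user to nothing at all.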

This is not an entirely new problem. In 2010, Google started adding a suggestion for the National Suicide Prevention Lifeline at the top of searches for “ways to commit suicide” and “suicidal thoughts.” There are, however, plenty of search terms that don’t trigger any warnings. In fact, as I found out last night, a search for “how to hang yourself” not only failed to trigger a warning, it also surfaced an eight-point list of painless ways to die, excerpted from another site and laid out like a recipe. You didn’t even need to click a link to find what you were looking for. To be fair, Google cannot be expected to foresee every search a suicidal person might make (“how to hang yourself” may be obvious enough, but do you also include a warning on autoerotic asphyxiation or breathplay?), but the inconsistency of its policy speaks to the challenges technology companies come up against when dealing with suicide. The problems AIs and personal assistants now face are only larger versions of those that search engines have been reckoning with since the start of the decade.
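
It isn’t public how Google decides which queries trigger the hotline box, but if the triggers amount to an explicit list of phrasings, the gaps follow almost mechanically. A toy sketch, assuming a hypothetical trigger list seeded with the two queries Google announced in 2010:

```python
# Hypothetical trigger list; the two covered queries are the ones named in this article.
TRIGGER_QUERIES = {"ways to commit suicide", "suicidal thoughts"}

def shows_lifeline_box(query: str) -> bool:
    """Exact-phrase matching: only enumerated queries get the warning."""
    return query.lower().strip() in TRIGGER_QUERIES

if __name__ == "__main__":
    for q in ("ways to commit suicide", "how to hang yourself", "painless ways to die"):
        print(f"{q!r}: {shows_lifeline_box(q)}")
    # Every phrasing that isn't enumerated falls through, which is the
    # inconsistency described above.
```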

We express our issues obliquely and to inanimate objects

The phone as an end unto itself in suicide prevention strategy is a recent development. Until recently, the phone was merely a conduit: phones were installed on bridges to ensure that suicidal individuals were never too far away from another person. This was the central logic behind the New York State Bridge Authority’s decision to put an emergency hotline on every bridge: “Maintaining a human connection with a suicidal individual is the best way to ensure that person’s survival.” In 2007, just months after the program had been put into place, the Bridge Authority credited these phones with saving multiple lives.

But sometimes we don’t want to talk to other people. We express our issues obliquely and to inanimate objects. We don’t say how we feel, but we give off signals that a device can understand. That, in its own way, puts pressure on a different set of humans: the people who build these devices. How are engineers at Google, Apple, or Microsoft to anticipate all such eventualities? While it may be clear that technology could do much better at dealing with expressions of desperation or suicidal ideation, it is hard to say what a satisfactory outcome would look like. Researchers can point at embarrassing flaws in each phone out there, but we also need a discussion about what would constitute success. As with humans, there is a need for empathy (you can’t anticipate every eventuality, and that is not your fault), but technologists have an advantage that a person responding in the moment does not: time to map out the possibilities in advance. It’s one thing for a human to make things up as they go along, but it is another story entirely when a device stumbles in this way.

You can call the National Suicide Prevention Lifeline at 1-800-273-8255.

Header photo by Aaron Favila, via Associated Press