“Big Technology” and Their Responsibility For Suicide Prevention

Sean Erreger LCSW
5 min read · Jan 21, 2019


From how we shop to how we receive healthcare, technology is shaping the decisions we make. As we rely on it for decisions and social interactions, companies like Amazon, Google, and Facebook examine large amounts of data about us, which helps them make decisions about advertising and user experience. As time goes on, stewardship of that data is proving to be a growing responsibility for them.

They have more information about our habits and, increasingly, more detail about our direct thought processes. We typically search Google for a reason. We often post things on social media asking questions and looking for support. These searches are benign when we are searching for groceries, but when someone is in deep emotional pain, they present a host of ethical issues these companies should be considering.

Recent attention has been paid to Facebook’s response to those posting suicidal content and those who have livestreamed a suicide attempt.

These questions lead to complex problems for users and companies alike. Big technology companies like Facebook have a unique opportunity to do something about suicide. From a public health perspective, they control a lot of data that can be helpful but can also be harmful. Facebook is an interesting “case study” because it has received the most publicity on this issue. But as social media and other companies begin to “understand” us more, they will have a responsibility to take action. Here are the points they need to be mindful of.

Assessment

Prior to taking action on suicidal content, a careful assessment is needed. When Facebook is determining suicidal risk, how are they doing it? Over the summer, Facebook gave a sneak peek into how they are using AI to assess suicidal risk. They appear to be using a mix of AI and human reviewers who follow up and check. I found this explanation both worrying and reassuring. It is good to take steps to use tools such as natural language processing and AI.

The science behind this is new, but just as Facebook is consulting with the National Suicide Prevention Lifeline, other technology companies should be consulting with experts in the field. As suicidal ideation becomes more of a “risk” that big technology assumes, reaching out to research organizations such as the American Association of Suicidology is key. The “knowledge” gained from a large corpus of data is only as good as the people interpreting it. When assessing suicide, nuance is key. Ensuring that AI understands this nuance, or that there is a way to manage it (such as passing the case to a human), is critical.

Dealing with concerns such as false positives and misinterpreted signals is going to be key, but how the information is presented to the user also matters. Once risk has been assessed, what does the experience look like for the user?
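As a thought experiment, here is a minimal sketch of what that triage step could look like, assuming a hypothetical `risk_model` whose `score()` method returns a value between 0 and 1. The thresholds are illustrative only, not anything Facebook or any other company has published:

```python
# A minimal triage sketch: a model score plus a mandatory human handoff.
# `risk_model.score(text)` is a hypothetical classifier returning 0.0-1.0;
# the thresholds below are illustrative, not published values.

from dataclasses import dataclass

HIGH_RISK_THRESHOLD = 0.90  # illustrative: urgent human escalation
REVIEW_THRESHOLD = 0.60     # illustrative: routine human review


@dataclass
class TriageDecision:
    action: str   # "no_action", "human_review", or "urgent_human_review"
    score: float
    reason: str


def triage(text: str, risk_model) -> TriageDecision:
    """Score a post, but never let the model act on its own."""
    score = risk_model.score(text)
    if score >= HIGH_RISK_THRESHOLD:
        # Even "high" scores go to a trained human reviewer,
        # never straight to an automated intervention.
        return TriageDecision("urgent_human_review", score,
                              "score above urgent threshold")
    if score >= REVIEW_THRESHOLD:
        return TriageDecision("human_review", score,
                              "ambiguous signal; needs human judgment")
    return TriageDecision("no_action", score, "below review threshold")
```

The point of the sketch is the handoff, not the model: nothing above the lowest threshold is acted on without a person in the loop, which is where the nuance lives.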

Intervention and Consent

This is a larger concern on the user side, and I would argue that engaging with both users and professionals is key. The question I always ask about technology and mental health intervention is: how is it any different from face-to-face contact?

Mental health treatment generally requires consent (I will get to the exceptions in a minute). In the low- to medium-risk categories, then, the process should look the way face-to-face contact would: consent first, then some education and intervention. This is the key to the user interface of these platforms. To the extent that big technology firms are interested in this work, pushing consent, information, and minor intervention is key. Little nudges of “It looks like you are having a hard time, can we connect you with ______ crisis service?” or “It seems like you are depressed… do you need ______?”, with an additional layer of “Because you said _____, we are going to ____.” Convening users and professionals on what they would like that experience to look like should drive the design.
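A minimal sketch of what that consent-first nudge logic could look like is below. The message text, action names, and crisis-service placeholder stand in for the blanks above; none of it is real product copy, though the linked resources are the real services mentioned later in this post:

```python
# A consent-first nudge sketch. Message text, action names, and the
# crisis-service placeholder are illustrative, not real product copy.

LIFELINE_URL = "https://suicidepreventionlifeline.org"
CRISIS_TEXT_LINE_URL = "https://www.crisistextline.org"


def build_nudge(trigger_phrase: str) -> dict:
    """Tell the user what was noticed and why, then ask rather than act."""
    return {
        "message": ("It looks like you are having a hard time. "
                    f"Because you said \"{trigger_phrase}\", "
                    "would you like to be connected with a crisis service?"),
        "options": ["connect_me", "show_resources", "no_thanks"],
    }


def handle_response(choice: str) -> dict:
    """Act only on what the user explicitly chose."""
    if choice == "connect_me":
        return {"action": "warm_handoff", "to": "crisis_service_placeholder"}
    if choice == "show_resources":
        return {"action": "show_links",
                "links": [LIFELINE_URL, CRISIS_TEXT_LINE_URL]}
    # At low and medium risk, "no thanks" is respected: no silent escalation.
    return {"action": "none"}
```

The “because you said _____, we are going to ____” layer is simply the message string: the reason for the prompt is shown to the user rather than hidden.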

So here is where things get messy… and fast. What should happen when an algorithm decides you are high risk? Face to face, clinicians often have to make the decision that a suicidal person might require police intervention. This is a decision that should not be taken lightly. The process has nuance, and the relevant laws and interventions vary at the state and local level. Not only that, but one has to consider the training of the officer who takes the call. Tech companies are going to have to get familiar with the “grey” complexities of deeming that someone is a danger to themselves.

There are obvious cases of someone recording an attempt or making their intent clear in a post; in many other cases, however, risk can only be inferred. In the article above about Facebook, Dr. John Torous warns of “practicing black box medicine.” I would agree that having an algorithm make a decision without informing users does not demonstrate consent. For tech companies interested in tackling suicidal ideation in real time, those decisions shouldn’t be hidden inside a black box. For an issue like suicide, the user interface should attempt to mimic face-to-face contact. That intervention should be done in partnership with the user, local authorities, and crisis services. This infrastructure is not easy to build, but organizations like the National Suicide Prevention Lifeline and Crisis Text Line want to partner with you. You can find out about partnerships with Crisis Text Line here and learn more about the National Suicide Prevention Lifeline’s network.

Data Governance/Privacy

The next concern is what happens once the data is collected. Keep in mind that these companies hold a large amount of our data but are not healthcare companies. Even so, how should this data be governed, and how is individuals’ privacy protected?

This was an interesting way of framing the question. Should tech companies scanning for our risky behaviors be held to the same standard as “medicine”? If they are going to provide symptom education and “intervention,” should they be held to the standards of health privacy laws such as HIPAA? If not those standards, how can “big tech” best protect privacy? How long should information be stored? Can data be de-identified afterward so that companies can still “learn” from it?
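To make those questions concrete, here is a minimal sketch of one possible approach: salted hashing for pseudonymous IDs plus a fixed retention window. The retention period, the salt handling, and whether any of this would satisfy a standard like HIPAA are exactly the open questions, not settled answers:

```python
# A de-identification and retention sketch. The retention window and salt
# are illustrative; a real system would use managed secrets and a policy
# reviewed by privacy and legal experts.

import hashlib
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_DAYS = 90             # illustrative retention window
SALT = b"rotate-me-regularly"   # in practice, a managed secret, not a constant


def pseudonymize(user_id: str) -> str:
    """Replace the raw user ID with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()


def prepare_for_research(record: dict) -> Optional[dict]:
    """Strip direct identifiers; drop records past the retention window."""
    created = datetime.fromisoformat(record["created_at"])  # ISO-8601 with offset
    if datetime.now(timezone.utc) - created > timedelta(days=RETENTION_DAYS):
        return None  # too old: delete rather than keep indefinitely
    return {
        "user": pseudonymize(record["user_id"]),
        "risk_score": record["risk_score"],
        "created_at": record["created_at"],
        # Free-text content is deliberately excluded: the text itself can re-identify.
    }
```

Even then, de-identification is not anonymization, which is why the transparency argued for below still applies to whatever companies keep and “learn” from.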

These critical questions are central to the debate. There are no easy answers, but again there is the concern of these decisions being made in a “black box.” Companies dealing with large amounts of data about your health should be transparent about how they are using it. Many argue that individuals should be compensated for their data if companies are going to “learn” from it.

Where Do We Go From Here

Large technology companies have an opportunity in that they hold immense amounts of data. From a public health perspective, they can make an impact on suicide and other health issues. With this opportunity comes a responsibility to users to protect their rights and privacy. Being in the unique position to intervene in suicide risk in real time is critical work.

Tech teams need to work with practitioners to determine how this real-time intervention differs from face-to-face intervention. More importantly, they should ask users how they would want this experience to look, and ask challenging questions about how best to serve the public while holding personal health data. I hope that technology companies continue to ask these challenging questions, and that they share the answers with users and society at large.

This was originally posted on my blog Stuck On Social Work.
