Searching for Mental Illness on Twitter: Data Jackpot or Ethical Minefield?
Just after Christmas, the New York Times published an article on the “Risks in using social media to spot signs of mental distress.” The piece detailed the failed launch of Samaritans Radar, an application developed by the Samaritans, a well-known suicide prevention organization in Britain. The free app alerted users when someone they followed on Twitter wrote a worrisome post; for example, it would detect phrases such as “tired of being alone” or “hate myself.” Soon after the launch, concerns were raised that the app could identify and target people who were already vulnerable. Several experts in the Times article also worried that the app could contribute to stigma, discrimination, and the inaccurate diagnosis or labeling of people who may or may not actually have a mental illness. The app is currently disabled while the developers work with partners to evaluate user concerns and test potential adaptations.
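To make the mechanism concrete, here is a minimal sketch of the kind of phrase matching such an app might perform. The phrase list and function name are my own illustration, not Samaritans Radar's actual implementation:

```python
# Illustrative only: a toy version of keyword-based distress detection.
# The phrase list below is hypothetical, drawn from examples in the article.
WORRISOME_PHRASES = [
    "tired of being alone",
    "hate myself",
]

def is_worrisome(tweet_text: str) -> bool:
    """Return True if the tweet contains any flagged phrase (case-insensitive)."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in WORRISOME_PHRASES)

# The app would alert a follower only when a tweet matched.
print(is_worrisome("So tired of being alone these days"))  # True
print(is_worrisome("Loving this sunny afternoon"))         # False
```

Even this toy version shows why inaccurate labeling was a worry: simple string matching cannot tell a song lyric or a joke from a genuine cry for help.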
While I agree with pulling the app to address privacy concerns, I was very surprised by the following quote that accompanied the NY Times article:
“If someone tweets ‘I’m going to kill myself,’ you can’t just jump in,” said Christophe Giraud-Carrier, a computer scientist at Brigham Young University who studies the role of social media in health surveillance. “There are all these psychological factors that come into play that may push someone over the edge.”
While I acknowledge that suicide intervention involves complex factors, I worry that this statement discourages intervening online (or developing applications that could facilitate such intervention). A lot of suicide prevention work has focused on training community members (and I would argue this includes your online community) to be active bystanders who step in when they see someone at risk. For example, gatekeeper trainings are popular strategies that help participants develop the knowledge, attitudes, and skills to identify people at risk for suicide, determine levels of risk, and make referrals when necessary. The National Suicide Prevention Lifeline provides guidance for helping online when someone might be suicidal, and it also links to the safety teams at each social media site, including Twitter.
From my review of the research and relevant articles, there seems to be an emerging line between using Twitter to gather anonymous, aggregate data on mental illness and using it to identify and intervene with individual users. For example, researchers at Johns Hopkins University have had a very positive response to their work using Twitter to collect new data on post-traumatic stress disorder, depression, bipolar disorder, and seasonal affective disorder. The scholars emphasize that their findings do not disclose the names of people who publicly tweeted about their disorders; their goal is to share timely prevalence data with treatment providers and public health officials.
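To illustrate the aggregate side of that line, here is a minimal sketch of how de-identified prevalence counting might work. The regex, sample data, and function are my own illustration, not the Johns Hopkins team's actual methodology:

```python
# Illustrative only: count self-reported disclosures per week and disorder
# without ever storing who said them.
import re
from collections import Counter

DISCLOSURE = re.compile(
    r"\bdiagnosed with (depression|ptsd|bipolar disorder)\b", re.IGNORECASE
)

def aggregate_disclosures(tweets):
    """tweets: iterable of (iso_week, text) pairs, already stripped of usernames.
    Returns aggregate counts keyed by (week, disorder)."""
    counts = Counter()
    for week, text in tweets:
        match = DISCLOSURE.search(text)
        if match:
            counts[(week, match.group(1).lower())] += 1
    return counts

sample = [
    ("2015-W02", "I was diagnosed with depression last year"),
    ("2015-W02", "Just got diagnosed with PTSD"),
    ("2015-W03", "diagnosed with depression again..."),
]
print(aggregate_disclosures(sample))
```

The design point is that only the counts leave the pipeline: the output can inform treatment providers and public health officials without identifying any individual user.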
Tell Me What You Think:
- Do these two stories represent the boundaries of using Twitter to search for warning signs or symptoms of mental illness? In other words, is using Twitter to gather anonymous data sets the only way to use it ethically and safely?
- Or is there a way to overcome the privacy concerns so that users are empowered, and even encouraged, to intervene with fellow users when necessary?