Twitter changes policy to let families remove images of the deceased
Twitter will begin removing photos and video of certain deceased people following the requests of family members and "other authorized individuals," the micro-blogging platform announced Tuesday.
"In order to respect the wishes of loved ones, Twitter will remove imagery of deceased individuals in certain circumstances," Twitter spokesman Nu Wexler said in a tweet.
To request the removal of media of a loved one "from when critical injury occurs to the moments before or after death," individuals must submit a request to email@example.com. Twitter says it will then review the request to decide whether to honor the family's wishes or whether the images are newsworthy enough that they should remain on Twitter to serve the public interest. Mr. Wexler adds that Twitter "may not be able to honor every request."
This announcement marks a shift for Twitter, which has traditionally taken a hands-off approach to removing content. According to its rules and terms of service, Twitter "does not screen content and does not remove potentially offensive content" unless it violates the platform's stated rules for acceptable content, which prohibit misleading impersonation, trademark violations, publishing someone else's private or confidential information, and threats of violence against others. Nor does Twitter allow the posting of "obscene or pornographic images."
All of the above, in addition to Twitter's other rules for compliance with its service, are grounds for suspending accounts.
For example, in the current conflict between Israel and Gaza, several accounts used by the Palestinian militant group Hamas were suspended by Twitter – Hamas Twitter accounts were also suspended in the 2012 fighting between Israel and Gaza.
And on Tuesday, video of the American journalist James Foley, who was purportedly beheaded by the extremist group the Islamic State, circulated on Twitter and was uploaded to YouTube. The content was subsequently removed from both platforms. The FBI is now reviewing the video, Global Post reports.
Twitter also suspended accounts deemed responsible for abuse after Robin Williams' daughter, Zelda Williams, publicly quit the platform, saying she had received hurtful and distorted images of her father in the wake of his suicide.
In a statement responding to Ms. Williams' decision to leave Twitter and other social media for "a good long time," Twitter said it would take a closer look at the way it handles forms of abuse that play out on its service.
"We will not tolerate abuse of this nature on Twitter," said Del Harvey, Twitter's head of trust and safety, last week in a statement. "We have suspended a number of accounts related to this issue for violating our rules and we are in the process of evaluating how we can further improve our policies to better handle tragic situations like this one. This includes expanding our policies regarding self-harm and private information, and improving support for family members of deceased users."
Facebook, too, has rules that let relatives request the removal of a deceased family member's content. To process these requests, Facebook requires that "verified immediate family members" submit special requests for the removal of a loved one's account, accompanied by documentation such as the deceased individual's birth certificate, death certificate, or legal proof that the requester is the representative of the deceased.
Twitter has not yet stated the exact kinds of verification it will require from family members in order to process media removal requests.
This new policy follows the European Union's so-called "right to be forgotten" ruling, which requires search engines such as Google and Microsoft's Bing to remove people's information from search results when individuals feel that the information infringes on their right to privacy. The ruling, which applies only in Europe, has raised questions about whether an individual's request for a degree of anonymity on the Internet infringes on the public's right to know. Google says it weighs privacy rights against the public interest. But it has already encountered roadblocks in its removal practices, saying it has received removal requests based on false or inaccurate information.