Two ideas for social media platforms to protect users who are receiving targeted abuse.

A journalist for another publication just rang me up to ask what can be done in India to address online abuse against the LGBTQI community. I don't have community-specific solutions, but I do feel there are two product changes social media platforms can make to give users the agency to protect themselves against targeted abuse online:

  1. Safe Mode: Sometimes targeted abuse campaigns last an hour, sometimes a week. A large part of the hurt comes from abuse users are unable to ignore because they're being tagged, or their posts are being commented on. Platforms should offer a safe mode for accounts, which users can enable with a single click, preventing others from tagging them or commenting on any of their updates. It essentially gives them the freedom to keep posting while blocking out the world, so they aren't exposed to hateful conduct.
  2. Targeted abuse reporting: Reporting abuse on platforms is often a struggle for users. On Twitter it now involves multiple tedious steps, as if the platform intends to discourage reporting. Imagine having to report each abusive account one by one, highlighting each abusive post from that account; it's painful for people to re-read such updates. One approach platforms can take is to let users report mass targeting and define the period during which they were being targeted, so that all updates and accounts targeting them in that window can be scrutinised for abuse by the platform.
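The mass-reporting idea above can be sketched in code. This is a minimal, hypothetical illustration, not any platform's real API: it assumes the platform already has a feed of posts that tag or reply to the reporting user, filters that feed to the user-defined window, and groups the results by account so reviewers get one consolidated queue instead of dozens of one-off reports.

```python
from datetime import datetime

# Hypothetical sketch: each event is a (timestamp, author, text) tuple for a
# post that tags or replies to the reporting user. All names are illustrative.
def collect_mass_report(events, window_start, window_end):
    """Group every post targeting the user within the reported window by
    author, producing a single review queue for the platform's moderators."""
    queue = {}
    for ts, author, text in events:
        if window_start <= ts <= window_end:
            queue.setdefault(author, []).append((ts, text))
    return queue

events = [
    (datetime(2021, 3, 1, 10, 0), "troll_a", "abusive reply"),
    (datetime(2021, 3, 1, 10, 5), "troll_b", "abusive mention"),
    (datetime(2021, 2, 28, 9, 0), "friend", "unrelated, outside the window"),
]
report = collect_mass_report(
    events,
    datetime(2021, 3, 1, 9, 0),
    datetime(2021, 3, 1, 11, 0),
)
# Only the two accounts active inside the window end up in the queue.
```

The point of the design is that the user supplies one time range once, and the platform does the enumeration, rather than the user having to surface and re-read each abusive post individually.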

Also, it's important for platforms to have abuse reviewers who understand local languages and local lingo, because abuse is often phrased in ways that someone who doesn't understand the local culture will not be able to comprehend.