For years, Instagram has been on a mission to become the best place online. It’s a quixotic mission for a social media company, especially one whose primary users are teenagers, an age group that has proven particularly adept at making each other miserable. Cyberbullying is difficult to define and even more difficult to measure; even Facebook, the parent company of Instagram, cannot estimate how prevalent the behavior is on its platform, or if it has worsened as Instagram replaces school cafeterias and shopping malls as the primary place where teens interact. Still, that hasn’t stopped Instagram from rolling out one feature after another to mitigate bad behavior, in its effort to clean up cyberbullying for good.
On Tuesday, Instagram added two new tools to its repertoire. First, the platform will automatically hide comments that appear to constitute bullying, even if they don't clearly break its rules. Second, it will send a new warning to users whose comments are repeatedly flagged as toxic, in the hope of changing the behavior early on. Both tools will roll out to Instagram users around the world, starting with those who speak English, Portuguese, Spanish, French, Russian, Chinese, and Arabic.
Instagram already hides and proactively removes the most egregious comments. Now it is trying to make the more borderline cases less visible. The new feature uses machine learning to find comments that resemble ones previously reported to Instagram for harassment. (The tool also catches emoji, as in "You look like 💩.") Of course, context matters. The same words can be a playful joke in one conversation and outright meanness in another. Even humans can't always agree on whether something crosses the line: a 2017 study from the Pew Research Center found that Americans were divided on whether behaviors like intentionally insulting or shaming someone counted as "bullying" at all.
Instagram, in this case, has chosen to err on the side of caution. "We're trying to catch as much bullying and harassment as we can," says Carolyn Merrell, global director of policy programs for Instagram, who acknowledged the tension between protecting users and stifling their freedom of expression. While Facebook's rules against bullying and harassment contain a dizzying array of prohibited language, Merrell says that "some comments may not violate our community guidelines, but may still be considered harmful, harassing, or intimidating."
If flagged by the AI, Instagram will now place those comments behind a cover that says "See hidden comments." Anyone can tap the cover to reveal the offending content, and people can still report those comments to Instagram if they violate community guidelines. Instagram will also give you the option to remove the cover from comments on your own posts. So if a rude joke a friend leaves on your photo gets buried behind the comment cover, you can bring it back into the light.