The feature was first introduced in 2017 and uses artificial intelligence to automatically detect and hide “toxic and divisive” comments aimed at at-risk groups. That protection has now been expanded to cover “attacks on a person’s appearance or character, as well as threats to a person’s wellbeing or health”.
As well as filtering out malicious comments, the social media platform said it will investigate repeat offenders and take action (including banning the offending account) where appropriate.
The filter is enabled by default, but can be adjusted via your account settings.
Research released last year by Ditch the Label, a British anti-bullying group, found that 42 per cent of young people had experienced bullying on Instagram, compared to 37 per cent on sibling network Facebook and 31 per cent on Snapchat.
In a statement issued via Instagram's blog, the social media platform's co-founder Kevin Systrom reiterated the company's zero-tolerance approach to bullying.
"Since Mike [Krieger] and I founded Instagram, it’s been our goal to make it a safe place for self-expression and to foster kindness within the community," he wrote. "This update is just the next step in our mission to deliver on that promise."