Instagram is protecting its users with a couple of brand-new features that are currently being rolled out.
The first is a warning message that pops up whenever someone tries to post a comment that may be considered abusive, or even hate speech. The initiative is meant to combat bullying by making users think twice before posting offensive or cruel remarks. In early testing, the approach has proven effective, giving users a chance to reflect and withdraw an insensitive comment before it goes live.
In short, Instagram now asks: "Are you sure you want to post this?"
According to the Head of Instagram, Adam Mosseri, the AI-powered warning feature has already prevented some nasty comments, even during the early testing period:
"In the last few days, we started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it's posted. This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification."
In addition, Instagram users now have a "Restrict" function that limits another user's interactions with their account. Restricted users can still comment, but their comments are visible only to themselves. The restricting user can review these comments and, on a case-by-case basis, choose to make them visible to others.
All of this is part of an effort to keep Instagram a supportive place on social media.
Source: The Next Web via youredm