Yesterday, Twitter announced it's testing a feature to filter out potentially offensive direct messages. Any DMs in the "Message requests" folder containing abusive or inappropriate content will be automatically moved to a section marked "Additional messages," giving people the option to view the message or permanently delete it.

The safety feature will hide the message's content and replace it with: "This message is hidden because it may contain offensive content."

Alongside this, the micro-blogging site adopted AI technology in April to automatically flag abusive tweets without relying on human intervention.

Because of these features, offensive tweets are now easier to report and take down. But these safety measures have yet to bring about lasting change. Last year, a study by Amnesty International outlined the scale of threats made against women on Twitter. It labeled the social platform "a toxic place" and the "world's biggest dataset of online abuse targeting women."

For women, and other groups that are subjected to online harassment, a lot of abuse comes in the form of direct messages (the feature requires both parties to follow each other, or for the recipient to keep their inbox open to DMs from anyone).

Twitter's recent step to curb the abuse found on its platform is similar to Bumble's recent safety feature, which uses AI to automatically detect and blur "lewd images," giving users the choice to view, block, or report the image to the app's moderators.
