For the ongoing series, Code Word, we’re exploring if — and how — technology can protect individuals against sexual assault and harassment, and how it can help and support survivors.

It’s almost impossible to name an industry that doesn’t rely on email or other workplace messaging platforms like Slack. While instant, online messaging has undoubtedly revolutionized the workplace — establishing itself as a collaborative tool that encourages openness within teams regardless of company size — it’s also created a new, easier way for perpetrators to discreetly harass coworkers during work hours.

While the responsibility ultimately falls on HR departments to resolve any harassment claims raised at work, Slack takes no responsibility for harassment that takes place on the platform. Currently, there’s no way to mute, hide, or block users on the platform, and the company’s “Acceptable Use Policy” defines what is and isn’t allowed — failing to mention its stance on harassment or unwelcome, inappropriate sexual comments.

To help prevent workplace harassment, programmers from the Chicago-based AI firm NexLP have developed #MeTooBots. The tool, currently used by 50 corporate clients, automatically monitors and flags cyberbullying and sexual harassment in conversations between colleagues on online chat platforms and in shared work documents.

As first reported by The Guardian, the creators of #MeTooBots face a number of challenges in effectively tackling sexual harassment, since it comes in many forms, many of which are subtle and context-dependent. Jay Leib, the chief executive of NexLP, told The Guardian: “I wasn’t aware of all the forms of harassment. I thought it was just talking dirty. It comes in so many different ways. It might be 15 messages … it could be racy photos.”

The tool uses an algorithm trained to identify sexual harassment; once a message is flagged, the offending comment or conversation is sent to the company’s HR manager. Exactly what the bot considers non-consensual sexual comments is unknown, but Leib told The Guardian that the tool searches for “anomalies in the language, frequency, or timing of communication patterns across weeks, while constantly learning how to spot harassment.”
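NexLP has not published its model, so any concrete details are guesswork. Still, the general flow Leib describes — scan messages, flag ones whose content or frequency looks anomalous, escalate to HR — can be sketched in a few lines of Python. The keyword list and the frequency threshold below are entirely hypothetical placeholders, not NexLP’s actual heuristics:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical heuristics for illustration only; NexLP's real system
# reportedly learns patterns rather than matching a fixed word list.
FLAGGED_TERMS = {"racy", "talking dirty"}
FREQUENCY_THRESHOLD = 10  # assumed cap on messages per sender/recipient pair


@dataclass
class Message:
    sender: str
    recipient: str
    text: str


def flag_for_hr_review(messages):
    """Return messages matching crude content or frequency heuristics."""
    flagged = []
    # Count how often each sender messages each recipient in this batch.
    pair_counts = Counter((m.sender, m.recipient) for m in messages)
    for m in messages:
        content_hit = any(term in m.text.lower() for term in FLAGGED_TERMS)
        frequency_hit = pair_counts[(m.sender, m.recipient)] > FREQUENCY_THRESHOLD
        if content_hit or frequency_hit:
            flagged.append(m)  # in a real system: route to the HR manager
    return flagged
```

A production tool would replace both heuristics with a trained classifier and longitudinal statistics, but the escalation pattern — machine flags, human reviews — is the same.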

While a bot that acts as a second witness to potential sexual harassment could deter this kind of behavior, it doesn’t address the source of the issue — workplace culture — and comes with its own loopholes. For example, offenders may learn how to trick the software, or turn to other chat platforms that aren’t currently monitored by harassment-detecting bots.

#MeTooBots may be another example of how we rely too heavily on AI to fix deeply embedded human and societal issues — not to mention the implications of how the data that’s collected is protected and distributed.

While it’s encouraging to see technology take on a role that could potentially prevent harassment in the workplace — something that 37 percent of women in tech in the US reported experiencing last year, not to mention the unreported cases — we can’t use AI as a quick fix for a cultural issue that women have been subjected to for decades.
