Over the past few years, Facebook has been trying to answer a difficult question: How do you stop terrorists from spreading their hate online?

The social network, which now has more than 1.9 billion users worldwide, has frequently been challenged to stem the flow of terrorist content and messaging in recent times, and it's implemented numerous approaches to address the issue, with varying degrees of success.

Back in 2015, it took down the profile of one of the attackers involved in the San Bernardino shooting as it contained pro-ISIS content. It also said that it restricted the accounts of some pro-Western Ukrainians after they were accused of hate speech that year. These efforts were driven by Facebook's own content monitoring mechanisms manned by humans, as well as reports from users.

That wasn't enough to stop the families of three victims of the San Bernardino attack from suing Facebook last month for enabling the terrorists to spread their propaganda and put their loved ones at risk. And in April, the social network came under fire for failing to respond to reports of content depicting horrific acts of terror, filed by a journalist from The Times who'd set up a fake profile to test the company's review mechanism.

Admittedly, while there's a responsibility to police content and prevent the spread of hateful messaging, Facebook also has to tread carefully so as not to stifle users' freedom of speech, become a target for governments who want to censor social media, or invade people's privacy in this effort.

In February 2016, The Wall Street Journal noted that Facebook had "assembled a team focused on terrorist content and is helping promote 'counter speech,' or posts that aim to discredit militant groups like Islamic State."

Is there a better, faster way of addressing this? Facebook believes that AI can help. For starters, it's now begun using automated systems to identify photos and videos of terrorists by matching uploaded media against its database of flagged content, and preventing them from spreading across its network.
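Facebook hasn't published how this matching works under the hood, but the general idea of comparing an upload against fingerprints of known flagged media can be sketched roughly as below. This is a minimal illustration using a simple "average hash" built on Pillow; the file names, threshold, and hashing scheme are all assumptions, not Facebook's actual system.

```python
# Minimal sketch of hash-based media matching (illustrative only; Facebook's
# real system is not public). Requires Pillow.
from PIL import Image

def average_hash(path, size=8):
    """Reduce an image to a 64-bit fingerprint: grayscale, shrink, threshold."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = "".join("1" if p > avg else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(h1, h2):
    """Count how many bits differ between two fingerprints."""
    return bin(h1 ^ h2).count("1")

# Hypothetical database of hashes of previously flagged images.
flagged_hashes = {average_hash("known_propaganda.jpg")}

def is_flagged(upload_path, threshold=5):
    """Treat an upload as a near-duplicate of flagged content if any known
    fingerprint is within a few bits of it."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, known) <= threshold for known in flagged_hashes)
```

A fingerprint-based check like this catches re-uploads and lightly edited copies without a human having to review the same image twice.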

The company says it's also attempting to analyze text posts to see if there are messages praising or supporting terrorist groups, so it can take further action. This is still in the works, and Facebook hopes its algorithms will become more effective as they encounter more data.
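Facebook hasn't detailed its models, but a text classifier of this kind typically looks something like the sketch below: a bag-of-words model trained on labeled posts, with confident predictions routed to human reviewers. The training examples, threshold, and pipeline here are purely illustrative assumptions.

```python
# Rough sketch of classifying posts that praise or support terrorist groups
# (illustrative only; not Facebook's actual model). Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = praises/supports a terrorist group, 0 = benign.
posts = [
    "join the glorious fight of the caliphate",
    "we stand with the fighters of the islamic state",
    "great recipe for banana bread",
    "had a lovely day at the beach with the family",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(posts, labels)

def needs_review(post, threshold=0.8):
    """Flag a post for human review when the model is confident it supports terrorism."""
    prob = classifier.predict_proba([post])[0][1]
    return prob >= threshold
```

The point Facebook makes about data fits this setup: the more labeled examples a model like this sees, the better it gets at separating genuine support for terrorist groups from news reporting or condemnation that uses the same vocabulary.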

It's also working on ways to identify material related to posts and groups that support terrorism, so as to sniff out clusters of sympathizers. Plus, it's trying to identify fake accounts created by people who've been booted off the platform, so it can stop them in their tracks even if they go by a different name.
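Facebook hasn't said how this clustering works. One common way to think about it is as a graph problem: treat accounts and the pages or groups they interact with as nodes, and walk outward from anything already flagged. The edges and flagged set in this toy sketch are hypothetical.

```python
# Toy sketch of finding clusters of accounts around flagged pages
# (illustrative; Facebook's actual graph analysis is not public).
from collections import defaultdict

# Hypothetical interaction edges: (account, page/group it engages with).
edges = [
    ("user_a", "page_x"), ("user_b", "page_x"),
    ("user_b", "group_y"), ("user_c", "group_y"),
    ("user_d", "page_z"),
]
flagged = {"page_x"}  # nodes already identified as supporting terrorism

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def cluster_around(node):
    """Walk the graph to collect everything connected to a flagged node."""
    seen, queue = {node}, [node]
    while queue:
        for neighbour in graph[queue.pop()]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

suspect_cluster = set().union(*(cluster_around(f) for f in flagged))
# Picks up user_a, user_b, group_y and user_c, but not the unrelated user_d/page_z.
```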

These measures are supported by a team of more than 150 experts focused solely on counterterrorism efforts.

That seems like a good start, but there's clearly a lot more that can be done to quell the rise of terrorism. Last January, a number of top executives from Silicon Valley heavyweights like Apple, Google, Twitter and, of course, Facebook met with senior officials from the White House and US intelligence agencies to look at how they could collaborate to fight this battle together.

The company also partnered with Microsoft, YouTube and Twitter to build a shared database of hashes to accurately and efficiently identify content featuring terrorist imagery on their platforms.
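The partners haven't published the database's format or hashing scheme, but at its simplest, sharing hashes means each platform contributes fingerprints of media it has already removed and checks new uploads against the combined set. The sketch below uses exact SHA-256 file hashes purely for illustration; the real database may rely on perceptual hashes instead.

```python
# Toy sketch of a cross-platform shared hash database (illustrative only;
# the industry database's actual format and hashing scheme are not public).
import hashlib

def file_hash(path):
    """Exact SHA-256 fingerprint of a media file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Each platform contributes hashes of terrorist imagery it has already removed.
shared_database = set()

def contribute(path):
    shared_database.add(file_hash(path))

def already_known(path):
    """Any participating platform can check a new upload against the shared set."""
    return file_hash(path) in shared_database
```

The benefit of sharing only hashes is that platforms can recognize imagery another company has already taken down without ever exchanging the media itself.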
