One approach would apply machine learning algorithms to look for patterns in the language of terrorist propaganda so it can be identified and removed more quickly.
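Facebook has not published its models, so as a rough illustration only, the kind of language-pattern matching described above can be sketched as a tiny bag-of-words classifier. Everything here (the training snippets, the labels, the function names) is an invented placeholder, not Facebook's actual system:

```python
import math
from collections import Counter

# Toy training data: (text, label) pairs. Purely invented placeholders --
# the real training data and features are not public.
TRAIN = [
    ("join our cause fight the enemy", "flagged"),
    ("attack the unbelievers now", "flagged"),
    ("family picnic photos this weekend", "ok"),
    ("great recipe for chocolate cake", "ok"),
]

def train(examples):
    """Count word frequencies per label (a bare-bones naive Bayes)."""
    counts = {"flagged": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def score(text, counts, totals, label):
    """Log-likelihood of the text under one label, with add-one smoothing."""
    vocab = len(set(w for c in counts.values() for w in c))
    return sum(
        math.log((counts[label][w] + 1) / (totals[label] + vocab))
        for w in text.split()
    )

def classify(text, counts, totals):
    """Pick whichever label assigns the text a higher likelihood."""
    return max(("flagged", "ok"),
               key=lambda lbl: score(text, counts, totals, lbl))

counts, totals = train(TRAIN)
print(classify("fight the enemy now", counts, totals))  # -> flagged
print(classify("picnic cake recipe", counts, totals))   # -> ok
```

A production system would use far richer features and models, but the core idea is the same: learn which word patterns correlate with already-removed propaganda, then score new posts against them.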
Facebook's blog post on Thursday was the first in a planned series of announcements addressing "hard questions" facing the company, Elliot Schrage, vice president for public policy and communications, said in a statement. The post acknowledged that "in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online". "This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods, too", Bickert and Fishman concede.
Facebook believes it can do better at finding and stopping terrorists from sharing content on its platform by using technology, particularly artificial intelligence.
Facebook says that it sends all ambiguous cases to human reviewers, and is hiring large numbers of new content moderators to handle the volume.
Facebook Inc on Thursday offered additional insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.
After a terrorist attack at London Bridge earlier this month, Theresa May, the British prime minister, accused technology companies of fostering a breeding ground for terrorists and called for tough internet regulation.
The company is working on using artificial intelligence to try to prevent users who have had one account removed for posting terrorist content from creating new accounts with different identities, according to the post.
Right now, the social giant is focusing on terrorist groups based in the Middle East like ISIS and Al Qaeda, but eventually it hopes these tools will be effective counterterrorism measures against any similar organization. It has also hired more than 150 counterterrorism specialists, including academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.
"Encryption technology has many legitimate uses, from protecting our online banking to keeping our photos safe".
Facebook also uses these tools on its other platforms, like Instagram and WhatsApp, though it stressed that it does not have the ability to read encrypted messages.
But the company included new information about its human efforts as well.
"We want to find terrorist content immediately, before people in our community have seen it", Bickert and Fishman write.
"Crucially, our campaign will also include exploring creating a legal liability for tech companies if they fail to take the necessary action to remove unacceptable content", May said at a joint news conference.
Facebook has partnered with Microsoft, YouTube and Twitter to develop a shared industry database of "hashes", or digital fingerprints, of terrorist content.
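The shared-database idea can be sketched in a few lines. This is a simplified illustration using an exact-match SHA-256 digest; the industry database reportedly relies on more robust perceptual hashes that survive re-encoding and cropping, and all the byte strings below are invented placeholders:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest that identifies this exact sequence of bytes."""
    return hashlib.sha256(content).hexdigest()

# Shared database: fingerprints of content already removed by any
# partner company (placeholder bytes stand in for real media files).
shared_hashes = {fingerprint(b"<bytes of a known propaganda video>")}

def is_known_terror_content(upload: bytes) -> bool:
    """Check an upload against the shared database before it spreads."""
    return fingerprint(upload) in shared_hashes

print(is_known_terror_content(b"<bytes of a known propaganda video>"))  # -> True
print(is_known_terror_content(b"<bytes of a vacation video>"))          # -> False
```

The benefit of pooling hashes rather than the content itself is that each company can block a file the moment any partner has flagged it, without ever exchanging the offending material.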
In the post, Bickert and Fishman admit "AI can't catch everything".
The company's use of AI will automatically detect extremist propaganda, its characteristic language, and clusters of terrorist accounts.