Facebook says it is hiring 3,000 additional workers to block live streams of violence and respond more swiftly to reports of inappropriate material.
It will bring the number of human monitors for the social-media site to 7,500.
That follows complaints that the automated software it is using cannot sufficiently monitor offensive posts – including pornography and violence.
In one of the worst examples, a man in Thailand broadcast live video of himself killing his baby daughter; the footage was viewed 370,000 times over more than a day before Facebook removed it.
Chief Executive Mark Zuckerberg said: “We’re working to make these videos easier to report so we can take the right action sooner – whether that’s responding quickly when someone needs help or taking a post down.”
In March, the company said it planned to use artificial intelligence to help spot users with suicidal tendencies and connect them with assistance, as well as to flag potentially offensive material.
Not there yet technologically
Facebook receives millions of reports from users each week, and like other large Silicon Valley companies, it relies on its thousands of human monitors to review the reports.
“Despite industry claims to the contrary, I don’t know of any computational mechanism that can adequately, accurately, 100 percent do this work in lieu of humans. We’re just not there yet technologically,” said Sarah Roberts, a professor of information studies at UCLA who looks at content monitoring.
The workers who monitor this material generally work on contract in places such as India and the Philippines, and they face difficult working conditions because of the hours they spend making quick decisions while sifting through traumatic material, Roberts said.