The social media company targeted content across several categories, including graphic violence, terrorist propaganda, and hate speech.
837 million pieces of spam were removed in Q1 2018, nearly all of which were found and flagged by Facebook's systems before anyone reported them.
The company removed, or placed a warning screen in front of, 3.4 million pieces of graphically violent content in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report.
The prevalence of graphic violence was higher, at an estimated 22 to 27 views per 10,000 pieces of content viewed, an increase from the previous quarter that suggests more Facebook users are sharing violent content on the platform, the company said.
On Tuesday, May 15, Guy Rosen, Facebook's Vice President of Product Management, published a blog post on the company's newsroom.
Overall, Facebook estimates that around 3% to 4% of the active Facebook accounts on the site during this time period were still fake.
- Facebook took enforcement action against 21 million posts containing nudity.
"For serious issues like graphic violence and hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams," Rosen wrote.
"We have a lot of work still to do to prevent abuse". As Sheera Frenkel reported, Facebook has been under pressure to remove nudity, violence and hate speech, among other "inflammatory content".
Facebook's new report, which it plans to update twice a year, comes a month after the company published its internal rules for how reviewers decide what content should be removed.
Nearly 86 percent of that content was found by the firm's technology before users reported it. The problem with trying to proactively scour Facebook for hate speech is that the company's AI can only understand so much at the moment.
The report also covers fake accounts, which have drawn more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an attempt to influence the 2016 US elections.
"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", he said. "This is the same data we use to measure our progress internally - and you can now see it to judge our progress for yourselves".
Meanwhile, Facebook said on Monday it has suspended around 200 apps as part of its investigation into whether companies misused personal user data gathered from the social network.