- The Washington Times - Tuesday, May 31, 2016

Facebook CEO Mark Zuckerberg can claim a new milestone for his company: Artificial intelligence now flags more offensive photos than humans.

Hundreds of thousands of posts appear on Facebook every minute, but the task of monitoring them all relies less and less on human eyes. Officials for the social media giant now say its AI is so advanced that it shoulders more of the load than people do in quarantining potential violations of the company's terms of service.

“One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human,” Joaquin Candela, Facebook’s director of engineering for applied machine learning, told TechCrunch on Tuesday.

Mr. Candela was in San Francisco for MIT Technology Review’s EmTech Digital conference. He was joined by Hussein Mehanna, Facebook’s director of core machine learning.

“We share our research openly,” Mr. Mehanna said of the company’s willingness to share data with competitors, the website reported. “We don’t see AI as our secret weapon just to compete with other companies. Advancing AI is something you want to do for the rest of the community and the world because it’s going to touch the lives of many more people.”

TechCrunch’s report comes the same day that Facebook, Google, Microsoft Corp. and Twitter Inc. all vowed to work together to police hate speech in Europe. The plan is to identify and remove language that runs afoul of European Union laws within 24 hours of an alleged infraction.

“With a global community of 1.6 billion people we work hard to balance giving people the power to express themselves whilst ensuring we provide a respectful environment,” said Monika Bickert, head of global policy management at Facebook, in the statement, Bloomberg reported. “There’s no place for hate speech on Facebook.”

Tech Insider noted that while such technology might be necessary given the size of Facebook, the company and its competitors run the risk of creating the AI equivalent of “draconian thought police.”

“Built wrong, or taught with overly conservative rules, AI could censor art and free expression that might be productive or beautiful even if it’s controversial,” the technology website reported.

Facebook sees its AI as an essential bulwark against threatening language, pornography, violent images, and other forms of harmful material.

“I personally believe it’s not a win-lose situation, it’s a win-win situation,” Mr. Mehanna said of open-sourcing Facebook’s AI. “If we improve the state of AI in the world, we will definitely eventually benefit.”
