Facebook says its researchers are developing new technology they hope will strengthen the platform's AI-driven ability to detect harassment. In a Web-Enabled Simulation (WES), hundreds of bots programmed to mimic negative human behavior are set loose in a research environment, and Facebook engineers study the results to work out the best countermeasures.
WES has three main components, Facebook researcher Mark Harman explained in an interview. First, it uses machine learning to train bots to mimic real human behavior on Facebook. Second, WES can automate bot interactions at large scale, from thousands up to millions of bots. Finally, WES deploys the bots on Facebook's actual production codebase.
That deployment lets the bots interact with one another and with real content on Facebook while keeping them isolated from real users. In WW, the testing environment built on this approach, bots take actions such as trying to buy or sell prohibited items like drugs and guns. These tests have the bots use Facebook much as an average person would, running searches and visiting people's pages.
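To make the isolation idea concrete, here is a minimal Python sketch of a WW-style sandbox. Everything in it, from the class names to the action strings, is invented for illustration; Facebook has not published WW's actual code.

```python
# Hypothetical sketch of a WW-style sandbox: simulated bad actors only,
# kept strictly separate from real users.
import random

PROHIBITED_ITEMS = ["drugs", "guns"]

class SandboxGraph:
    """Holds only simulated users; real accounts never enter the sandbox."""
    def __init__(self):
        self.users = []

    def add_user(self, user):
        # Enforce the isolation property described in the article.
        assert user.is_simulated, "real users must stay out of the sandbox"
        self.users.append(user)

class SimulatedUser:
    is_simulated = True

    def __init__(self, name):
        self.name = name
        self.action_log = []

    def act(self):
        # Mimic bad-actor behavior: search for, or offer to sell,
        # prohibited goods, much as a real rule-breaker might.
        item = random.choice(PROHIBITED_ITEMS)
        action = random.choice([f"search:{item}", f"offer_sale:{item}"])
        self.action_log.append(action)
        return action

graph = SandboxGraph()
for i in range(100):  # WES aims to scale this to thousands or millions
    graph.add_user(SimulatedUser(f"bot-{i}"))

for bot in graph.users:
    bot.act()
```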
Engineers can then test whether the bots can maneuver around those safeguards and violate Community Standards, according to the statement. The plan is for engineers to find patterns in the outcomes of these tests and use that information to make it harder for real users to violate Community Standards.
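A hypothetical version of that measurement loop might look like the sketch below; the safeguard rule and bot behavior here are toy stand-ins, not Facebook's real enforcement logic.

```python
# Invented measurement loop: run simulated bad actors against a candidate
# safeguard and count how often prohibited actions slip through.
import random

def bot_action() -> str:
    """Simulated bad actor: randomly search for or offer prohibited goods."""
    item = random.choice(["drugs", "guns"])
    return random.choice([f"search:{item}", f"offer_sale:{item}"])

def safeguard_blocks(action: str) -> bool:
    """Toy safeguard: block only explicit sale offers of prohibited items."""
    return action.startswith("offer_sale:")

def evaluate_safeguard(num_bots: int = 1000, steps: int = 50) -> float:
    attempts, evasions = 0, 0
    for _ in range(steps):
        for _ in range(num_bots):
            action = bot_action()
            attempts += 1
            if not safeguard_blocks(action):
                # The action got past the safeguard. Engineers would mine
                # these misses for patterns and tighten enforcement before
                # real users find the same gaps.
                evasions += 1
    return evasions / attempts

print(f"fraction of actions evading the toy safeguard: "
      f"{evaluate_safeguard():.2%}")
```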
Facebook has long sought ways to prevent harassment, misinformation, and other abuses of its platform. At a conference in 2018, Facebook's Chief Technology Officer said the company was investing heavily in artificial intelligence research and in ways to make it work at large scale with little to no human supervision. WES appears to be evidence of that effort.