Machine learning keeps reaching into every corner of technology, and FACEIT is now applying it to a familiar problem: toxicity in game chat. The company has partnered with Google Cloud and Jigsaw (formerly Google Ideas) to develop a specialized AI that bans toxic players. The system is already live and has banned over 20,000 CS:GO players.
FACEIT's AI is called Minerva, and “after months of testing to minimize false positives,” it officially went live on FACEIT at the end of August. Since then, the AI has issued 90,000 warnings and banned 20,000 players for chat abuse and spam, all “without outside interference.”
“If a chat message contains content considered toxic, Minerva issues a warning about the behavior, while repeated messages in chat are flagged as spam,” FACEIT explained on its blog. “Minerva makes a decision just seconds after the match is over: if abuse is detected, it either sends a warning or bans the offender.”
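FACEIT has not published Minerva's internals, so the following is only an illustrative sketch of the post-match flow described above: score each chat message for toxicity, flag repeated messages as spam, and decide between a warning and a ban once the match ends. The thresholds, function names, and escalation rule here are all assumptions, not FACEIT's actual logic.

```python
# Hypothetical sketch of Minerva-style post-match moderation.
# All thresholds and names below are illustrative assumptions.
from collections import Counter

TOXICITY_THRESHOLD = 0.8   # assumed score above which a message counts as toxic
SPAM_REPEAT_LIMIT = 3      # assumed repeats before a message counts as spam
WARNINGS_BEFORE_BAN = 2    # assumed prior warnings that escalate to a ban


def review_match(messages, toxicity_score, prior_warnings):
    """Decide 'ban', 'warn', or 'none' for one player after a match.

    messages: chat strings the player sent during the match.
    toxicity_score: callable mapping a message to a 0..1 score
                    (in reality a trained classifier; stubbed here).
    prior_warnings: warnings already on the player's record.
    """
    toxic = [m for m in messages if toxicity_score(m) >= TOXICITY_THRESHOLD]
    repeats = Counter(messages)
    spam = [m for m, n in repeats.items() if n >= SPAM_REPEAT_LIMIT]

    if not toxic and not spam:
        return "none"
    # Repeat offenders are banned; first offenses draw a warning.
    if prior_warnings >= WARNINGS_BEFORE_BAN:
        return "ban"
    return "warn"
```

Running the check only after the match, as the quote describes, lets the system weigh the whole chat log at once instead of reacting to single messages mid-game.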
FACEIT reports that since Minerva launched, toxic messages have dropped by 20.13%, from 2,280,769 in August to 1,821,732 in September.
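The reported percentage is consistent with the raw monthly counts; a quick check:

```python
# Verify the reported 20.13% drop from FACEIT's monthly message counts.
august = 2_280_769
september = 1_821_732

drop_pct = (august - september) / august * 100
print(f"{drop_pct:.2f}%")  # prints "20.13%"
```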
So the next time you feel like flaming someone, remember that the AI is watching; better yet, respect your fellow players for a cleaner gaming environment.
Source link: FACEIT and Google developed an AI that bans 20,000 CSGO gamers from being toxic in a month – https://emergenceingames.com/