Nasty LoL Players Make It Into Article from Highest Impact Scientific Journal

BJ@LW&WW

http://www.nature.com/news/can-a-video-game-company-tame-toxic-behaviour-1.19647

It took less than a minute of playing League of Legends for a homophobic slur to pop up on my screen. Actually, I hadn't even started playing. It was my first attempt to join what many agree to be the world's leading online game, and I was slow to pick a character.

“Choose FA GO TT.”


League of Legends has 67 million players and grossed an estimated US$1.25 billion in revenue last year. But it also has a reputation for toxic in-game behaviour, which its parent company, Riot Games in Los Angeles, California, sees as an obstacle to attracting and retaining players. So the company has hired a team of researchers to study the social — and antisocial — interactions between its users.


Common wisdom holds that the bulk of the cruelty on the Internet comes from a sliver of its inhabitants — the trolls. Indeed, the team, led by Jeffrey Lin, a cognitive neuroscientist who heads Riot's player-behaviour research, found that only about 1% of players were consistently toxic. But it turned out that these trolls produced only about 5% of the toxicity in League of Legends. “The vast majority was from the average person just having a bad day,” says Lin.


Lin borrowed a concept from classical psychology. In late 2012, he initiated a massive test of priming, the idea that imagery or messages presented just before an activity can nudge behaviours in one direction or another.
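To get a feel for what a test like that involves: the article doesn't describe Riot's actual setup, so the message texts, arm names and outcome measure below are all hypothetical. The sketch just shows the basic shape of the experiment, with each player deterministically randomised into a pre-game message arm or a control, and abuse-report rates compared across arms.

```python
import random
from collections import defaultdict

# Hypothetical priming experiment: every player is randomised into a
# pre-game message arm or a no-message control, then abuse-report rates
# are compared across arms. Message texts are illustrative, not Riot's.
ARMS = {
    "control": None,
    "positive_prime": "Players who cooperate with their team win more often.",
    "negative_prime": "Teammates perform worse after being harassed.",
}

def assign_arm(player_id: int) -> str:
    """Deterministic assignment: a player always lands in the same arm."""
    return random.Random(player_id).choice(sorted(ARMS))

def report_rates(outcomes: dict) -> dict:
    """Fraction of games in each arm that ended in an abuse report."""
    return {arm: sum(flags) / len(flags) for arm, flags in outcomes.items()}

# Toy data: (player_id, was_this_game_reported) pairs.
outcomes = defaultdict(list)
for pid, reported in [(1, True), (2, False), (3, False), (4, True), (5, False)]:
    outcomes[assign_arm(pid)].append(reported)

print(report_rates(outcomes))
```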

[Image: nature-riot-games-31-03-16-online.png, infographic from the Nature article]



Riot introduced the Tribunal, which gives players a chance to serve as judge and jury to their peers. In it, volunteers review chat logs from a player who has been reported for bad behaviour, and then vote on whether the offender deserves punishment. The Tribunal, which started in 2011, gave players a greater sense of control over establishing community norms, says Lin. And it revealed some of the things that triggered the most rebukes: homophobic and racial slurs.
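The article doesn't spell out the Tribunal's decision rule, so the two-thirds majority and minimum vote count below are assumptions. This is just a minimal sketch of the review-and-vote flow it describes:

```python
from dataclasses import dataclass, field

# Assumed decision rule: a case needs at least MIN_VOTES reviews, and a
# two-thirds "punish" majority convicts. The article gives neither number.
MIN_VOTES = 5
PUNISH_THRESHOLD = 2 / 3

@dataclass
class TribunalCase:
    reported_player: str
    chat_log: list          # lines shown to volunteer reviewers
    votes: list = field(default_factory=list)  # True = punish

    def cast_vote(self, punish: bool) -> None:
        self.votes.append(punish)

    def verdict(self) -> str:
        if len(self.votes) < MIN_VOTES:
            return "pending"                    # not enough reviewers yet
        share = sum(self.votes) / len(self.votes)
        return "punish" if share >= PUNISH_THRESHOLD else "pardon"

case = TribunalCase("player123", ["gg ez", "uninstall the game"])
for vote in (True, True, True, False, True):
    case.cast_vote(vote)
print(case.verdict())   # "punish": 4/5 clears the assumed threshold
```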


When players were informed only of what kind of behaviour had landed them in trouble, 50% did not misbehave in a way that would warrant another punishment over the next three months. When they were sent reform cards that included the judgements from the Tribunal and that detailed the chats and actions that had resulted in the ban, the reform rate went up to 70%.


But the process was slow; reform cards might not show up until two weeks to a month after an offence. “If you look at any classic literature on reinforcement learning, the timing of feedback is super critical,” says Lin. So he and his team used the copious data they were collecting to train a computer to do the work much more quickly. “We let loose machine learning,” Lin says. The automated system could provide nearly instantaneous feedback; and when abuse reports arrived within 5–10 minutes of an offence, the reform rate climbed to 92%.
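The article doesn't say what model Riot trained, so the sketch below is a generic stand-in for the pattern it describes: fit a text classifier on chat lines labelled by past Tribunal verdicts, then score new messages in milliseconds, fast enough for that 5–10-minute feedback loop. The data and the scikit-learn pipeline are illustrative assumptions, not Riot's actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for Tribunal-labelled chat data: 1 = judged toxic, 0 = fine.
# A real training set would be millions of lines, not four.
chats = [
    "uninstall the game you are worthless",
    "report this idiot, total garbage player",
    "nice ward, that saved us",
    "good game everyone, well played",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus logistic regression: deliberately simple,
# standing in for whatever model Riot's team actually trained.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chats, labels)

# Scoring a new chat line takes milliseconds, which is what makes
# near-instant feedback after an abuse report feasible.
print(model.predict_proba(["you are garbage, uninstall"])[:, 1])
```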
 
It's true. That game is full of rude people. I've only found one comeback that silences them when they start with the attacks: it goes something along the lines of their mom wishing she'd had an abortion.
 