Beyond the Banhammer: How AI is Changing Content Moderation

Presented by Cohere


How do you create a more inviting and inclusive gaming community? At this VB On-Demand event, AI/ML experts from Cohere and Google Cloud dive into managing player-generated content at scale to increase retention, promote positivity, and build a fun gaming community.

Watch for free on demand here!


Building and nurturing strong communities is crucial in a crowded games market, says David Wynn, head of solutions consulting at Google Cloud for Games. Last year over 10,000 new games were released on Steam alone, and that record is on track to be broken again this year. As studios and publishers battle for player attention, they’re finding that communities can make the experience stickier and more meaningful to their player base. But that only holds if the community doesn’t fall prey to the toxicity that plagues so many online spaces.

“Building community helps embed the fundamental human aspects of talking with friends, sharing experiences, and building relationships,” says Wynn. “If it works as part of the gaming experience you’re trying to create, then it becomes all the more imperative to make sure you’re designing it right and making sure it’s a good experience for everyone involved.”

The challenges are fundamental ones, built into human interaction in any crowded arena full of diverse experiences across race, gender, class, religion, and more. Layered on top of that is the wide range of differences in how people like to interact, expect to interact, and are incentivized to interact, Wynn says, and all of it combined creates the community around a game or title.

“People will bring their own experiences, perspectives and potential challenges to the community. Even though we’re creating virtual worlds, they still come from here and they bring everything they experience here,” he says. “We can create, through tools and through the knowledge that others have already acquired, experiences to change the way they interact. Multiplicity and scale are two things studios and publishers need to keep in mind, because the world is going to hit us hard. As much as we would like to think that we can build our own islands, people have come from somewhere and they bring it with them.”

What can go wrong is unique to each title: how the community experience is shaped to serve your goals, the complexity of the experience you design, and how invested your players are all have a direct impact on your styles of moderation and intervention. A scowling face can mean a bad day; it can also indicate a broader, more insidious trend, or signal that a new level of moderation is needed.

Add AI to the content moderation mix

Until now, the interventions available when things got toxic have been limited, both in theory and in practice. A moderator or administrator can apply the banhammer if they decide behavior is unacceptable, provided they see it at the right time or it gets reported at the right time. Or certain types of words can be blocked with a simple string substitution, so they appear as four asterisks instead of an F-bomb. These tools get the point across, but they’re a blunt approach: hard to fine-tune and nearly impossible to scale.
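
To see just how blunt that approach is, here is a minimal sketch of a string-substitution filter; the block list and masking behavior are illustrative assumptions, not anything taken from the event.

    # A deliberately blunt word filter, sketched for illustration only.
    # The block list is hypothetical; real lists are far larger and still miss context.
    import re

    BLOCKED_WORDS = {"frak", "smeg"}

    # Match any blocked word as a whole word, case-insensitively.
    _PATTERN = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in BLOCKED_WORDS) + r")\b",
        re.IGNORECASE,
    )

    def mask_blocked_words(message: str) -> str:
        """Replace each blocked word with asterisks of the same length."""
        return _PATTERN.sub(lambda m: "*" * len(m.group(0)), message)

    print(mask_blocked_words("What the frak was that?"))  # -> What the **** was that?

The limits are exactly the ones described here: the filter only catches exact matches on a fixed list, ignores context, and misses harassment that avoids the listed words, which is the gap the ML-based approaches below are meant to close.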

Natural language processing (NLP), AI, and machine learning-based models have opened up much more sophisticated interventions, with classification far more readily available. Whether your moderation team is overworked or your usual methods keep returning false positives, these algorithms let community owners spot problems before they start, and do it at scale.

“AI takes resources, effort and attention to train, but its operation is particularly resource-efficient and, at scale, opens up a whole new avenue for identifying the behavior we want to either minimize or amplify,” says Wynn. “It also creates new types of interventions, whether through chatbots or interesting types of augmentation that aren’t just ‘if/else’ string substitutions.”

AI/ML can also analyze broader patterns – not just text, but also communications that include voice transcripts – identifying behaviors like griefing or giving other players a hard time. These are the types of things that need to be reliably identified in these environments so they can be addressed or mitigated quickly.

“None of this is new. I’m sure people were looking to make Pong annoying to play against when it first came out,” Wynn says. “But what you see with the new AI/ML models being developed and published is that you don’t have to be a data scientist to translate these large language models into something that actually works for your game, even if you’re a small studio or you’re trying to do it yourself. Instead, you have an API from someone like Cohere that you can just grab and start playing with right away to see the benefits.”
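
As a rough illustration of what grabbing such an API might look like, the sketch below uses Cohere’s classify endpoint from Python to label chat messages with a handful of few-shot examples. The labels, example phrases, and field access are assumptions made for this sketch, and the exact client call can differ across SDK versions; it is not the approach presented in the event itself.

    # Illustrative sketch only: a few-shot toxicity classifier over chat messages
    # using Cohere's Python SDK. Labels and examples are made up; check the
    # current SDK docs, since exact class and field names vary by version.
    import cohere

    co = cohere.Client("YOUR_API_KEY")  # assumption: a valid Cohere API key

    # A tiny, hypothetical labeled set; a real deployment would use the
    # community's own vernacular and policies (see the agenda below).
    examples = [
        cohere.ClassifyExample(text="gg everyone, great match", label="benign"),
        cohere.ClassifyExample(text="nice clutch, well played", label="benign"),
        cohere.ClassifyExample(text="uninstall the game, you are worthless", label="toxic"),
        cohere.ClassifyExample(text="keep feeding and I'll report your whole team", label="toxic"),
    ]

    def flag_messages(messages):
        """Classify each chat message and return (message, label, confidence)."""
        response = co.classify(inputs=messages, examples=examples)
        return [(c.input, c.prediction, c.confidence) for c in response.classifications]

    for msg, label, confidence in flag_messages(["you're trash, quit now", "good game all"]):
        print(f"{label:>6} ({confidence:.2f}): {msg}")

Even a sketch like this makes the contrast with the word filter clear: the model judges whole messages in context rather than matching fixed strings, which is what lets this kind of classification run at scale.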

To learn more about identifying the patterns that cause communities to start breaking down, the AI/ML solutions available to everyone, the most effective ways to implement them, and more, don’t miss this VB On-Demand event.

Watch for free on demand here!

Agenda

  • Adapt the tools to your community’s unique vernacular and policies
  • Increase the ability to understand the nuances and context of human language
  • Use AI language models that learn as toxicity evolves
  • Significantly accelerate the ability to identify toxicity at scale

Presenters

  • David Wynn, Head of Solutions Consulting, Google Cloud for Games
  • Mike Lavia, Corporate Sales Manager, Cohere
  • Dean Takahashi, Lead Writer, GamesBeat (moderator)
