VR

A new challenge for Meta: How to moderate the metaverse

The company faces familiar moderation questions as its metaverse products grow.



In December, Meta made its initial stab at the metaverse—Horizon Worlds—open to all Quest headset users in the US and Canada. Since then, its monthly active users have grown tenfold, to 300,000, per The Verge.

  • Meta doesn’t disclose headset sales, but Qualcomm estimated that the company had sold 10+ million of its Quest 2 units as of November 2021.

But with user growth has come heightened scrutiny around an increasingly familiar thorn in the Facebook parent company’s side: content moderation. In recent months, reporting has raised questions about several key metaverse moderation issues, namely:

Child safety. Although Horizon Worlds is officially limited to adults (18+), The Washington Post reports that kids are “flocking” to the platform, and that Meta “declined to say whether it had taken any measures aimed at protecting children from [grooming or exploitation],” or whether it had received any reports of it.

  • In response, the company stated that it was committed to a goal of making Horizon “safe” and said that it is actively monitoring platform usage to inform policy decisions.

Sexual harassment allegations. Two women have alleged that their avatars were virtually groped by others in Horizon Worlds or Horizon Venues. Earlier this month, Meta added a “personal boundary” feature that prevents avatars from getting into each other’s personal space; it is on by default and can’t be disabled.

Misinformation. After Meta reportedly declined to answer specific questions about how it would moderate safety issues in Horizon Worlds—like the potential for child abuse, harassment, and misinformation—BuzzFeed News built its own private World, dubbed the “Qniverse,” to test the company’s VR-moderation systems. Its conclusion: Content that is banned on Instagram and Facebook did not appear to be banned in Horizon Worlds.

  • BuzzFeed packed the Qniverse full of phrases that Meta has “explicitly promised to remove from Facebook and Instagram” (e.g., “Covid is a hoax”), but found that even after flagging the group—multiple times—through Horizon’s user reporting function, the problematic World was not found to violate Meta’s Content in VR Policy.
  • Once BuzzFeed brought the Qniverse to the attention of Meta's comms team, the company reversed its previous moderation rulings and removed the World.

Looking ahead…Meta’s Horizon Worlds and Venues have seen rapid growth in recent months, but from a very small base. And as The Verge points out, it’s impossible to know whether Meta can sustain that growth rate.

But as the company pours more and more resources into building a VR experience capable of captivating the masses, it will continue to face questions about how to keep that platform safe for an ever-growing user base.
