Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced. Yet Meta’s publicly released prevalence numbers were invariably tiny.
According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated: it fell below 0.05% of views, the company’s threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was similarly minimal, and bullying and harassment violations appeared in just eight of every 10,000 views.
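For readers who want to see the arithmetic behind those figures, here is a minimal sketch of how a prevalence-style number is computed from sampled views. The function name, structure, and threshold constant are illustrative assumptions for this article, not Meta’s actual methodology or code.

```python
# Illustrative only: a prevalence-style metric as described in the article,
# i.e., the share of sampled views that show rule-violating content.
def prevalence_pct(violating_views: int, total_views: int) -> float:
    """Percentage of sampled views containing rule-violating content."""
    return 100.0 * violating_views / total_views

# Bullying and harassment: roughly eight violating views per 10,000 views sampled.
bullying = prevalence_pct(8, 10_000)
print(f"{bullying:.2f}%")  # 0.08%

# Assumed constant: the 0.05% floor cited as the point below which
# the estimate was considered too small to measure reliably.
MEASUREMENT_FLOOR_PCT = 0.05
print(bullying < MEASUREMENT_FLOOR_PCT)  # False; the bullying figure sits above the floor
```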
For a consultant, Bejar had unusually deep roots at Facebook. PHOTO: IAN BATES FOR THE WALL STREET JOURNAL
“There’s a grading-your-own-homework problem,” said Zvika Krieger, a former director of responsible innovation at Meta who worked with the Well-Being Team. “Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it.