Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity. More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days.
The initial figures had been even higher, but were revised down following a reassessment. Stone, the spokesman, said the survey was conducted among Instagram users worldwide and did not specify a precise definition for unwanted advances.
The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued. And if the company was going to address issues such as unwanted sexual advances, it would have to begin letting users “express these experiences to us in the product.”
Other teams at Instagram had already worked on proposals to address the sorts of problems that BEEF highlighted. To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw. It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical. And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”