Allegations of Suppressed Research on Child Safety by Meta Employees

Recent revelations have sparked significant concern about children's safety on social media platforms. Four whistleblowers, two current and two former employees, have come forward claiming that their employer, Meta, may have intentionally suppressed critical research related to children's safety. The allegations were detailed in a report by a prominent news outlet.

According to the whistleblowers, the company changed its policies on sensitive research topics shortly after leaked internal documents revealed that its own studies had found Instagram could harm the mental health of teenage girls. The policy changes came just six weeks after whistleblower Frances Haugen's disclosures in 2021, which ignited a series of congressional hearings on child safety in the digital realm, an issue that remains a pressing concern for governments worldwide.

The report details two strategies the company proposed to mitigate the risks of conducting sensitive research. One was to involve legal counsel in research discussions, placing the communications under attorney-client privilege. The other was for researchers to present their findings more ambiguously, steering clear of terms that could imply non-compliance or illegality.

One of the whistleblowers, Jason Sattizahn, a former researcher specializing in virtual reality, recounted an incident in which he was instructed to erase recordings of an interview. The interview included a disturbing account from a teenager who alleged that his younger brother had been sexually approached on the company's virtual reality platform.

A spokesperson for the company said that global privacy laws require the deletion of any information collected from minors under the age of 13 without verified parental consent. The whistleblowers counter that the documents they provided to Congress show a broader pattern of employees being discouraged from discussing or investigating the safety of children under 13 who use the company's virtual reality applications.

The company responded to these allegations by asserting that the examples cited by the whistleblowers are being misrepresented to fit a false narrative. They emphasized that since the beginning of 2022, they have approved nearly 180 studies related to social issues, including youth safety and well-being.

In a related lawsuit filed earlier this year, a former employee raised similar concerns about the company's practices. She claimed that, while responsible for strategies to market the virtual reality platform to teenagers and international users, she believed the app lacked sufficient measures to keep underage users from accessing it. She also highlighted persistent problems with racism on the platform.

While these allegations focus primarily on the company's virtual reality products, other offerings, such as AI chatbots, are also drawing growing scrutiny over their impact on minors. Recent reports indicated that the company's guidelines previously permitted chatbots to engage in romantic or suggestive conversations with children, raising further questions about the safeguards in place for young users.
