“Not his main concern” – Why I urgently warn parents about Meta


I’ll start with a sentence every mother and father should know:

According to court records, Meta CEO Mark Zuckerberg is said to have stated that the safety of children is “not his main concern” as he focuses on the Metaverse.

This wording comes from an internal text message from 2021, quoted in newly unsealed documents from a US class action lawsuit against Meta. In it, Zuckerberg writes, in essence, that he would not claim child safety is his personal main focus when he is “more focused on a number of other areas”, such as building the Metaverse.

Meta denies that this should be read as a misplacement of priorities. Legally, the case is still open. For me, however, the crucial question is a different one:

What does it mean for you as a mother or father if the CEO himself explicitly does not name child safety as his personal priority—while his company is aggressively fighting for your children’s attention?

1. The new wave of revelations: Project Mercury & Co.

In recent days, extensive court records have become public. They contain internal studies and chat logs that Meta originally wanted to keep secret.

Key point: a research project codenamed “Project Mercury” (2020). Together with the market research firm Nielsen, Meta investigated what happens when people deactivate Facebook for a week. According to internal documents, the result was:

People who did not use Facebook for a week reported less depression, anxiety, loneliness, and social comparison.  

Instead of publishing these results or adapting its own products, Meta discontinued the project, according to the lawsuit. Internally, the negative effects were dismissed as distorted by “existing media coverage”, while the researchers signaled that the effects were robust. Employees compared the approach to the tobacco industry, which suppressed its own cancer data for decades.

At the same time, external studies show almost the same thing: even a short social media break measurably improves well-being and reduces symptoms of depression and anxiety.

If a corporation has both internal and external evidence that less use improves mental health, and still continues to optimize for maximum use, then that is not an “accident.” It is a business decision.

2. Children as a target group—explicitly

The major US class action lawsuit “In re: Social Media Adolescent Addiction / Personal Injury” (MDL 3047) now consolidates over 2,000 cases brought by families, school districts, and states against Meta and other platforms. They all allege that the products are deliberately designed to be addictive and massively harm children.

From the newly unsealed documents:

• Meta’s internal research assumed that millions of under-13s use Facebook and Instagram, even though this is officially prohibited.  

• In internal chats, an employee complains: “Zuck has been talking about this for a while … targeting 11-year-olds feels like the tobacco industry—we’re basically saying: We have to get them hooked young.”  

• Strategy papers specifically analyze the psychology of “tweens” (5–12 years) to develop products for this age group.  

Added to this is the major lawsuit by 33 US states, accusing Meta of deliberately exploiting dopamine mechanisms and social comparison processes to draw young people into a kind of endless scroll loop. The attorneys general openly speak of a youth mental health crisis fueled by Instagram and others.  

In short: Children and adolescents are not collateral damage for Meta, but a strategically planned market.

3. Zuckerberg’s public line: “No causality, but we’re investing so much”

Publicly, Zuckerberg paints a different picture.

Before the US Senate in 2024, he visibly apologized to parents whose children were exploited or driven to suicide via social media—and at the same time emphasized that existing research shows no causal link between social media use and poorer mental health in adolescents.  

In his written statement, Meta lists long tables of safety features and billions in investments in moderation teams, AI filters, and reporting systems.  

Back in 2021, after the revelations of whistleblower Frances Haugen, Zuckerberg wrote on Facebook that at the core of all the allegations is the claim that Meta puts profit over safety: “that’s simply not true.”

This is how tobacco companies talk, too.

They deny causality, emphasize voluntary filters, and point to “lack of clear studies,” while internally knowing exactly how addictive their product is.

4. When algorithms find vulnerable teens—and give them even more

In October 2025, another internal Meta study became public: Researchers examined what Instagram shows to those teenagers who say they regularly feel bad about their bodies because of the app. The result:

• This group saw three times as much “eating-disorder-adjacent” content (content close to eating disorders, extreme thinness ideals, body shaming) as peers without this problem.

• Overall, more than a quarter of their feed consisted of “risky” or distressing content (provocation, suffering, self-harm).  

The researchers themselves point out that no clear causal direction can be derived from this data—do vulnerable teens seek out such content, or does the feed make them vulnerable?

The crucial point is a different one: Instagram’s algorithms evidently profile vulnerable teenagers in a way that exposes them to potentially harmful content far more often.

Meta spokesperson Andy Stone says the study demonstrates Meta’s commitment to better products. At the same time, the report concedes that Meta’s existing filters fail to detect 98.5 % of this sensitive content.

To me, that sounds less like “protection” and more like loss of control at full speed.

5. Meta bots: Flirting AI with minors

While all this is going on, Meta is rapidly integrating its AI assistants into WhatsApp, Instagram, and Facebook. Internal guidelines published by Reuters in August 2025 show:

• The bot rules explicitly allowed minors to be drawn into conversations that are “romantic or sensual.”  

• The AI was allowed to provide false medical and legal information as long as it attached a brief disclaimer.  

Meta confirmed the authenticity of the documents and stated that problematic passages have since been removed. After massive public pressure and senators announcing an investigation, the company announced new “PG-13 guidelines” for teen accounts and promised to allow parents to completely disable AI chats for their children in the future.  

The order is important:

1. First, flirting AI avatars are rolled out to children.

2. Then an investigation uncovers it.

3. Only then do restrictions and apologies follow.

The pattern repeats itself.

6. “Metaverse first, safety later”—the VR front

Zuckerberg’s Metaverse vision means even more immersive environments, even fewer barriers, and even more direct interaction, including with strangers.

Whistleblowers and researchers presented documents to the US Congress in September 2025, according to which Meta’s legal department systematically tried to slow down or water down research on child safety in VR:

• After researchers documented cases where children under 13 in Horizon Worlds were sexually approached or harassed by adults, supervisors allegedly demanded that corresponding notes be deleted or toned down.  

• Internal instructions recommended referring to “alleged youth” rather than “children” in reports in order to reduce legal exposure.

• A former marketing manager of Horizon Worlds has filed a complaint with the US consumer protection agency FTC accusing Meta of knowingly allowing children under 13 into the Metaverse via adult accounts in order to inflate user numbers.

Meta rejects all allegations as “distorted” and points to new youth protection features, parent dashboards, and default privacy settings.

From a parent’s perspective, the sober calculation remains:

A hyper-immersive environment + known problems with harassment, grooming, and lack of moderation + legally redefining “children” = not a place where your 10- or 12-year-old child should be wandering around alone.

7. How it all fits together neurobiologically

• The child’s brain develops impulse control (prefrontal cortex) significantly later than the reward system (dopamine).

• Social media design (endless feed, variable rewards, likes, stories, “reels”) exploits exactly these mechanisms: small, unpredictable dopamine kicks.

• Children and adolescents with ADHD, autism, or already high sensitivity are especially susceptible to this pattern: they often regulate emotions more through external stimuli.

If a product is designed so that even adults have trouble putting it down—then it is systematically unfair for 11-, 13-, or 15-year-olds. That’s why many lawsuits speak of “product design-induced addiction,” not “poor media literacy.”

Meta’s own studies on “problematic use”, in which a significant proportion of users report symptoms such as loss of control and withdrawal, show that the company knows internally exactly how addictive its products are. Publicly, only the small proportion with “severe” problems was communicated for a long time; the rest was swept under the rug.

8. What does this mean concretely for parents?

I’ll summarize this structurally. This is not legal advice, but a risk assessment from the perspective of neurobiology and existing evidence.

8.1 Meta products for children under 13

My clear recommendation:

No Facebook, no Instagram, no Horizon Worlds, no Meta AI chatbots for children under 13. Period.

Meta officially prohibits this anyway. Internally, however, the documents show that millions of children under 13 are on the platforms—often with the company’s knowledge.  

If your child still has accounts here (friends, school, “everyone else is doing it”):

• Delete the account or at least move it completely to a parent email address and manage it yourself.

• Explain honestly why: “Not because there is something wrong with you, but because this platform is deliberately designed to overwhelm you.”

8.2 Teenagers (13–17)

If a complete ban is not realistic, then:

1. Structure instead of panic bans

• Strictly limited usage times (e.g., 30–60 minutes/day) with a clear daily structure.

• No smartphone in the bedroom; especially with ADHD/autism, sleep problems are otherwise almost guaranteed.

2. Put on the brakes at the product level

• Set accounts to private, limit followers to real contacts.

• Block direct messages from strangers and go through the reporting functions together.

• Deactivate AI chats in Meta apps as soon as the new parental settings are available, and strictly prohibit them until then.

3. Coach on content

• Look at feeds together: “How do you feel after 10 minutes of reels? Lighter or smaller?”

• Actively address eating disorders, self-harm, suicide, extreme ideals—don’t wait for your teen to bring it up.

4. VR / Metaverse

• For minors, I currently consider Meta’s VR worlds indefensible: too much uncertainty, too many documented assaults, too little reliable moderation.

8.3 Schools and daycare centers

If Meta approaches your school with “safety roadshows” or “educational partnerships” (a growing trend, according to the new court documents), then the following applies:

• Demand written disclosure of what data is collected, who finances the materials, and which lobbying organizations are behind it.  

• Insist on independent sources (universities, public institutions, NGOs without tech money) when it comes to media literacy or “online safety.”

9. “Not his main concern”—what you can make of it

If the CEO of a corporation writes in internal messages that child safety is not his personal main concern because he prioritizes other projects like the Metaverse, then I believe him.  

• At the same time, internal studies show that less Facebook/Instagram use reduces mental health problems—and these studies are stopped.  

• Internal data proves that vulnerable teenagers see particularly large amounts of problematic content—and the filters detect almost none of it.  

• AI chatbots flirt with minors until the media and politics intervene.  

• VR researchers report sexual assaults on children, and instead of consistent protection, there is legal wordplay.  

If you ask me as Dr. AuDHS how you should assess this, then my sober diagnosis is:

Meta behaves toward children like a high-risk company that only systematically reacts when public pressure is greater than growth pressure.

And you don’t let your children play alone in the living room with a company like that.

To take away in one sentence

As long as Mark Zuckerberg does not clearly and verifiably show that child safety is actually his main concern—including transparent research, independent audits, and safety-by-design—you should treat Meta products for your children like cigarettes or alcohol:

available, legal—but taboo or strictly rationed for minors.
