SANTA FE, New Mexico — One day after New Mexico Attorney General Raúl Torrez secured a historic $375 million judgment against Meta Platforms over its failure to protect children from predatory behavior on Instagram and Facebook, officials in five additional states announced Monday that they are advancing or accelerating parallel investigations into the company's algorithmic design and child safety practices.
Attorney general offices in Colorado, Illinois, Washington, Arizona, and Massachusetts confirmed they are reviewing the evidentiary record from the New Mexico case, which prosecutors said included internal Meta documents showing the company was aware of systemic risks to minors and failed to act. Legal experts described the New Mexico judgment as a "template ruling" that lowers the evidentiary bar for other states pursuing similar claims.
Meta issued a statement Monday disputing the characterization of the New Mexico judgment and indicating it would appeal, arguing that the ruling misrepresents the company's ongoing investment in child safety tools, including AI-based content moderation and age verification systems. The company pointed to its Teen Accounts feature and parental supervision tools as evidence of proactive reform.
The development adds significant legal and financial pressure on Meta at a moment when the company is already navigating regulatory scrutiny of its AI governance practices. Analysts at Bernstein estimated Monday that a coordinated multi-state action could expose Meta to aggregate liability of $2 billion to $4 billion, potentially reshaping how social platforms across the industry approach algorithmic recommendation systems for underage users.
Child safety advocacy groups, including the Technology Coalition and the National Center for Missing and Exploited Children, called on Congress Monday to use the New Mexico outcome as a catalyst to pass pending federal legislation that would impose nationwide safety design standards on social media platforms serving users under 18.