
Big AI Labs Flunk Safety Test: Meta, xAI Get Worst Grades

New safety index reveals Meta, DeepSeek, and xAI received the lowest possible grades. Even top labs like OpenAI barely scored a C.

โœ๏ธ
main character energy ๐Ÿ’ซ
Saturday, December 6, 2025 · 2 min read
Image: Fortune

## What's Happening

Major AI labs Meta, DeepSeek, and xAI have reportedly received "some of the worst grades possible" on a new existential safety index, an assessment that highlights significant concerns about how these powerful AI systems are being developed. Industry leaders Anthropic, OpenAI, and Google DeepMind secured the top three spots, but even their best scores were only a C+ or C, pointing to a widespread lack of top-tier safety protocols across the board.

## Why This Matters

These low safety scores are jarring because they involve the companies building some of the most advanced AI models, raising questions about responsible innovation as these systems are developed and deployed. Existential safety refers to the risks AI poses to humanity's long-term survival, from autonomous systems making critical errors to the potential for uncontrollable superintelligence. Poor grades suggest these labs may not be adequately addressing those dangers. This situation matters for several key reasons:

  • Public Trust: Low safety scores erode public confidence in AI developers and the technology itself.
  • Regulatory Scrutiny: It could trigger increased government oversight and calls for stricter regulations on AI development.
  • Future Risks: Inadequate safety measures now could lead to unforeseen and potentially catastrophic consequences as AI becomes more powerful.

## The Bottom Line

The findings from this existential safety index paint a concerning picture: even leading AI companies are struggling to implement strong safety measures. With AI advancing rapidly, are we prioritizing innovation over the fundamental safeguards required for our collective future?

Originally reported by Fortune
