Meta Platforms has made a dramatic shift in its artificial intelligence strategy. The company created Meta Superintelligence Labs (MSL) in June 2025, bringing together top AI talent under one organization with a single goal: building superintelligence that exceeds human intelligence in all areas.
This move came after Meta's flagship Llama 4 models disappointed the AI community in April 2025. The company faced criticism over rushed releases and benchmark manipulation. Now, Meta has appointed 28-year-old Alexandr Wang as Chief AI Officer, restructured its entire AI division, and invested billions in compute infrastructure.
The question everyone asks: Can Meta catch up to rivals like OpenAI, Anthropic, and Google in the race toward AGI?
Meta Superintelligence Labs: A Complete AI Reorganization
Meta Superintelligence Labs represents the biggest change to Meta's AI operations since the company founded its Fundamental AI Research (FAIR) lab in 2013. MSL consolidated AI projects into one division with the goal of creating "personal superintelligence," where AI is built for individual fulfillment rather than just enterprise or research applications.
The reorganization dissolved Meta's previous AGI Foundations group and created four focused teams: research, training, products, and infrastructure. This structure aims to speed up decision-making and unify Meta's AI strategy across all divisions.
Leadership Changes That Shocked the Industry
Alexandr Wang joined Meta in June 2025 as the company's first-ever Chief AI Officer after Meta purchased 49% of his company, Scale AI, in a deal worth $14.3 billion. Wang now leads MSL alongside Nat Friedman, former CEO of GitHub, who serves as vice president of product and applied research.
The appointment raised eyebrows across Silicon Valley. Wang is an entrepreneur, not a computer scientist with decades of AI research experience. At just 28 years old, he's leading teams of PhD researchers and veteran AI scientists.
Yann LeCun, Meta's departing chief AI scientist and one of the "godfathers of AI," called Wang "young" and "inexperienced" in research. LeCun acknowledged Wang learns quickly but questioned whether he understands how to attract and manage research talent.
The Numbers Behind Meta's AI Investment
Meta committed massive resources to its superintelligence push:
| Investment Category | Amount | Purpose |
|---|---|---|
| Scale AI Acquisition | $14.3 billion | Acquire Wang and data infrastructure expertise |
| AI Infrastructure (2025) | Up to $65 billion | Build compute clusters and data centers |
| Signing Bonuses | $100-150 million | Attract top AI researchers from rivals |
| Louisiana Data Center | $2.7 billion | Hyperion facility for training and inference |
CEO Mark Zuckerberg called 2025 "a defining year for AI" and positioned these investments as critical to Meta's future. The company aims to deploy one of the world's largest AI compute infrastructures by 2026.
The Llama 4 Crisis That Changed Everything
Before MSL existed, Meta faced a crisis that shook confidence in its AI capabilities. The Llama 4 models, released in April 2025, became a turning point for the company.
What Went Wrong With Llama 4
Meta researchers submitted different versions of the Llama 4 Maverick and Llama 4 Scout models to different benchmarks to maximize scores, rather than evaluating a single version across all benchmarks as standard practice requires. This made the models appear more capable than they actually were.
When independent researchers tested Llama 4 after its public release, their results didn't match Meta's claimed benchmarks. The AI community reacted with anger and disappointment. What was supposed to be Meta's competitive answer to GPT-4 and Claude became a public relations disaster.
LeCun later admitted the "results were fudged a little bit" and that CEO Mark Zuckerberg was "really upset and basically lost confidence in everyone who was involved" in the release.
The Fallout and Restructuring
Zuckerberg "sidelined the entire GenAI organisation" following the Llama 4 controversy. This decision led to:
- Creation of Meta Superintelligence Labs as a new organization
- Hiring of Wang and Friedman to lead AI efforts
- 600 job cuts in October 2025 targeting FAIR and other AI teams
- Shift in focus from open-source to potentially closed-source models
The cuts hit Meta's Fundamental AI Research (FAIR) group, the lab founded in 2013 by Yann LeCun, along with product AI and infrastructure teams. However, the company's newer TBD Lab, focused on training next-generation foundation models, remained protected and continued hiring.
FAIR's Decline: From Research Powerhouse to Marginalized Lab
Meta's Fundamental AI Research lab once stood as the crown jewel of corporate AI research. FAIR attracted top academic talent by offering university-style freedom with corporate resources and salaries.
The Golden Years
FAIR partnered with Google, Amazon, IBM, and Microsoft in 2016 to create the Partnership on Artificial Intelligence to Benefit People and Society. The lab's contributions shaped the entire AI field:
- Released PyTorch in 2017, now one of the most popular machine learning frameworks
- Advanced self-supervised learning techniques
- Pioneered work in computer vision and natural language processing
- Published groundbreaking research in generative AI
Corporate Control Tightens
Meta recently tightened oversight on FAIR's work, requiring an extra layer of review before research papers can be published, a move that sparked outrage inside the lab. Researchers who once enjoyed remarkable independence now face alignment with Meta's product roadmap and brand protection concerns.
Seven former Meta employees described FAIR as "slowly but surely withering" as blue-sky research within Big Tech companies has slowed, with FAIR getting less computing power than teams focused on generative AI.
LeCun's Departure
In November 2025, Yann LeCun announced he was leaving Meta to launch his own AI startup focused on "world models" - AI systems that understand real-world physics rather than just generating language. LeCun said Meta's new AI hires are "completely LLM-pilled," while he maintains LLMs are a "dead end" for achieving superintelligence.
His departure marks the end of an era. LeCun joined Meta in 2013 and became chief AI scientist in 2018. His exit, combined with other departures, signals a fundamental shift in Meta's AI culture from academic research to product-focused development.
Meta's New AGI Roadmap
Despite setbacks, Meta outlined an aggressive timeline for reaching superintelligence. The company's approach differs from competitors by emphasizing "personal superintelligence" - AI systems designed for individual users rather than enterprise applications.
Near-Term Milestones
Meta's public roadmap includes:
By Late 2025:
- Release next-generation Llama models with enhanced reasoning
- Improve planning abilities across model families
- Demonstrate progress beyond Llama 4's capabilities
By 2026:
- Show prototype AI agents capable of autonomous learning
- Build systems that can set and pursue goals in simulated environments
- Deploy agents that work effectively in real-world tasks
- Launch the "Avocado" model as Llama's successor
Beyond 2026:
- Achieve scalable, general-purpose AI systems
- Meet AGI benchmarks across multiple domains
- Create AI that matches or exceeds human intelligence
The Avocado Model: Meta's Next Big Bet
Meta is developing a next-generation AI model codenamed Avocado, originally planned for late 2025 but now delayed to the first quarter of 2026. Unlike the open-source Llama series, Avocado may adopt a closed-source commercial approach.
The shift from open-source to closed-source represents a strategic pivot. Meta previously positioned itself as a champion of open AI, releasing Llama models freely to developers. But disappointing results and concerns about Chinese companies like DeepSeek using Llama's architecture prompted a rethink.
Avocado is described as a "frontier-level" large model targeting GPT-5 and Gemini 3 Ultra performance, with Meta evaluating a complete closed-source approach offering only API and hosted services.
Technical Advances: World Models and New Architectures
MSL continues several promising research directions that could differentiate Meta's approach to superintelligence.
World Models Research
MSL focuses on "world models" - AI models that understand the physical dynamics of the real world and predict how interactions between different objects will play out. This research culminated in Video Joint Embedding Predictive Architecture 2 (V-JEPA 2), released in June 2025.
World models represent a departure from pure language model scaling. They aim to give AI systems grounded understanding of physics, spatial relationships, and cause-and-effect in the real world.
Recent Research Releases
FAIR announced five projects in mid-2025 advancing perception, language modeling, robotics, and collaborative AI agents, with releases including a large-scale vision encoder called Perception Encoder. Other releases focused on:
- Fine-grained video understanding with 2.5 million human-labeled samples
- Dynamic Byte Latent Transformer for byte-level language processing
- Meta Locate 3D for robot spatial understanding
- Social agent research for human-AI collaboration
These research directions show Meta still invests in fundamental AI capabilities beyond just scaling language models.
The Competitive Landscape: How Meta Stacks Up
Meta faces intense competition in the race toward superintelligence. Understanding where the company stands requires looking at the broader AI landscape.
Current AI Leaders Comparison
| Company | Latest Model | Strengths | Market Position |
|---|---|---|---|
| OpenAI | GPT-4 Turbo | Leading commercial AI, strong reasoning | Market leader |
| Anthropic | Claude 3 Opus | Constitutional AI, safety focus | Strong challenger |
| Google | Gemini Ultra | Multimodal capabilities, search integration | Tech giant resources |
| Meta | Llama 4 | Open-source approach, social media data | Catching up |
| DeepSeek | DeepSeek-V3 | Cost-efficient training, strong performance | Emerging threat |
Meta's Challenges
Several obstacles stand in Meta's path to AGI leadership:
Trust Deficit: The Llama 4 benchmark manipulation damaged Meta's credibility with developers and researchers. Rebuilding that trust will take time and consistent delivery.
Talent Retention: LeCun predicted "a lot of people who haven't yet left will leave" Meta's GenAI team. High turnover disrupts long-term research projects.
Technical Debt: Meta's focus on rapid product development may sacrifice the deep research needed for breakthrough advances.
Cultural Clash: Friction between academic-minded researchers and product-focused leadership creates internal tension.
Open Source vs. Closed Source: Meta's Strategic Dilemma
Meta built its AI reputation on open-source contributions. PyTorch powers much of the AI industry. Llama models gave developers free access to capable language models. This openness created goodwill and accelerated adoption.
Why Meta May Abandon Open Source
The Avocado model's potential shift to closed-source reflects several concerns:
Competitive Pressure: The Llama 4 release failed to meet developer expectations, positioning Meta behind closed-model ecosystems from OpenAI, Google, and Anthropic.
Chinese Competition: Companies like DeepSeek used Llama's open architecture to build competitive models at a fraction of Meta's cost. This undercut Meta's advantage while potentially benefiting geopolitical rivals.
Monetization: Closed models allow direct revenue through API access and licensing. Open models primarily drive value through Meta's existing products.
Control: Proprietary models give Meta more control over capabilities, safety features, and commercial terms.
The Case for Staying Open
Abandoning open source carries risks:
- Damages Meta's reputation as an open AI champion
- Reduces developer ecosystem and community contributions
- Limits academic research using Meta's models
- Removes differentiation from OpenAI and Google
Meta hasn't made a final decision. The company may pursue a hybrid approach, keeping some models open while making frontier models proprietary.
What Industry Experts Say
The AI community has strong opinions about Meta's superintelligence push and recent changes.
Skepticism About Leadership
Former Meta employees and AI researchers expressed concerns about Wang's appointment. Beyond LeCun's public criticism, others privately questioned whether an entrepreneur without deep AI research experience can lead teams of PhD scientists to breakthrough discoveries.
Supporters point to Wang's track record building Scale AI from a pool house startup to a company valued at $29 billion. They argue his product focus and execution skills may be exactly what Meta needs to ship competitive AI products.
Concerns About Research Culture
Former FAIR researchers echo the description of the lab as "slowly but surely withering," starved of computing power relative to commercial teams. The shift from research to product focus worries scientists who believe AGI breakthroughs require fundamental research, not just engineering optimization.
Questions About Timelines
Many experts view Meta's aggressive AGI timeline with skepticism. Building superintelligence by 2026 or shortly after seems unrealistic given current technological limitations and the company's recent struggles.
Zuckerberg revealed that Meta has witnessed "early glimpses of self-improvement with the models", but these claims remain unverified and may represent incremental progress rather than paradigm shifts.
Lessons From Meta's Journey
Meta's superintelligence push offers important lessons for the AI industry and other companies pursuing AGI.
1. Benchmarks Can Mislead
The Llama 4 controversy shows how benchmark gaming damages credibility. When companies optimize for test scores rather than real-world capabilities, users lose trust and competitors gain advantages through honest reporting.
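The inflation effect is mechanical, not a matter of intent. A minimal sketch with hypothetical numbers (the variant names, benchmark names, and score distribution are all invented for illustration, not Meta's actual data) shows why picking the best model variant per benchmark always reports a higher average than fixing one variant for every test:

```python
import random

random.seed(0)

# Hypothetical setup: three equally capable model variants, five benchmarks.
# Scores differ only by random noise around the same true capability (70).
variants = ["v1", "v2", "v3"]
benchmarks = ["bench_a", "bench_b", "bench_c", "bench_d", "bench_e"]

scores = {v: {b: 70 + random.gauss(0, 3) for b in benchmarks} for v in variants}

# Honest reporting: one fixed variant evaluated on every benchmark.
honest = sum(scores["v1"][b] for b in benchmarks) / len(benchmarks)

# Gamed reporting: cherry-pick whichever variant scored best on each benchmark.
gamed = sum(max(scores[v][b] for v in variants) for b in benchmarks) / len(benchmarks)

print(f"honest (single variant): {honest:.1f}")
print(f"gamed (best per bench):  {gamed:.1f}")

# The per-benchmark max over noisy variants is always >= any single variant's
# score, so the gamed average can only exceed or equal the honest one.
assert gamed >= honest
```

Because the maximum over several noisy runs is never below any single run, the cherry-picked average systematically overstates capability even when no individual score is fabricated, which is why evaluating one fixed checkpoint across all benchmarks is the accepted norm.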
2. Culture Matters in AI Labs
Successful AI research requires balancing academic freedom with commercial goals. Meta's struggle to maintain FAIR's research culture while pushing for product delivery illustrates this tension. Companies that tilt too far in either direction risk losing talent or relevance.
3. Leadership Changes Create Uncertainty
Bringing in outside leaders disrupts established teams. While fresh perspectives can spark innovation, transitions create instability. Meta's rapid leadership changes and restructuring may slow progress in the short term even if they improve long-term direction.
4. Open Source Creates Complications
Releasing powerful AI models openly accelerates adoption but creates competitive and geopolitical risks. Meta's open-source strategy helped build its AI reputation but may have strengthened rivals. Finding the right balance between openness and control remains an unsolved challenge.
The Road Ahead for Meta AI
Meta's path to superintelligence faces significant obstacles but isn't impossible. The company has key advantages:
Financial Resources: Meta's $65 billion AI investment dwarfs most competitors' budgets. Money can't guarantee breakthrough discoveries, but it enables experimentation at massive scale.
Data Access: Meta's social media platforms generate unique training data from billions of users. This proprietary dataset could fuel models with better understanding of human communication and behavior.
Product Integration: Unlike pure research labs, Meta can deploy AI across Facebook, Instagram, WhatsApp, and Reality Labs. This distribution gives Meta immediate feedback and real-world testing.
Infrastructure Expertise: Meta built some of the world's largest computing systems. This experience translates directly to building AI training clusters.
Critical Success Factors
For Meta to achieve its superintelligence goals, several things must happen:
Talent Retention: The company must stop hemorrhaging top researchers. Creating an environment where scientists want to stay requires more than high salaries.
Technical Breakthroughs: Incremental improvements won't close the gap with leaders. Meta needs architectural innovations or training methods that leapfrog competitors.
Execution Discipline: Avoiding another Llama 4-style release requires better processes, testing, and communication. Wang's product focus could help here.
Strategic Clarity: Meta must decide whether to pursue open or closed models and commit to that path. Wavering between approaches wastes resources and confuses the market.
What This Means for the AGI Race
Meta's superintelligence push intensifies competition among tech giants racing toward AGI. The company's restructuring, massive investments, and willingness to make dramatic changes show how seriously all players take this race.
Industry Implications
Meta's moves will ripple through the AI industry:
Talent Wars: Meta's reported $100 million signing bonuses will force competitors to match compensation or lose researchers.
Compute Arms Race: Meta's infrastructure investments push others to match that scale or find more efficient approaches.
Open Source Retreat: If Meta closes its models, the open-source AI movement loses a major champion. This could concentrate AI power in fewer hands.
Benchmark Scrutiny: The Llama 4 controversy will make the industry more skeptical of claimed performance numbers. Third-party verification will become essential.
Timeline Compression
Meta's aggressive roadmap puts pressure on competitors to accelerate their own timelines. This compression carries risks - rushing development to beat rivals may sacrifice safety testing or lead to overhyped releases.
Final Thoughts
Meta's creation of Superintelligence Labs represents a bold bet on centralized AI development under new leadership. The company acknowledged that its previous approach wasn't working and made dramatic changes.
Whether these changes will succeed remains uncertain. Wang and Friedman inherit significant challenges: damaged credibility from Llama 4, departing research talent, intense competition, and ambitious timelines. But they also control vast resources and lead a company willing to make big bets.
The next 18 months will determine whether Meta's restructuring pays off. If Avocado delivers on its promises and MSL ships breakthrough capabilities, the company could regain its position as an AI leader. If results disappoint again, Meta risks falling permanently behind in the most important technology race of our time.
For the broader AI field, Meta's experience offers a case study in how not to release models, how to restructure for focus, and how difficult the path to superintelligence truly is. Even companies with enormous resources and top talent struggle to make consistent progress toward AGI.
The race to superintelligence continues. Meta has placed its bets and committed its resources. Now comes the hard part: actually building the future the company promises.
