
Inside the $1B Bet on World Models: A Complete Guide to the New Robotics AI Paradigm

World models vs LLMs: inside Yann LeCun’s $1.03B AMI Labs bet, JEPA architecture, and why physical AI could reshape robotics and the future of AI.

Sankalp Dubedy
March 19, 2026

The most significant funding story in AI right now has nothing to do with ChatGPT or another chatbot. On March 10, 2026, Yann LeCun — one of the most respected names in AI research — launched AMI Labs with $1.03 billion in seed funding. That is Europe's largest seed round in history. The valuation landed at $3.5 billion before the company had a single commercial product.

The bet is on world models: a fundamentally different approach to AI that challenges the dominance of large language models (LLMs). This is not a minor technical argument. It is a direct challenge to the way the entire AI industry has been built over the last five years.

This guide explains what world models are, why they matter for robotics, who is building them, how much money is flowing in, and what the competitive landscape looks like as of March 2026.


What Is a World Model? The Simple Explanation

A large language model like ChatGPT learns by predicting the next word in a sentence. It reads billions of pages of text and gets very good at pattern-matching language. The result is a system that can write, summarize, and reason about text with impressive skill.

A world model works differently. Instead of predicting words, it builds an internal representation of how the physical world works. It learns cause and effect. It can predict what will happen next when a robot arm moves in a certain direction. It understands that objects have weight, that gravity pulls things down, and that a glass placed near a table edge will fall if nudged.

Here is the core distinction in a simple table:

| Feature | Large Language Model (LLM) | World Model |
| --- | --- | --- |
| Primary training data | Text from the internet | Sensor data, video, physical interactions |
| Core objective | Predict the next token/word | Predict future states of the physical world |
| Strength | Coding, writing, summarization, Q&A | Robotics, planning, physical reasoning |
| Key weakness | Hallucinations; no physical grounding | Unproven at commercial scale; needs years of R&D |
| Architecture example | GPT-4, Llama, Claude | JEPA (LeCun), NVIDIA Cosmos, World Labs |
| Best suited for | Digital tasks | Real-world physical tasks |
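The difference in training objective can be made concrete with a toy sketch. This is illustrative only: the random scores and the falling-glass dynamics below are hypothetical stand-ins, not either system's real architecture.

```python
import random

random.seed(0)

def next_token(history, vocab_size=50_000):
    """LLM-style objective: score every candidate token given the text so
    far and return the most likely one (random scores stand in for a
    trained network)."""
    scores = [random.random() for _ in range(vocab_size)]
    return max(range(vocab_size), key=scores.__getitem__)

def next_state(state, action, dt=0.1, g=9.8):
    """World-model-style objective: given the current physical state and an
    action, predict the next state. Toy dynamics for a falling glass:
    state = (height_m, velocity_m_s), action = upward thrust in m/s^2."""
    height, velocity = state
    velocity += (action - g) * dt               # gravity pulls down, thrust pushes up
    height = max(0.0, height + velocity * dt)   # the floor stops the fall
    return (height, velocity)

# A glass nudged off a 1 m table edge: the model predicts that it falls.
state = (1.0, 0.0)
for _ in range(12):
    state = next_state(state, action=0.0)
print(round(state[0], 3))  # prints 0.0 -- the glass has hit the floor
```

The point of the contrast: one objective ranks symbols, the other rolls physical state forward in time, which is what a robot needs in order to plan.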

LeCun's argument, which he has made publicly for years, is that LLMs will never achieve true general intelligence because they have no model of physical reality. They cannot predict consequences of actions. They cannot plan. And this, he argues, is exactly why we still do not have a domestic robot that can clean a house as well as a human child can.


The $1.03B Bet: AMI Labs and the JEPA Architecture

Who Founded It

AMI Labs — short for Advanced Machine Intelligence Labs, pronounced "ah-mee" after the French word for "friend" — was officially launched on March 10, 2026. Yann LeCun co-founded it alongside Alexandre LeBrun, a serial French AI entrepreneur who previously founded VirtuOz (acquired by Nuance/Microsoft) and Wit.ai (a Y Combinator company later acquired by Meta).

LeCun left Meta in November 2025, shortly after the company shifted focus toward catching up with generative AI competitors. He had spent over a decade leading Meta's Fundamental AI Research (FAIR) group.

The Leadership Team

| Name | Role at AMI Labs | Previous Role |
| --- | --- | --- |
| Alexandre LeBrun | CEO | CEO of Nabla; founder of Wit.ai (acquired by Meta) |
| Laurent Solly | COO | Meta's VP for Europe |
| Saining Xie | Chief Science Officer | Researcher at Google DeepMind and Meta |
| Pascale Fung | Chief Research & Innovation Officer | Senior Director at Meta FAIR |
| Michael Rabbat | VP of World Models | Director of Research Science at Meta |

This is a significant concentration of AI talent. LeCun effectively rebuilt the core of Meta's FAIR team under the AMI banner.

The Investors

The seed round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other backers include Nvidia, Samsung, Toyota Ventures, Sea, and Temasek. Angel investors include former Google CEO Eric Schmidt, Mark Cuban, and World Wide Web inventor Tim Berners-Lee.

The Technology: JEPA

AMI Labs is building world models using a framework called JEPA — Joint Embedding Predictive Architecture. LeCun proposed this in 2022. It is the technical core of what makes world models different from LLMs.

| Concept | How LLMs Do It | How JEPA Does It |
| --- | --- | --- |
| Learning objective | Predict raw output (next token) | Predict abstract representations in latent space |
| Data handling | Processes all tokens equally | Focuses on meaningful patterns; ignores noise |
| Physical reasoning | Cannot reason about physics directly | Can model physical dynamics |
| Action prediction | Not designed for this | Action-conditioned: predicts what happens if you do X |
| Hallucination risk | High for physical/factual claims | Structurally reduced for physical tasks |

In plain terms: JEPA does not try to reconstruct every pixel of a video. Instead, it learns the abstract rules of how things change — what is important about how the world evolves — and ignores the unpredictable noise. This makes it far better suited for robotics, where a robot needs to plan actions based on how the environment will respond.
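A minimal sketch of that idea, comparing a pixel-reconstruction loss with a latent-prediction loss. Everything here is a hypothetical stand-in: the weights are random rather than trained, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trained networks: a shared encoder mapping high-dimensional
# observations to compact latents, and a predictor operating purely in
# that latent space.
W_enc = rng.normal(size=(8, 64)) / 8.0   # encoder: 64-dim frame -> 8-dim latent
W_pred = rng.normal(size=(8, 8)) / 8.0   # predictor: latent_t -> latent_t+1

def encode(x):
    return np.tanh(W_enc @ x)

frame_t = rng.normal(size=64)        # observation at time t (e.g. a flattened frame)
noise = 0.5 * rng.normal(size=64)    # unpredictable detail between frames
frame_t1 = frame_t + noise           # observation at time t+1

# Generative objective: reconstruct every pixel of the next frame.
# Even a perfect model is penalized for the unpredictable noise.
pixel_loss = np.mean((frame_t1 - frame_t) ** 2)

# JEPA-style objective: predict the next frame's *latent*, where the
# encoder is free to discard what cannot be predicted.
z_pred = W_pred @ encode(frame_t)
latent_loss = np.mean((encode(frame_t1) - z_pred) ** 2)

print(z_pred.shape)  # the prediction target lives in an 8-dim latent space
```

The design choice this illustrates: by moving the loss into latent space, the model is never forced to account for pixel-level noise it could not have predicted anyway.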

AMI Labs CEO Alexandre LeBrun put it plainly: generative architectures "mimic intelligence; they don't genuinely understand the world." He argues that factories, hospitals, and robots operating in open environments demand AI that grasps reality, not just predicts text.


The Broader World Model Investment Landscape

AMI Labs is not alone. A wave of world model funding has arrived in a short window.

| Company | Funding | Focus | Key Investors |
| --- | --- | --- | --- |
| AMI Labs | $1.03B seed (Mar 2026) | JEPA-based world models for robotics, healthcare, industrial | Bezos Expeditions, Nvidia, Samsung, Toyota |
| World Labs (Fei-Fei Li) | $1B (Feb 2026) | Spatial intelligence and world models | Andreessen Horowitz, others |
| SpAItial | $13M seed | European world model startup | European VCs |
| Mind Robotics | $615M total (Mar 2026) | Factory robots using Rivian data | Accel, Andreessen Horowitz |
| Figure AI | $675M | Humanoid robots | OpenAI, Jeff Bezos, Nvidia |
| Physical Intelligence | $400M | AI control systems for robots | General Catalyst, others |

The broader humanoid robotics investment landscape reached $4.6 billion in 2025, with over $2.26 billion in global robotics funding raised in Q1 2025 alone.


NVIDIA's Cosmos: The Platform Play

While AMI Labs is pursuing fundamental research, NVIDIA is taking a platform approach to world models for physical AI. Its Cosmos platform — a family of World Foundation Models (WFMs) — was first launched at CES 2025 and has been updated rapidly since.

What NVIDIA Cosmos Offers

| Cosmos Component | What It Does |
| --- | --- |
| Cosmos Predict | Generates future video states from text, image, or video input; used to create synthetic training data for robots |
| Cosmos Transfer | Converts simulation footage into photorealistic video; used for autonomous vehicle and robot training |
| Cosmos Reason 2 | A 7B-parameter vision language model (VLM) that enables robots to reason about the physical world and understand complex instructions |
| Isaac GR00T N1.6 | Open reasoning VLA (Vision Language Action) model for humanoid robots; uses Cosmos Reason as its "thinking brain" |

As NVIDIA CEO Jensen Huang stated at GTC: "Just as large language models revolutionized generative and agentic AI, Cosmos world foundation models are a breakthrough for physical AI."

Cosmos WFMs have been downloaded over 3 million times on Hugging Face. Cosmos Reason 1 currently sits at the top of the Physical Reasoning Leaderboard on Hugging Face with over 1 million downloads.

Who Is Using NVIDIA Cosmos

| Company | Use Case |
| --- | --- |
| Agility Robotics | Scaling photorealistic training data for factory robots |
| Figure AI | Generating richer training data for humanoid robots |
| Boston Dynamics | Training Atlas humanoid robots in Isaac Lab Arena |
| LEM Surgical | Training autonomous arms for surgical robots |
| Salesforce | Analyzing robot-captured video; halved incident resolution times |
| Franka Robotics | Powering dual-arm manipulator with GR00T N models |

Other Major Players: A Competitive Map

World models are no longer just an academic idea. Tech giants and startups across the world are building their own versions.

| Company | Product | Approach |
| --- | --- | --- |
| NVIDIA | Cosmos / GR00T | Generative world foundation models for robot training and simulation |
| Google DeepMind | Gemini Robotics-ER 1.5 | Vision-language-action models integrated with Gemini |
| Alibaba | RynnBrain | Helps robots comprehend physical environments; identifies objects |
| Tesla | Optimus AI | Proprietary AI for humanoid robot; uses proprietary data pipeline |
| AMI Labs | JEPA-based world models | Abstract representation learning; research-first, not product-first |
| World Labs (Fei-Fei Li) | Spatial intelligence | Focused on spatial understanding and 3D world comprehension |
| Boston Dynamics | Atlas (with NVIDIA) | Fully electric humanoid running on Jetson Thor; deployed in Hyundai factories |

Why LLMs Fall Short for Robotics

To understand why world models are attracting a billion dollars in funding, you need to understand where LLMs fail in physical applications. LeCun has been making this argument for years, and much of the robotics industry now echoes his diagnosis.

The core problem is three-fold:

  1. Hallucination is dangerous in the real world. An LLM confidently providing a wrong answer in a chatbot is annoying. An LLM-powered robot confidently making the wrong physical action in a factory or hospital is catastrophic.

  2. LLMs cannot plan action sequences. They predict the next token. They do not model what happens if a robot arm moves 10 centimeters to the left. World models can predict the consequences of actions before executing them — a critical property called "action-conditioned prediction."

  3. LLMs sit on the wrong side of Moravec's Paradox. Moravec's Paradox is the observation that tasks easy for humans (picking up a glass, walking up stairs) are extremely hard for AI and robots, while tasks hard for humans (playing chess, solving equations) are easy for AI. World models attack the physical-intuition side of this paradox directly, by building robots that learn spatial intuition through experience rather than through text descriptions.

| Challenge | LLM Approach | World Model Approach |
| --- | --- | --- |
| Picking up an irregular object | Cannot reliably reason from text description | Learns from sensor data and physical interaction |
| Navigating an unfamiliar room | No physical grounding; relies on language patterns | Builds internal map from camera/sensor inputs |
| Predicting if an action is safe | No action-conditioned modeling | Simulates outcomes before executing |
| Avoiding catastrophic errors | Relies on RLHF guardrails | Structural advantage: does not hallucinate physics |
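The "simulates outcomes before executing" idea reduces to a simple loop: roll each candidate action through a forward model and reject any action whose predicted outcome fails a safety check. Here is a toy 1-D sketch; the forward model, the obstacle position, and the safety rule are hypothetical stand-ins for learned components.

```python
def forward_model(arm_x, action_cm):
    """Hypothetical learned dynamics: predict the arm's position (cm)
    after applying a move of action_cm."""
    return arm_x + action_cm

def is_safe(arm_x, obstacle_x=15.0):
    """A predicted state is unsafe if the arm would reach the obstacle."""
    return arm_x < obstacle_x

def plan(arm_x, candidate_actions):
    """Action-conditioned planning: simulate every candidate action first,
    keep only those whose predicted outcome is safe, then pick the
    largest safe move. Do nothing if every action is predicted unsafe."""
    safe = [a for a in candidate_actions if is_safe(forward_model(arm_x, a))]
    return max(safe) if safe else 0.0

print(plan(0.0, [5.0, 10.0, 20.0]))  # prints 10.0 -- 20 cm would hit the obstacle
```

A next-token predictor has no analogue of this loop: it cannot query "what happens if I do X" before committing to X.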

The Robotics Market: Scale and Speed

World models are arriving at exactly the right time. The humanoid and industrial robotics market is entering a period of rapid deployment.

2025–2026 Humanoid Robotics Data

| Metric | Data |
| --- | --- |
| Humanoid robots deployed globally in 2025 | ~16,000 units |
| Units deployed in China in 2025 | ~13,000 |
| Year-over-year growth in 2025 | ~500% |
| Projected global deployments by end of 2026 | Up to ~100,000 units |
| Projected market size by 2032 | Up to $1 billion |
| Total AI investment in humanoid robots in 2025 | $4.6 billion |

China is dominating current deployment numbers. Agibot Innovation (Shanghai) shipped approximately 5,200 units in 2025, while Unitree Robotics (Hangzhou) shipped around 4,200. Without a major push from Western companies, Chinese firms are expected to capture the majority of the projected 100,000 units that may be deployed in 2026.

Industrial Robotics Context

| Metric | Data |
| --- | --- |
| Cumulative installed industrial robots globally (2025) | Surpassed 5 million units |
| Annual installations (2025–2026) | ~500,000/year |
| Expected annual installations by 2030 | ~1 million/year |
| Price trend | Declining ~3.2% per year since 2018 |

The Deloitte analysis of this sector specifically identifies world models (alongside LLMs and VLA models) as one of the key drivers expected to unlock robotics growth between 2026 and 2030.


Applications Being Targeted Right Now

Where World Models Are Being Deployed in 2026

| Industry | Use Case | Key Risk Without World Models |
| --- | --- | --- |
| Manufacturing | Robots that handle irregular objects, assemble components with human-like dexterity | Brittle behaviors that fail when conditions change |
| Healthcare | Surgical assistance, diagnostic support, patient handling robots | Hallucinations with life-threatening consequences |
| Autonomous Vehicles | Training data generation, sim-to-real transfer | Dangerous edge cases not covered by real-world data |
| Aerospace | Analyzing aircraft component designs for optimization | Errors in high-stakes safety-critical systems |
| Retail/Warehousing | Robotic picking, package handling, inventory management | Inability to generalize to novel object shapes |
| Wearables | Context-aware health monitoring devices | Poor real-world sensor interpretation |

AMI Labs specifically targets manufacturers, aerospace companies, biomedical firms, and pharmaceutical groups. These are industries where errors have "significant consequences" — a phrase that captures exactly why the JEPA approach is commercially relevant.


The Honest Risks: What Could Go Wrong

World models are not a guaranteed success. Balanced analysis requires acknowledging the real risks.

| Risk | Details |
| --- | --- |
| Research-to-product gap | AMI Labs CEO acknowledges commercial products could be "several years away." The company is deliberately research-first. |
| Generalization challenge | World models have their own generalization challenges in novel environments. JEPA does not automatically solve all failure modes. |
| LLM competition | OpenAI, Google, and Anthropic are investing heavily in their own physical-world AI approaches. LLM-based robotics is advancing fast. |
| Investor expectations | A $3.5B valuation for a research lab with no product creates structural tension. Research labs at this scale typically need either a deep-pocketed patron or a clear revenue path. |
| Regulatory timeline | Healthcare applications require FDA certification — a long, uncertain process that does not match startup funding timelines. |
| China competition | Chinese companies already lead in humanoid robot deployment. AMI's Europe-based positioning may constrain access to certain markets. |

As analyst Nick Patience of Futurum noted: the AMI team quality is not in question. The structural tension between a research-first mandate and investor expectations calibrated to a billion-dollar raise is the real challenge.


Tips for Following This Space

If you want to track the world model paradigm effectively, here is a practical guide to the signals worth watching.

  1. Watch Cosmos download numbers on Hugging Face. NVIDIA tracks these publicly. Growth in downloads signals developer adoption, which leads commercial deployment.
  2. Track AMI Labs partner announcements. Their first partner is Nabla (healthcare). Each new industry partner is a signal of which sectors are ready for world model integration.
  3. Monitor China vs. West humanoid deployment numbers. CounterPoint Research publishes quarterly data. The 2026 deployment figure will be a key indicator of whether Western world model research is translating into deployed systems.
  4. Follow NVIDIA GTC announcements. Jensen Huang has consistently previewed the physical AI roadmap at GTC. Cosmos, launched at CES 2025, was expanded at the March 2025 GTC. Watch for GTC 2026 updates.
  5. Note LeCun's public writing. LeCun remains one of the most transparent AI researchers. His LinkedIn posts and public talks often signal where JEPA research is heading months before formal publications.

The Paradigm Question: Will World Models Win?

LeCun has made a clear prediction: "We are going to have AI systems that have human-like intelligence, but they're not going to be built on LLMs." He expects world models and planning to be the path forward. But he also acknowledges this will take years, not months.

The counter-argument is also credible. LLM-based approaches to robotics are advancing rapidly. Vision-Language-Action (VLA) models are already being deployed commercially. Boston Dynamics, Figure AI, and Agility Robotics are all finding real-world success with systems that integrate LLMs, not pure world models.

The most likely near-term outcome is a hybrid: robots that use world model foundations for physical reasoning and planning, combined with LLM-style language understanding for receiving and interpreting instructions. NVIDIA's GR00T N1.6 — which uses Cosmos Reason (a world model-adjacent VLM) as a reasoning layer on top of a VLA action model — already points in this direction.

The $1.03 billion flowing into AMI Labs is not a bet that LLMs will disappear. It is a bet that the next leap in physical AI — robots and machines that can operate reliably in hospitals, factories, and homes — will require something fundamentally different from predicting the next word.


Conclusion

The world model moment has arrived. In the span of a few weeks in early 2026, Yann LeCun launched AMI Labs with $1.03 billion, Fei-Fei Li's World Labs raised another $1 billion, and NVIDIA's Cosmos platform crossed 3 million downloads while being adopted by companies from Boston Dynamics to surgical robot makers.

The technical argument behind world models is sound and well-supported: LLMs lack physical grounding, cannot reliably plan action sequences, and are structurally prone to hallucinations that are merely embarrassing in a chatbot but dangerous in a robot.

Whether AMI Labs' JEPA-based approach will prove out at commercial scale within the next two to five years remains genuinely uncertain. The team is world-class. The funding is historic. The market timing — with humanoid robot deployments growing 500% year-over-year — is strong.

What is certain is that the AI industry's next major architectural debate has begun. World models versus LLMs is the new intelligence paradigm war, and it is backed by serious money, serious science, and the most credentialed figures in the field. Understanding this shift now puts you ahead of the conversation that the rest of the industry will be having for the next decade.
