The most significant funding story in AI right now has nothing to do with ChatGPT or any other chatbot. On March 10, 2026, Yann LeCun — one of the most respected names in AI research — launched AMI Labs with $1.03 billion in seed funding. That is Europe's largest seed round in history. The valuation landed at $3.5 billion before the company had a single commercial product.
The bet is on world models: a fundamentally different approach to AI that challenges the dominance of large language models (LLMs). This is not a minor technical argument. It is a direct challenge to the way the entire AI industry has been built over the last five years.
This guide explains what world models are, why they matter for robotics, who is building them, how much money is flowing in, and what the competitive landscape looks like as of March 2026.
What Is a World Model? The Simple Explanation
A large language model like ChatGPT learns by predicting the next word in a sentence. It reads billions of pages of text and gets very good at pattern-matching language. The result is a system that can write, summarize, and reason about text with impressive skill.
A world model works differently. Instead of predicting words, it builds an internal representation of how the physical world works. It learns cause and effect. It can predict what will happen next when a robot arm moves in a certain direction. It understands that objects have weight, that gravity pulls things down, and that a glass placed near a table edge will fall if nudged.
Here is the core distinction in a simple table:
| Feature | Large Language Model (LLM) | World Model |
|---|---|---|
| Primary training data | Text from the internet | Sensor data, video, physical interactions |
| Core objective | Predict the next token/word | Predict future states of the physical world |
| Strength | Coding, writing, summarization, Q&A | Robotics, planning, physical reasoning |
| Key weakness | Hallucinations; no physical grounding | Unproven at commercial scale; needs years of R&D |
| Architecture example | GPT-4, Llama, Claude | JEPA (LeCun), NVIDIA Cosmos, World Labs |
| Best suited for | Digital tasks | Real-world physical tasks |
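The difference in training objective can be made concrete with a toy sketch. This is purely illustrative (no real architecture is implied): `next_token` stands in for an LLM's objective of mapping a token sequence to a likely continuation, while `next_state` stands in for a world model's objective of mapping a physical state and an action to a predicted next state.

```python
# Toy contrast between the two objectives. Both functions are hand-coded
# stand-ins, not learned models.

def next_token(context):
    """LLM-style objective: predict the next token from the preceding ones."""
    bigrams = {("the", "robot"): "moves", ("robot", "moves"): "left"}
    return bigrams.get(tuple(context[-2:]), "<unk>")

def next_state(state, action, dt=0.1, g=9.8):
    """World-model-style objective: predict the next physical state.
    Here: a 1-D object with (height, velocity), nudged by a push force."""
    height, velocity = state
    velocity += action - g * dt            # push minus gravity over one step
    height = max(0.0, height + velocity * dt)
    return (height, velocity)

print(next_token(["the", "robot"]))        # prints "moves"
state = (1.0, 0.0)                         # glass at table height, at rest
for _ in range(3):
    state = next_state(state, action=0.0)  # no push: gravity takes over
print(round(state[0], 2))                  # prints 0.41 — the glass is falling
```

The point of the contrast: the first function only ever manipulates symbols, while the second encodes cause and effect — given an action, it predicts a consequence.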
LeCun's argument, which he has made publicly for years, is that LLMs will never achieve true general intelligence because they have no model of physical reality. They cannot predict consequences of actions. They cannot plan. And this, he argues, is exactly why we still do not have a domestic robot that can clean a house as well as a human child can.
The $1.03B Bet: AMI Labs and the JEPA Architecture
Who Founded It
AMI Labs — short for Advanced Machine Intelligence Labs, pronounced "ah-mee" after the French word for "friend" — was officially launched on March 10, 2026. Yann LeCun co-founded it alongside Alexandre LeBrun, a serial French AI entrepreneur who previously founded VirtuOz (acquired by Nuance, now part of Microsoft) and Wit.ai (a Y Combinator startup acquired by Meta).
LeCun left Meta in November 2025, shortly after the company shifted focus toward catching up with generative AI competitors. He had spent over a decade leading Meta's Fundamental AI Research (FAIR) group.
The Leadership Team
| Name | Role at AMI Labs | Previous Role |
|---|---|---|
| Alexandre LeBrun | CEO | CEO of Nabla; founder of Wit.ai (acquired by Meta) |
| Laurent Solly | COO | Meta's VP for Europe |
| Saining Xie | Chief Science Officer | Researcher at Google DeepMind and Meta |
| Pascale Fung | Chief Research & Innovation Officer | Senior Director at Meta FAIR |
| Michael Rabbat | VP of World Models | Director of Research Science at Meta |
This is a significant concentration of AI talent. LeCun effectively rebuilt the core of Meta's FAIR team under the AMI banner.
The Investors
The seed round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other backers include Nvidia, Samsung, Toyota Ventures, Sea, and Temasek. Angel investors include former Google CEO Eric Schmidt, Mark Cuban, and World Wide Web inventor Tim Berners-Lee.
The Technology: JEPA
AMI Labs is building world models using a framework called JEPA — Joint Embedding Predictive Architecture. LeCun proposed this in 2022. It is the technical core of what makes world models different from LLMs.
| Concept | How LLMs Do It | How JEPA Does It |
|---|---|---|
| Learning objective | Predict raw output (next token) | Predict abstract representations in latent space |
| Data handling | Processes all tokens equally | Focuses on meaningful patterns; ignores noise |
| Physical reasoning | Cannot reason about physics directly | Can model physical dynamics |
| Action prediction | Not designed for this | Action-conditioned: predicts what happens if you do X |
| Hallucination risk | High for physical/factual claims | Structurally reduced for physical tasks |
In plain terms: JEPA does not try to reconstruct every pixel of a video. Instead, it learns the abstract rules of how things change — what is important about how the world evolves — and ignores the unpredictable noise. This makes it far better suited for robotics, where a robot needs to plan actions based on how the environment will respond.
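The idea of predicting in representation space rather than pixel space can be sketched in a few lines. Everything below is an illustrative assumption, not JEPA itself: the "encoder" is a simple average that discards per-pixel noise, and the latent predictor is a hand-coded rule rather than a trained network.

```python
import random

# Hedged toy sketch of the JEPA intuition: predict abstract representations,
# not raw observations. All function names are illustrative stand-ins.

random.seed(0)  # make the noisy observations reproducible

def observe(true_pos):
    """A noisy 'image': the object's position plus unpredictable per-pixel noise."""
    return [true_pos + random.gauss(0, 0.1) for _ in range(8)]

def encode(obs):
    """Encoder: compress the observation into an abstract representation.
    Averaging keeps the object's position and washes out the noise."""
    return sum(obs) / len(obs)

def predict_latent(z, action):
    """Latent predictor: next representation, given the current one and an action."""
    return z + action

# Pixel-level prediction would have to reproduce every noisy 'pixel' — an
# impossible target. Latent prediction only has to track the abstract state.
z = encode(observe(true_pos=0.0))
z_next_pred = predict_latent(z, action=1.0)   # predicted effect of moving +1
z_next_true = encode(observe(true_pos=1.0))   # what actually happens
print(abs(z_next_pred - z_next_true) < 1.0)   # True: the latents agree closely
```

The design choice this illustrates is the one LeCun emphasizes: by predicting in a compressed space, the model is never penalized for failing to reproduce noise it could never have predicted.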
AMI Labs CEO Alexandre LeBrun put it plainly: generative architectures "mimic intelligence; they don't genuinely understand the world." He argues that factories, hospitals, and robots operating in open environments demand AI that grasps reality, not just predicts text.
The Broader World Model Investment Landscape
AMI Labs is not alone. A wave of world model funding has arrived in a short window.
| Company | Funding | Focus | Key Investors |
|---|---|---|---|
| AMI Labs | $1.03B seed (Mar 2026) | JEPA-based world models for robotics, healthcare, industrial | Bezos Expeditions, Nvidia, Samsung, Toyota |
| World Labs (Fei-Fei Li) | $1B (Feb 2026) | Spatial intelligence and world models | Andreessen Horowitz, others |
| SpAItial | $13M seed | European world model startup | European VCs |
| Mind Robotics | $615M total (Mar 2026) | Factory robots using Rivian data | Accel, Andreessen Horowitz |
| Figure AI | $675M | Humanoid robots | OpenAI, Jeff Bezos, Nvidia |
| Physical Intelligence | $400M | AI control systems for robots | General Catalyst, others |
Humanoid robotics investment reached $4.6 billion in 2025, with over $2.26 billion raised across global robotics funding in Q1 2025 alone.
NVIDIA's Cosmos: The Platform Play
While AMI Labs is pursuing fundamental research, NVIDIA is taking a platform approach to world models for physical AI. Its Cosmos platform — a family of World Foundation Models (WFMs) — was first launched at CES 2025 and has been updated rapidly since.
What NVIDIA Cosmos Offers
| Cosmos Component | What It Does |
|---|---|
| Cosmos Predict | Generates future video states from text, image, or video input; used to create synthetic training data for robots |
| Cosmos Transfer | Converts simulation footage into photorealistic video; used for autonomous vehicle and robot training |
| Cosmos Reason 2 | A 7B-parameter vision language model (VLM) that enables robots to reason about the physical world and understand complex instructions |
| Isaac GR00T N1.6 | Open reasoning VLA (Vision Language Action) model for humanoid robots; uses Cosmos Reason as its "thinking brain" |
As NVIDIA CEO Jensen Huang stated at GTC: "Just as large language models revolutionized generative and agentic AI, Cosmos world foundation models are a breakthrough for physical AI."
Cosmos WFMs have been downloaded over 3 million times on Hugging Face. Cosmos Reason 1 currently sits at the top of the Physical Reasoning Leaderboard on Hugging Face with over 1 million downloads.
Who Is Using NVIDIA Cosmos
| Company | Use Case |
|---|---|
| Agility Robotics | Scaling photorealistic training data for factory robots |
| Figure AI | Generating richer training data for humanoid robots |
| Boston Dynamics | Training Atlas humanoid robots in Isaac Lab Arena |
| LEM Surgical | Training autonomous arms for surgical robots |
| Salesforce | Analyzing robot-captured video; halved incident resolution times |
| Franka Robotics | Powering dual-arm manipulator with GR00T N models |
Other Major Players: A Competitive Map
World models are no longer just an academic idea. Tech giants and startups across the world are building their own versions.
| Company | Product | Approach |
|---|---|---|
| NVIDIA | Cosmos / GR00T | Generative world foundation models for robot training and simulation |
| Google DeepMind | Gemini Robotics-ER 1.5 | Vision-language-action models integrated with Gemini |
| Alibaba | RynnBrain | Helps robots comprehend physical environments; identifies objects |
| Tesla | Optimus AI | Proprietary AI for humanoid robot; uses proprietary data pipeline |
| AMI Labs | JEPA-based world models | Abstract representation learning; research-first, not product-first |
| World Labs (Fei-Fei Li) | Spatial intelligence | Focused on spatial understanding and 3D world comprehension |
| Boston Dynamics | Atlas (with NVIDIA) | Fully electric humanoid running on Jetson Thor; deployed in Hyundai factories |
Why LLMs Fall Short for Robotics
To understand why world models are attracting a billion dollars in funding, you need to see where LLMs fail in physical applications. LeCun has been making this argument for years, and much of the robotics industry now shares his diagnosis.
The core problem is threefold:

- **Hallucination is dangerous in the real world.** An LLM confidently providing a wrong answer in a chatbot is annoying. An LLM-powered robot confidently making the wrong physical move in a factory or hospital is catastrophic.
- **LLMs cannot plan action sequences.** They predict the next token. They do not model what happens if a robot arm moves 10 centimeters to the left. World models can predict the consequences of actions before executing them, a critical property called "action-conditioned prediction."
- **LLMs sit on the wrong side of Moravec's Paradox.** Moravec's Paradox is the observation that tasks easy for humans (picking up a glass, walking up stairs) are extremely hard for AI and robots, while tasks hard for humans (playing chess, solving equations) are easy for AI. World models attack the physical-intuition side of this paradox directly, by building systems that learn spatial intuition through experience rather than through text description.
| Challenge | LLM Approach | World Model Approach |
|---|---|---|
| Picking up an irregular object | Cannot reliably reason from text description | Learns from sensor data and physical interaction |
| Navigating an unfamiliar room | No physical grounding; relies on language patterns | Builds internal map from camera/sensor inputs |
| Predicting if an action is safe | No action-conditioned modeling | Simulates outcomes before executing |
| Avoiding catastrophic errors | Relies on RLHF guardrails | Structural advantage: does not hallucinate physics |
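The "simulates outcomes before executing" row describes model-based planning, and the loop is simple enough to sketch. This is an assumption-laden toy, not any real robot stack: the dynamics model, the safety rule, and the goal are all invented for illustration.

```python
# Hedged sketch of action-conditioned planning: the robot "imagines" each
# candidate action inside its world model and rejects unsafe outcomes before
# moving. Dynamics, safety check, and goal are illustrative stand-ins.

TABLE_EDGE = 1.0   # arm positions beyond this knock the glass off (toy rule)

def world_model(arm_pos, action):
    """Predicted next arm position for a commanded displacement (meters)."""
    return arm_pos + action

def is_safe(predicted_pos):
    """Safety check applied to the *predicted* outcome, before anything moves."""
    return 0.0 <= predicted_pos <= TABLE_EDGE

def plan(arm_pos, candidates, goal=0.9):
    """Simulate every candidate action; keep only the safe ones, then pick
    the action whose predicted outcome lands closest to the goal."""
    safe = [a for a in candidates if is_safe(world_model(arm_pos, a))]
    return min(safe, key=lambda a: abs(world_model(arm_pos, a) - goal))

best = plan(arm_pos=0.5, candidates=[-0.6, 0.1, 0.3, 0.7])
print(best)   # prints 0.3: closest to the goal without crossing the edge
```

An LLM has no analogue of this loop: there is no `world_model` to query, so there is no way to veto an action on the basis of its predicted physical consequence.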
The Robotics Market: Scale and Speed
World models are arriving at exactly the right time. The humanoid and industrial robotics market is entering a period of rapid deployment.
2025–2026 Humanoid Robotics Data
| Metric | Data |
|---|---|
| Humanoid robots deployed globally in 2025 | ~16,000 units |
| Units deployed in China in 2025 | ~13,000 |
| Year-over-year growth in 2025 | ~500% |
| Projected global deployments by end of 2026 | Up to ~100,000 units |
| Projected market size by 2032 | Up to $1 billion |
| Total AI investment in humanoid robots in 2025 | $4.6 billion |
China is dominating current deployment numbers. Agibot Innovation (Shanghai) shipped approximately 5,200 units in 2025, while Unitree Robotics (Hangzhou) shipped around 4,200. Without a major push from Western companies, Chinese firms are expected to capture the majority of the projected 100,000 units that may be deployed in 2026.
Industrial Robotics Context
| Metric | Data |
|---|---|
| Cumulative installed industrial robots globally (2025) | Surpassed 5 million units |
| Annual installations (2025–2026) | ~500,000/year |
| Expected annual installations by 2030 | ~1 million/year |
| Price trend | Declining ~3.2% per year since 2018 |
The Deloitte analysis of this sector specifically identifies world models (alongside LLMs and VLA models) as one of the key drivers expected to unlock robotics growth between 2026 and 2030.
Applications Being Targeted Right Now
Where World Models Are Being Deployed in 2026
| Industry | Use Case | Key Risk Without World Models |
|---|---|---|
| Manufacturing | Robots that handle irregular objects, assemble components with human-like dexterity | Brittle behaviors that fail when conditions change |
| Healthcare | Surgical assistance, diagnostic support, patient handling robots | Hallucinations with life-threatening consequences |
| Autonomous Vehicles | Training data generation, sim-to-real transfer | Dangerous edge cases not covered by real-world data |
| Aerospace | Analyzing aircraft component designs for optimization | Errors in high-stakes safety-critical systems |
| Retail/Warehousing | Robotic picking, package handling, inventory management | Inability to generalize to novel object shapes |
| Wearables | Context-aware health monitoring devices | Poor real-world sensor interpretation |
AMI Labs specifically targets manufacturers, aerospace companies, biomedical firms, and pharmaceutical groups. These are industries where errors have "significant consequences" — a phrase that captures exactly why the JEPA approach is commercially relevant.
The Honest Risks: What Could Go Wrong
World models are not a guaranteed success. Balanced analysis requires acknowledging the real risks.
| Risk | Details |
|---|---|
| Research-to-product gap | AMI Labs CEO acknowledges commercial products could be "several years away." The company is deliberately research-first. |
| Generalization challenge | World models have their own generalization challenges in novel environments. JEPA does not automatically solve all failure modes. |
| LLM competition | OpenAI, Google, and Anthropic are investing heavily in their own physical-world AI approaches. LLM-based robotics is advancing fast. |
| Investor expectations | A $3.5B valuation for a research lab with no product creates structural tension. Research labs at this scale typically need either a deep-pocketed patron or a clear revenue path. |
| Regulatory timeline | Healthcare applications require FDA certification — a long, uncertain process that does not match startup funding timelines. |
| China competition | Chinese companies already lead in humanoid robot deployment. AMI's Europe-based positioning may constrain access to certain markets. |
As Futurum analyst Nick Patience noted, the quality of the AMI team is not in question. The structural tension between a research-first mandate and investor expectations calibrated to a billion-dollar raise is the real challenge.
Tips for Following This Space
If you want to track the world model paradigm effectively, here is a practical guide to the signals worth watching.
- Watch Cosmos download numbers on Hugging Face. NVIDIA tracks these publicly. Growth in downloads signals developer adoption, which leads commercial deployment.
- Track AMI Labs partner announcements. Their first partner is Nabla (healthcare). Each new industry partner is a signal of which sectors are ready for world model integration.
- Monitor China vs. West humanoid deployment numbers. CounterPoint Research publishes quarterly data. The 2026 deployment figure will be a key indicator of whether Western world model research is translating into deployed systems.
- Follow NVIDIA GTC announcements. Jensen Huang has consistently previewed the physical AI roadmap at GTC. The March 2025 GTC introduced Cosmos. Watch for GTC 2026 updates.
- Note LeCun's public writing. LeCun remains one of the most transparent AI researchers. His LinkedIn posts and public talks often signal where JEPA research is heading months before formal publications.
The Paradigm Question: Will World Models Win?
LeCun has made a clear prediction: "We are going to have AI systems that have human-like intelligence, but they're not going to be built on LLMs." He expects world models and planning to be the path forward. But he also acknowledges this will take years, not months.
The counter-argument is also credible. LLM-based approaches to robotics are advancing rapidly. Vision-Language-Action (VLA) models are already being deployed commercially. Boston Dynamics, Figure AI, and Agility Robotics are all finding real-world success with systems that integrate LLMs, not pure world models.
The most likely near-term outcome is a hybrid: robots that use world model foundations for physical reasoning and planning, combined with LLM-style language understanding for receiving and interpreting instructions. NVIDIA's GR00T N1.6 — which uses Cosmos Reason (a world model-adjacent VLM) as a reasoning layer on top of a VLA action model — already points in this direction.
The $1.03 billion flowing into AMI Labs is not a bet that LLMs will disappear. It is a bet that the next leap in physical AI — robots and machines that can operate reliably in hospitals, factories, and homes — will require something fundamentally different from predicting the next word.
Conclusion
The world model moment has arrived. In the span of a few weeks in early 2026, Yann LeCun launched AMI Labs with $1.03 billion, Fei-Fei Li's World Labs raised another $1 billion, and NVIDIA's Cosmos platform crossed 3 million downloads while being adopted by companies from Boston Dynamics to surgical robot makers.
The technical argument behind world models is sound and well-supported: LLMs lack physical grounding, cannot reliably plan action sequences, and are structurally prone to hallucinations that are merely embarrassing in a chatbot but dangerous in a robot.
Whether AMI Labs' JEPA-based approach will prove out at commercial scale within the next two to five years remains genuinely uncertain. The team is world-class. The funding is historic. The market timing — with humanoid robot deployments growing 500% year-over-year — is strong.
What is certain is that the AI industry's next major architectural debate has begun. World models versus LLMs is the new intelligence paradigm war, and it is backed by serious money, serious science, and the most credentialed figures in the field. Understanding this shift now puts you ahead of the conversation that the rest of the industry will be having for the next decade.